Monday, 31 January 2011

Bankers’ bonuses: dubious use more than dubious ethics

Bankers’ bonuses continue to receive ample critique, being qualified as immoral and unethical. However, that the bonus system suffers this critique seems largely the result of awkward semantics. “Bonus” suggests a reward for good performance, which seems at odds with how banks have performed over the past few years.

However, in reality, these bonuses do not really serve as rewards for good performance. Instead, they are just a form of “flexible pay”. That is, most bankers’ salaries consist of a fixed and a flexible component. The flexible component decreases when the firm performs poorly, as most banks have over the past few years.

If banks just renamed their “bonuses” as something like “flexible pay”, that might not only be closer to the reality of things but also shield them from much of the vigorous critique of outraged outsiders. Some might even be inclined to applaud them for taking such a heavy pay cut in these meagre times. I do not see many other job categories agreeing to pay cuts as heavy as the ones bankers went through recently; imagine proposing a 40 percent pay cut to Tube drivers or BA crews – bankers accepted it without much grumbling.

However, this does not mean that the flexible pay system that banks have grown to use (i.e. their bonus system) is a smart way of rewarding people. It is not; it is outright clumsy. Flexible pay is intended to motivate people, make them emotionally tied to and proud of their firms, and prevent them from leaving. Bonuses do no such thing. People just wait for their bonus to arrive and then leave; bonuses have no influence in terms of retention. Things get even more awkward (or perhaps ridiculous) when banks introduce “guaranteed bonuses”; how on earth are those supposed to motivate anyone?

Research on employee remuneration does not look good for bonuses either. Bonuses tend to increase wage differences between people within the same organisation and team. Yet research by Professor Matt Bloom from the University of Notre Dame showed consistent evidence that wage differences actually demotivate people and make a team perform worse; they even demotivate the team member with the highest relative pay. In comparison, teams with much more homogeneous pay tend to perform much better. In addition, research by Professor Dan Ariely from Duke University and colleagues provided evidence that individuals are also negatively affected by extremely high pay: the same people performed better when their remuneration was within more normal bounds.

Thus, rather than being immoral and unethical, the bonus system is simply rather crude and clumsy; it does not get the job done. That is not unusual for banks, which generally seem to be run by people who are excellent bankers but very poor at managing stuff. Common sense would suggest that they rethink the bonus system’s name (and change it into something like flexible pay). Scientific evidence would suggest that they rethink the set-up of the entire system altogether.

Thursday, 27 January 2011

Business schools suffer from a dangerous lack of evidence based teaching

There is a great divide in business schools, and one that not many outsiders are aware of. It is the divide between research and teaching. There is very little relation between them. What is being taught in management books and classrooms is usually not based on rigorous research. Vice versa, the research that gets published in prestigious academic journals seldom finds its way into the MBA classroom.

The consequences of this divide are grave. First of all for research: because this research is not really intended to be used in the classroom, or to be communicated to managers in some other form, it is not suited to serve that purpose. The ultimate goal is publication in a prestigious academic journal, but that does not make it useful, or even guarantee that the research findings provide much insight into the workings of business reality.

It is not a new problem. In 1994, the then president of the leading association of business academics called the Academy of Management, Professor Don Hambrick, noted “We read each others’ papers in our journals and write our own papers so that we may, in turn, have an audience . . . an incestuous, closed loop”. Management research is not required to be relevant. Consequently much of it is not.

But business education clearly also suffers. What is being taught in management courses is usually not based on solid scientific evidence. Instead, it concerns generalizations of individual business cases or the lessons from popular management books. Such books are often based on a simple yet appealing formula: look at a number of successful companies, see what they have in common, and then conclude that other companies should strive to do the same thing. However, how do you know that the advice provided is fair and reasonable, or whether it comes from tomorrow’s Enrons, Lehmans, and Worldcoms? How do you know that today’s advice and cases will not soon be heralded as the epitome of mismanagement?

How could rigorous – and relevant – management research help? Let me give an example. In the 1990s, ISO9000 (a quality management system) spread through many industries. However, academic research by Professors Mary Benner and Mike Tushman showed that its adoption could actually lead to a decrease in innovation after a few years (because ISO9000 does not allow for deviations from a set standard, which innovation requires), making the adopter worse off. This research was overlooked by practitioners: many business schools continued to applaud the benefits of ISO9000 in their courses, while firms continued – and still continue – to implement the practice, ignorant of its potential pitfalls. Yet this piece of research offers a clear example of the possible benefits of scientific research methods: rigorous research that reveals unintended consequences and exposes the true nature of a business practice. However, such research with important practical implications is unfortunately the exception rather than the rule. That even relevant research is largely ignored in business education – as happened to the findings of Benner and Tushman – is unfortunately the rule, not the exception.

Of course one should not make the mistake of thinking that business cases and business books based on personal observation and opinion are without value. They potentially offer a great source of practical experience. Similarly, it would be naïve to assume that scientific research can provide custom-made answers. Just as medical research provides general insights that a skilled doctor needs to combine into a unique treatment plan for an individual patient, rigorous management research could and should provide the basis for skilled managers to make better decisions. However, managers cannot do that without in-depth knowledge of their specific organization and circumstances.

Yet, at present, business schools largely fail to provide rigorous, evidence-based teaching. Instead, the near-perfect separation between research and teaching causes their courses to rely largely on dangerously simplified generalizations of popular insights, at a time when the corporate pitfalls that rocked our economies over the recent past epitomize a clear need for sound management rather than popular fads.

Tuesday, 18 January 2011

The BP oil rig disaster – better brace yourself: there is surely more to come

Last week’s report of the Presidential Commission examining the oil rig disaster at the Macondo well in the Gulf of Mexico draws a sharp and clear conclusion about its cause and who is to blame: the systemic failure of management at BP and its partners and subcontractors Transocean and Halliburton. At the end, it also places a share of the blame on the US government, which provided inadequate regulation and resources.

The report is to be applauded for its clarity and thoroughness, and for recognising the complex and systemic nature of the cause. However, what it fails to recognise is that the structural failure of management is embedded in an even wider context, namely how in our society we run our economies and corporations. Given this wider economic context, it is inevitable that similar disasters – of similar apocalyptic proportions – will happen in the future.

Strikingly, when reading the report, the parallels between this debacle and other corporate disasters of the recent and more distant past are stunningly clear. Many of the descriptions of how the oil rig disaster unfolded, as well as the report’s conclusions, could word for word have been taken from reports on the Union Carbide gas disaster in Bhopal in 1984. Swap some names, dates, and a few technicalities, and the various reports’ descriptions of the lack of a top-down safety culture, design errors, breakdown of communication, the influence of cost-cutting, and so on are near identical. And that tells us something; if only that this is unlikely to be the last disaster of its kind that we are going to witness.


In fairness, the committee has done well to resist the common temptation, when looking at things superficially, to name and blame a particular party, or even a particular person – a temptation the Obama government clearly could not avoid in the weeks following the disaster, explicitly and exclusively heaping blame on BP, and its CEO Tony Hayward in particular. The same happened to Warren Anderson, Union Carbide’s hapless CEO in the 1980s, whose extradition on manslaughter charges is still being sought by the Indian government.

And I am sure these companies are to blame, and their CEOs do carry responsibility for the disaster, but to name and shame them as the sole cause of the misfortune seems a dangerous oversimplification.

Professors Gabriel Szulanski from INSEAD and Sid Winter from the Wharton School, who examined corporate disasters, wrote about this: “When people try to explain a disaster after the fact (an accident in a nuclear plant, for example), they are typically under pressure to name a relatively simple cause so that existing policies can be revised to prevent similar events in the future”. We are eager to find a culprit, someone to blame, and the CEO of the offending company is the most logical and easiest target for our tar and feathers.

But, as the Presidential Committee rightly concludes, “the root causes are systemic”, representing an “overall failure of management”, rather than the actions of a particular individual or even a particular firm. When you analyse the lack of communication systems and safety culture, the inadequate decision-making processes, and so on, a disaster – somewhere, at some point – seemed inevitable: the proverbial accident waiting to happen.

The anthropologists Anthony Oliver-Smith and Susanna Hoffman, who examined a variety of man-made disasters, concluded about this: “a disaster becomes unavoidable in the context of a historically produced pattern of vulnerability”. And that is what we saw at BP: a pattern that produced a situation that at some point was going to go off the rails. Hence, the committee is certainly right that “the missteps were rooted in systemic failures by industry management (extending beyond BP)”.


Where the report falls short, however – probably because it exceeds the scope and remit of the committee – is in recognising that the way BP, Transocean and Halliburton are managed is the logical consequence of how the world of business operates and is organised in our society. The report, for example, concludes, with ample surprise and indignation, that safety was not the firms’ top priority. Well, of course it was not, I’d say, because in today’s society we tell our companies that their top priority is shareholder value.

Now, certainly this disaster did not do the shareholders of BP much good, but the point is that, in financial terms, there is an optimal risk-return trade-off to be made. And all BP did, and has been doing, is optimise that trade-off for its shareholders – precisely as we expect it to.

Whenever I ask a group of executives to whom a company is ultimately responsible, they proclaim in chorus “shareholders” – some of them even get annoyed, if not angry, at the very questioning of that assumption. Because that is what they are supposed to do: maximise the value of the corporation for its owners. And, as said, that implies making risk-return trade-offs. The tricky thing is, of course, that such trade-offs inevitably, at some point somewhere down the line, lead to something going seriously off the rails.

In fact, the way we remunerate top managers – including Tony Hayward – is largely through stock options. The only reason to use stock options so abundantly (and not, for instance, stock) is that they stimulate top managers to take more risk. And the world of business, and our stock markets in particular, are organised in such a way because we believe that that is what we want: top managers who take risks. We applaud them when it goes well, but we vilify them when it goes badly wrong, although that is simply the other, inevitable side of the same risky coin.

Of course, oil disasters are the type of risk we would like them to avoid, but governance mechanisms such as stock options simply stimulate risk taking and do not discriminate between different types of risk. Research by Professors Gerry Sanders from Rice University and Don Hambrick from Penn State confirmed that CEOs with more stock options take more risks, but also experience bigger losses. Furthermore, research by Professor Xiaomeng Zhang and colleagues from American University in Washington showed that option-loaded CEOs are more likely to engage in earnings manipulation. Clearly these are not the risks we want CEOs to take, but they are the logical consequence of the way we remunerate them. We ask and reward them for taking risks, so they do.


‘But I did not ask them to take more risk’, you might think. But yes, you probably did. Perhaps not directly, but indirectly; very likely. Individual investors select shares with the highest return and track record, consumers select the bank with the best rates, your pension fund invests your savings in companies with the best risk-return trade-off, and so on. By selecting the best returns, we stimulate those companies to optimise their own risk-return balance. As individuals, we just look at the financial results, and seldom query how they came about.

But at least we are in good company; following the recent banking crisis, even the Church of England was found to have invested in the very financial instruments it so heavily criticised after the collapse of the financial system. However, you just cannot have your cake and eat it too. If we design a system in which firms are expected to maximise shareholder value and CEOs are stimulated to take risks, some of the investments are going to go wrong. And both Union Carbide in Bhopal and BP in the Gulf of Mexico were clearly investments that went wrong.

So the White House committee was right: the Deepwater Horizon rig disaster was caused by a systemic failure of management, but the system surpasses that of the three companies involved. As a matter of fact, whether you analyse the cases of Enron, the old Barings Bank, Lehman or the Royal Bank of Scotland, similar conclusions would be drawn. All these firms and managers were conditioned by the economic context in which they operated. And since the Presidential Commission is unlikely to change that very context, we will be facing more corporate disasters of the same kind at some point in the future.

Thursday, 13 January 2011

‘Stretch goals’ tend to stretch all the way into fraud

Goal setting works. Give your employees concrete goals and they will work harder to reach them than if you just tell them to “do the best you can”. There are ample studies confirming that relationship. Professors Edwin Locke and Gary Latham, from the University of Maryland and the University of Toronto, even called goal-setting “the most effective managerial tool available”.

So, I am not arguing with the effectiveness of goal-setting, but I would say it should come with a health warning. And that is because it also induces some more dubious behavior.

Let me explain. “Stretch goals” is one of those terms that have persistently entered the managerial vocabulary, earning a prominent and enduring place in consultant speak. The idea is that you set your people goals that they might just reach if they try really, really hard – or just not. Hence, they stretch your people’s effort to the limit.

However, there is a little catch. We also know from research that if people almost reach such a goal (but, crucially, not entirely…) they tend to simply make it up.

Cooking the books

Really – quite literally. If they almost reached the stretch goal but not entirely, they will be inclined to pretend they did, and cook the books. Professors Maurice Schweitzer from the University of Pennsylvania, Lisa Ordonez from the University of Arizona, and Bambi Douma from the University of Montana designed a clever experiment. They gave seven random letters to 154 participants and asked them to create as many words as they could within a minute, writing them down in a workbook. If they came up with 9 words or more, they would get a monetary reward. They did this 7 times. The participants had to record for themselves how many words they got each round and, at the end of the experiment, hand in a notebook of their achievements and take the corresponding amount of money out of an envelope. The experiment was guaranteed to be entirely anonymous.

But, unknown to the participants, there was a trick… Although the experiment was indeed anonymous, the researchers could match the workbooks to the notebooks and the envelopes with the remaining money. Thus, they could see how many people cheated (although they were not able to identify them by name). And the answer was pretty clear and precise.

Whenever people got 5 or 6 words or so – pretty far off the mark of 9 – they reported it honestly and left the money in the envelope. However, things were markedly different when people had reached 8: just one word short of the mark. In that case, a significant number of people cheated: they reported that they had come up with 9 words and took the money out of the envelope, although their workbooks clearly showed they had missed the mark by 1 word. They cooked the books and deceptively misappropriated the corresponding monetary reward.

It reminded me of organizations like Enron, Ahold, or Worldcom, where management habitually set their people ambitious goals. These firms may have gone down due to fraud, but the fraud was induced by the organizational context they created, which stimulated their people to cheat. Stretch goals may stretch employees’ effort to the limit, but they also stretch employees’ sense of ethics.


The interesting question is also: whom were they cheating? When people reached only 5 or 6 they could just as easily have cooked the books and taken the money as when they had reached 8. But they didn’t. As a matter of fact, there was no need to fraudulently fill out the notebook at all; they could just have taken the money out of the envelope and left. But only one out of the 154 participants took more money out of the envelope than reported in the notebook (and even that could have been an honest mistake). So why did people only commit fraud when they fell just short of the goal, and not when they missed it by a mile? And why did they insist on writing the false information in the books, when they could have just taken the money and run?

It has to do with self-justification. When we have been set a stretch goal that we just did not reach, we seem inclined to tell ourselves that we sort of did, and really do deserve the reward. We cannot tell ourselves that when we are underperforming badly and are very far off the mark. We also cannot justify to ourselves just grabbing all of the money. But when we almost reached the stretch goal (but not entirely…) we humans are perfectly able to tell ourselves a nice little story that permits us to take the loot after all. And the people at Enron, Ahold, and Worldcom were just as human as the rest of us.

Thursday, 6 January 2011

Rethinking employee remuneration: or why it is difficult to find a taxi when it’s raining

How to set up a remuneration system that gets the best out of your employees continues to be a tricky – and sometimes controversial – topic. Whether it concerns labourers or top managers, it seems difficult to get it right. Individual incentives, team incentives, tying bonuses to firm-wide performance, quantitative metrics, qualitative metrics, stock or options; all of them can potentially stimulate desired performance but could also trigger all sorts of unintended and undesired behaviours.

What does not help is that management theory about remuneration seems often to be based on a set of completely erroneous assumptions about human behaviour. And a theory with the wrong foundations can hardly make helpful recommendations.

Basic assumption: they will work more if I pay them more

Consider, for example, what economists call the wage elasticity of labour supply: the relationship between pay and how much people are willing to work. Simply put, we tend to assume it is positive: if people make a lot of dosh per hour, we expect them to want to put in more hours. If we reduce the hourly wage, they will be less inclined to work many hours and will instead prefer leisure time: catching a movie, going fishing, or crashing out on the sofa with a beer.

Of course, many people in regular jobs have to work a fixed number of contractual hours (e.g. 9 to 5), so they don’t really face this choice. Similarly, people who are largely judged on their output, regardless of how many hours they put in (e.g. top managers, violinists, professors), don’t get paid “overtime”, so they do not face this choice either. However, in some professions people do. More importantly, this assumption – of a positive elasticity between the hourly wage and how many hours someone seeks to work – is the basis of much of the theory of remuneration, and therefore also influences how 9-to-5 workers are paid, and how top managers and violinists are remunerated.

The slight problem is, the assumption appears to be wrong…

It is actually quite likely that, in reality (which is hardly the same thing as economic theory), this elasticity is negative: meaning, if they get paid a higher hourly wage, people start to work fewer hours…

No drivin’ in the rain: NYC taxi drivers

A classic study on this topic was conducted by Professor Colin Camerer and colleagues from the California Institute of Technology. They examined New York City taxi drivers – who basically have to charge a fixed price per mile driven but can decide for themselves how many hours per day they drive – and measured the relation between how much money the drivers were making per hour and how many hours they were inclined to work on a given day. And basically what they found was that it is harder to find a taxi when it is raining.

“Eh…?” you might think, and probably something like “of course it is harder to find a taxi when it is raining, because then they are all occupied”. That might be partially true – and it is admittedly what I used to think while sheltering under an umbrella on a London street corner, waiting (in vain) for a free taxi – but there is more going on. And that says something about the relation between remuneration and employee effort.

When it is raining, taxi drivers make more money, at least per hour. Because so many people want a taxi when it is pouring down – likewise when there is a Tube strike or a big convention going on – cabbies don’t have to wait long for a new customer or drive around empty hoping someone will flag them down; there is such an abundance of potential customers that they’re hardly ever empty. Consequently, they make substantially more per hour when it is raining than on a sunny day.

Economic theory would now predict that taxi drivers work longer days when it is raining, because then they make more money per hour. Vice versa, we’d expect them to go home early when it is sunny (to lie in the park, play with their kids, or take up knitting – or whatever excites taxi drivers). That sounds quite logical, right? Right… The only problem is that Colin’s research showed the exact opposite to be true. On a sunny day, when taxi drivers are not making much money per hour, they continue driving and work long days. When it is raining, and taxi drivers enjoy a high hourly wage, they tend to call it a day early… They were, en masse, doing the exact opposite of what economic theory predicts. And that is a bit of a bummer for our whole remuneration system and theory, because apparently it is built on shaky grounds.

But why…?

The question remains: why would they do that? Colin and colleagues speculated – based on a series of interviews with NYC taxi drivers – that people simply apply a different rule in their professional lives. They basically tell themselves, at the beginning of the day, that they have to make a certain amount of money before they are allowed to go home. And the taxi drivers continued driving till they had reached that amount. Some days, they (told themselves they) were lucky because it started raining, and they allowed themselves to go home early. Other days – bummer – the sun stayed out and they had to drive longer to reach their target for the day. The vast majority of taxi drivers they interviewed voiced this logic; only one taxi driver said “drive a lot when doing well, quit early on a bad day” (the economists’ prediction), and I guess that’s simply because you always need one exception to confirm the rule.

What does this say about human behaviour? In economic terms this behaviour (“drive till I have reached a certain amount of money”) is plainly irrational. As a matter of fact, Colin and colleagues computed that merely by adopting the simple alternative rule “drive a fixed number of hours every day”, taxi drivers could already enhance their income by 50-78 percent. They could increase it by 156 percent if they would just drive more hours when it is raining and fewer when it is sunny (which, incidentally, would also enable them to spend their leisure time in the sun, rather than inside grumbling at the rain!). Plainly irrational indeed. But, as we both know, people aren’t always rational… So perhaps it is also time to rethink how we reward them.
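The arithmetic behind the income-targeting rule can be illustrated with a small simulation. All the numbers below – the rainy and sunny hourly wages, the chance of rain, and the daily dollar target – are made-up assumptions purely for illustration, not figures from Colin’s study. The point it sketches: a driver who quits once a fixed daily target is earned works long hours on low-wage days and short hours on high-wage days, so his average hourly earnings end up below those of a driver who simply works the same shift every day.

```python
import random

random.seed(0)

# Made-up illustrative numbers: hourly earnings are higher on rainy
# (high-demand) days than on sunny ones.
RAINY_WAGE = 30.0   # $/hour when it rains
SUNNY_WAGE = 15.0   # $/hour when it is sunny
P_RAIN = 0.5        # chance of a rainy day
DAYS = 10_000
TARGET = 180.0      # daily income target for the "target rule" driver

def simulate(rule):
    """Return (total earnings, total hours worked) over DAYS days."""
    earnings = hours = 0.0
    for _ in range(DAYS):
        wage = RAINY_WAGE if random.random() < P_RAIN else SUNNY_WAGE
        if rule == "target":
            h = TARGET / wage   # quit the moment the daily target is met
        else:
            h = 9.0             # fixed-hours rule: the same shift every day
        earnings += wage * h
        hours += h
    return earnings, hours

for rule in ("target", "fixed"):
    e, h = simulate(rule)
    print(f"{rule:>6} rule: ${e / h:.2f} per hour on average")
```

Under these assumed numbers both drivers average the same nine hours a day, yet the fixed-hours driver earns about $22.50 per hour while the target-rule driver earns about $20: the target rule systematically concentrates work on the worst-paying days, which is exactly the pattern the interviews revealed.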