How Ridiculous Ideas Crafted By Nobel Laureates Gain Followers

Mr. Kahneman and Mr. Tversky (the first died recently, the second in 1996) started working together in 1969. Together with Richard Thaler, they claimed to have stumbled on a new field of study called “Behavioral Economics.” The practitioners of this field claimed to have found a range of behavior requiring political, bureaucratic, and economists’ interventions to correct, since, according to them, people did not understand what they were doing and were unable to learn from their mistakes without expert advice from academics in this new field. Closer inspection reveals that there is nothing in their models, laboratory experiments, methods, or anecdotal evidence to support their conclusions.

As Israel and its Air Force are in the news, here is a first example of what Mr. Kahneman identified as a behavioral pattern requiring his new theory and jargon. Superiors praised pilots for good landings but criticized them when the landings were not so good. It turned out that on subsequent landings the praised pilots did worse, whereas the criticized ones improved. Kahneman concluded: “It is part of the human condition that we are statistically punished for rewarding others and rewarded for punishing them.” How silly is this conclusion?

People do not learn a thing from just being praised: if you are perfect, how can you improve? However, if superiors point out mistakes and give you more options to try, reflecting trust in your abilities, you will try to cut down on mistakes. In the first case you may even become complacent, whereas in the second you learn from your mistakes. Really, a millennia-old observation.

In an experiment, Kahneman & Tversky told participants that an invented character called “Linda” was both smart and socially conscious. They then asked the participants which answer is more accurate: 1. Linda is a bank teller; 2. Linda is a bank teller and a feminist. The participants chose 2. Kahneman and Tversky pointed out that, statistically, they should have answered 1, and deduced from the results that people strongly believe in stereotypes. Perhaps if Kahneman & Tversky had not put the words “socially conscious” in the test, they might have had a weak point. However, these two words had to mean something and be used; that is what minimally intelligent participants would have inferred before answering, and associating them with “feminism” simply makes sense.
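The statistical point Kahneman & Tversky invoke is the conjunction rule: the probability that two conditions hold together can never exceed the probability of either one alone. A minimal sketch with made-up numbers (not from the experiment):

```python
# Conjunction rule: P(A and B) <= P(A), whatever the numbers.
p_teller = 0.05                 # assumed: probability Linda is a bank teller
p_feminist_given_teller = 0.60  # assumed: probability she is a feminist, given she is a teller

p_teller_and_feminist = p_teller * p_feminist_given_teller  # 0.03

# The conjunction can never be the more probable answer:
assert p_teller_and_feminist <= p_teller
```

Whatever values one plugs in, answer 2 cannot be more probable than answer 1; the author's objection is that the experiment's wording, not statistical ignorance, pushed participants toward answer 2.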

After all, people have only 24 hours a day, live once, and know, or should perhaps know, what Confucius concluded centuries ago: “We humans are similar by nature, but we vary greatly by virtue of our habits” – which is why people think in terms of stereotypes to start with. Do Kahneman & Tversky’s tests give even minimally better insight into human behavior?

Elsewhere Kahneman argued that emotions heavily influence decisions. No doubt. However, his illustrations do not support such conclusions. He gave this anecdotal example: consumers drive across town to save $5 on a $15 calculator, but do not drive to save $5 on a $125 coat, even though, to quote him, the gain is “precisely the same.” This conclusion makes no sense. Both calculators and coats are “durables.” Saving $5 on a $15 item increases the return by 33%, whereas on the coat by a mere 4%. By no stretch of the imagination are these two cases “precisely the same,” and there is no need to get emotional.
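The percentage point above is simple arithmetic; a quick check of the same $5 as a fraction of each purchase price:

```python
saving = 5
calculator_price = 15
coat_price = 125

# The identical dollar saving, as a return on each purchase:
calculator_return = saving / calculator_price  # 0.333... -> 33%
coat_return = saving / coat_price              # 0.04     -> 4%

print(f"{calculator_return:.0%} vs {coat_return:.0%}")  # prints "33% vs 4%"
```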

Now take a closer look at their more sophisticated experiments: on closer examination, they turn out to be even shallower than the anecdotal ones above. Kahneman and his followers rely on them to illustrate people’s misperception of “risk,” their inability to assess probabilities, and, by implication, their easily manipulated preferences and poor performance as investors and financial planners.

Kahneman & Tversky asked people to consider a coin toss in which, with 50/50 probability, they could either win $15,000 or lose $10,000. Many answered that they would refuse such a bet. Then they told participants to imagine they had $1 million, the choice now being between ending up with $990,000 or $1,015,000. Kahneman & Tversky found that more people would take this bet. They concluded: “when you think in terms of overall wealth, you have a different attitude to risk. Gains and losses loom less large.” What does this laboratory experiment reveal about people’s views of risk? Nothing.
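For reference, the two framings describe identical final-wealth outcomes; a minimal check, assuming the $1 million starting wealth the experimenters specified:

```python
wealth = 1_000_000
loss, gain = 10_000, 15_000

# Frame 1: "a coin toss may lose you $10,000 or win you $15,000"
frame_1 = (wealth - loss, wealth + gain)
# Frame 2: "a coin toss leaves you with either $990,000 or $1,015,000"
frame_2 = (990_000, 1_015_000)

assert frame_1 == frame_2  # same outcomes, differently worded
```

The experiment thus turns entirely on wording, which is why the author argues it reveals nothing about real attitudes to risk.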

If people were not in the laboratory and the bets involved real money (not fiction), there would be no surprise in getting answers all over the map. One person may have $10,000, another $10 million. For the first, the potential outcomes of the coin toss imply a chance of becoming homeless and starving, or, if he wins, of having access to better food and health care. For the millionaire, though, the option of losing $10,000 brings to mind entertainment options, and absolutely nothing about risk: “If I lose $10K, I’ll spend one day less renting the yacht, and if I win, I’ll buy a nice present for my wife.” What participants imagine when paid $5 to take part in such academic experiments, where all is fiction, who knows, and who cares?

Consider too that Kahneman & Tversky did not ask, and knew nothing, about the participants’ ages. If you own $100,000 and face a 50% chance of losing $10,000, you can expect people in different age groups to give different answers. A young person may expect to recoup the loss, whereas someone between sixty-five and … death may not. The latter may buy a lottery ticket; the younger may not. Attributing the different answers to differences in “preferences” (without having asked about age), preferences these economists anyway believe can be easily manipulated, should not be extrapolated to rationalize any policy or investment decision.

In other writings Kahneman states that overconfidence is “the engine of capitalism,” which is another very silly statement. People can be as overconfident as they want to be; that trait alone will not lead anywhere, except perhaps, like Icarus, to gluing on wings, flying toward the skies, and crashing. To bring ideas to life, entrepreneurs must raise money, and they must be convincing to do so. Kahneman mentions the fact that only 35% of small businesses survive for more than five years, yet 81% of individuals say they have a better than seven-in-ten chance of success. These statistics mean nothing except what everyone knows: that a start-up is a matter of trial and error, and depends very much on the entrepreneur, his team, and circumstances. The entrepreneur may be overconfident, but those working in finance know how to discount such traits and how to structure staggered financing, keeping the overconfident on a leash.

Of course, the entrepreneur, his team, and the financiers all make mistakes, even if the company and the capital are structured to assure accountability. The engine of “capitalism” lies in such accountable matching, not in overconfidence.

Kahneman and company also conducted the following type of survey to suggest that people are bad at assessing probabilities and thus at dealing with risks. They gave Harvard Medical School doctors and staff this hypothetical scenario: a woman asks for an HIV test. The doctor tells her that one in a thousand women of her age and background is infected, and that the test is 95% accurate. The participants were then asked: if the woman tests positive, what are the chances she is infected? Most physicians answered 95%, whereas the correct answer is around 2% (a basic Bayesian calculation).
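The 2% figure follows from Bayes’ rule, assuming “95% accurate” means both a 95% true-positive rate and a 5% false-positive rate:

```python
prevalence = 1 / 1000       # one in a thousand is infected
sensitivity = 0.95          # P(positive | infected)
false_positive = 0.05       # P(positive | not infected), assuming symmetric accuracy

# Total probability of a positive test, then the posterior:
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_infected_given_positive = prevalence * sensitivity / p_positive

print(round(p_infected_given_positive, 3))  # prints 0.019 -- about 2%, not 95%
```

Because the disease is so rare, almost all positive results come from the 5% of healthy people the test misclassifies, which is why the posterior collapses to roughly 2%.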

Do such surveys prove anything about actual behavior? Will these doctors behave as if the probability were 95%? No, because there are no consequences to giving wrong answers in laboratory experiments and surveys, whereas in real life they would be sued and lose their licenses. The physicians would call up a statistician, an option that does not appear in these academics’ survey. The world has plenty of statisticians and lawyers to impose prudent behavior on physicians. They just do not exist in Kahneman & Tversky’s laboratory world and ivory-tower models.

In fact, a British Royal Commission on gambling concluded that gamblers (real ones, not the imaginary laboratory variety) did not overestimate their chances of winning, an argument often used to rationalize prohibitions. This should not be surprising, since even in the eighteenth and nineteenth centuries information about the probability distributions of winning prizes was widely disseminated. There was no asymmetric information.

Stephen Stigler, in his 2003 Ryerson lecture titled “Casanova’s Lottery” (yes, that Casanova), found that in the 18th and 19th centuries too there was no evidence that people were betting “over their heads” or were ignorant of the probabilities of winning. He found detailed statistics in an 1834 book, Almanach Romain sur la Loterie de France, which summarizes the winning numbers of every draw in France between 1758 and 1833. Stigler notes that the winning numbers and the geographic distribution of winners were randomly distributed, implying that there was no fraud, even though public authorities at the time made such accusations.

Briefly: I fully agree that it is impossible to examine people’s behavior by putting them in the arbitrary boxes of today’s accidental academic disciplines and their jargons, economics and psychology among them. But that does not imply that surveys or laboratory experiments are the way to learn about human behavior, especially when there is so much evidence about people’s actual behavior. It is just so much easier to play trivial academic games than to go out and examine hard evidence.

I would not make this statement if I had not found that such alternative approaches shed far better light on human behavior in the face of risk and uncertainty, and show that people have found a range of solutions to the problems that Kahneman and his fellow travelers claim exist and have no solutions. Nor would I beat up on a theory if I had not found that people’s behavior across countries and time displayed consistency, rather than the spineless, too easily malleable human mind that “behavioral economists” assume – except where governments brought about and sustained disastrous frames of mind to start with.

It is true that people have occasionally fallen into traps of exuberance, deception, mob frenzy, and radicalism, as can be seen now on U.S. campuses, in the streets of Western capitals, and in the Middle East. But this happened when rulers imposed atavistic institutions and pursued disastrous policies, destroying the options to create institutions that stabilize behavior and bring about self-restraint, discipline, responsibility, and tolerance.

Unfortunately, much of “Behavioral Economics,” marginal and trivial as it is, makes its own contribution to such decline, as its claims of irrationality, imprudent behavior, and follies ending in bubbles or worse rationalize an increased role in society for politicians and such “expert economic advice,” they being the masters who mitigate human follies. Back in 2010, Richard Thaler advised the British government to create a “Behavioral Insights Team,” which still exists, claiming to know how to “nudge” people – whatever that means (besides a very silly example of designing urinals, not worth mentioning). It sure does not appear to have had the slightest success in nudging the British toward sustaining civility, or in mitigating antisemitism on London’s streets and UK campuses.

This academic field’s take is the same as Keynes’s, who assumed explicitly that government interventions in spending, no matter on what, are necessary, since the “hoi polloi” are subject to “animal spirits” (Keynes’s words) that, hold your breath, those in government and their advisers are never subject to. He too assumed that politicians and economists could overcome the hoi polloi’s randomly volatile behavior (relying on his model and jargon), behavior which, he assumed, was not provoked by the rulers’ own misguided policies and politics.

This conclusion sheds light on why governments subsidize many such fields of study within academia. Politicians want to show that their policies rely on “science,” that they have legitimacy, and that there is independent authority behind government decisions. Unfortunately, once government agencies subsidize the research, create “scientists,” subsidize publications, and hand out honors too, the “independence” is gone. A genuine Ponzi scheme comes into being, one that only heavy cuts to its financing would end. It appears that parents are finally waking up and not paying tuition without serious due diligence, and donors are no longer writing checks indiscriminately, though governments are still handing out direct and indirect subsidies (in particular by forgiving student loans and the interest on them).

If there is something positive in this “behavioral economics” field, it is NOT that it draws attention to the fact that people make mistakes – who does not? – but that it raises the question of why we do not correct mistakes more quickly, the badly structured financing of academia being one of them. What are the obstacles? Once we find out what they are, it becomes easier to find the ways and institutions to avoid making the mistakes.



The article draws on Brenner’s books World of Chance, Force of Finance and series of recent articles about higher education, antisemitism, and the Middle East.

