
Monday, May 10, 2010

Laplacian Supermen, Efficient Markets, Behavioral Economics, Adaptive Heuristics

(Pierre-Simon Laplace)

Today's entry will discuss an increasingly popular, relatively new perspective on human economic behavior, and then look at how this perspective has been used to invigorate Keynesian economics and attack the Chicago school. I will contend that part of the seduction of this more recent development, generally termed "behavioral economics", is its subtle promise of creating an elite group of cognoscenti who can "steer" the economy using command-and-control central planning methods. These specially trained individuals will be equipped to dispassionately observe the folly of other market participants, and then either to take on a benign, paternal role in regulating market behaviors (a promise that is popular with governments and pro-interventionists) or, at the very least, to profit from "contrarian" strategies that exploit the stupidity of the investing public (as opposed to contrarians who bet against the stupidity of government interventions).

These positions are not necessarily ones that would be advocated by many behavioral economists themselves, but we will find that the discipline offers much to the established worldviews of both Keynesians and certain contrarian investors, particularly those who advocate "deep-value" stock-picking. I believe that, ultimately, the best application of behavioral economics is to increase one's own awareness of cognitive biases and psychological vulnerabilities.

I will state right from the beginning that I used to be an enthusiastic fan of behavioral econ, but now fear that a very interesting body of research is being politicized and employed by the forces of Mordor to argue for far greater government control of the market pricing mechanism. I still believe that it is a very effective framework for critically analyzing one's own decision-making behavior, and recognizing times and situations in which innate cognitive biases may lead one astray.

Brief Recap of Previous Blog Post

In a previous post, written some time ago now, I listed six archetypal arguments used to justify economic central planning.

1. Democracy > Markets
2. Behavioral Finance > TP Model
3. Government Provides Social Justice that Markets Fail to Deliver
4. "European Socialism Works Great"
5. Markets Have Externality Problems
6. Hope Springs Eternal

We have only had a chance to discuss the first one---the notion that the outcomes of government intervention in the economy are intrinsically superior to free market outcomes, at least so long as the intervention is the product of democratically elected officials. Today we will start to contend with the behavioral Keynesian argument for government intervention in the markets. It is a stimulating argument that generates some fairly sophisticated and convincing attacks. The discussion will have to be broken into a few installments, because many of the points are quite subtle and require detailed treatment.

Future posts will also have to try to deal with the issue of social justice, because it is such an emotional one, and the issues of positive and negative externalities, because those are probably the strongest arguments against unfettered free markets within the economics profession itself.

Regarding Europe, I have come to realize that critiques of European socialist economies, particularly when made by an American, can appear condescending, triumphalist, and jingoistic, and can trigger immediate and understandable defensive reactions among Europeans who feel patronized. The topic is best left for a European free market type to cover here in the future, as such an individual would be able to speak with more gravitas. For now, we can simply say that the European Union is contending with the problems created by countries that have expensive socialist entitlement programs, debt-to-GDP ratios nearing or exceeding 100%, chronic fiscal gaps, and (in some cases) basic debt service rates that exceed GDP growth rates. The very real risk of sovereign defaults has caused the ECB to take unprecedented steps in an attempt to stabilize the Eurozone. We are seeing once again how socialist programs can run into serious trouble, especially when the printing press is not available to individual country governments (who might otherwise try to obfuscate the signs of severe fiscal mismanagement by debasing their own currencies and paying creditors off with the junk).

The final anti-free market argument on my little list---the one that concerns hope that a superior central planner will appear---is a difficult one to try to take on because, ultimately, it seems to be a matter of religious faith: the eternal optimist is not interested in the historical data or the known problems with economic central planning, because he insists that his favorite politician or group is able to transcend such concerns.

Let's begin with a brief description of the field of behavioral finance and the controversy between this approach, which tends to highlight the errors or "bugs" in our cognitive software and the problems with using rules-of-thumb, and other perspectives on the mind that tend to highlight the spectacular abilities of the human brain and the advantages that mental shortcuts give us in contending with a dynamic, confusing environment.

Homo economicus

Perhaps the best place to begin a look at behavioral finance is with its foil: Homo economicus, the "Economic Man" that serves as the model for individual behavior in Chicago school microeconomics. Homo economicus is what is termed a rational utility-maximizer, meaning that he seeks to optimize for a given utility ("satisfaction") function, given whatever constraints he faces.

The term "utility maximization" must be fine-tuned, because it is where the first point of misunderstanding tends to occur. The individual's utility is not necessarily what onlookers would define as rational. It is ultimately subjective---utility needs to matter to the individual making the decisions, but there is no requirement that it make sense to you or me. We do not need to agree that someone's personal utility goals are worthy or even intelligent. We have to avoid the kind of prejudicial and self-absorbed use of the term "irrational" that got Ayn Rand and her Objectivist approach into so much trouble.

Because emphasis on the subjective can lead to analytical bedlam and make quantification difficult (the Austrian school of economics is widely known as the "Subjectivist" school, and does prefer chains of logical reasoning over high-powered mathematics), there is often a strong desire to constrain choices and information so that we can compare apples to apples. This desire is typically expressed by those who wish to make economics into a "hard science" like physics. If we severely restrict and simplify behaviors, we can create a situation in which money and utility are essentially identical---the individual seeks to maximize utility by maximizing a financial wealth function (gaining as much money as possible per unit of effort or unit of risk). This is a convenient way to set things up, and it is not wholly unreasonable, because money gained can then be employed to buy a number of different things, depending on what a particular individual wants to do with it.

However, it is still a contrivance: to make utility equal to risk-adjusted monetary gain, we usually have to set up our thought experiment so that everything else is equal ("ceteris paribus"). In other words, given that the time and energy expended, morality, non-financial risks, and so on are absolutely equal between two choices, we would expect Homo economicus to choose the option that gave the greatest financial reward. If it is not clear which option offers the greatest financial reward, or if there are a number of different trade-offs involved that are difficult to quantify, then we actually don't know which option Homo economicus would choose. If the requirements of each choice are different, then we don't know that Homo economicus would choose the one that offered the most money in the payoff.

Whether through accident or deliberate misrepresentation, critics of the rational economic man model seem to get this wrong all the time. They assume that one dependent variable---the selfish pursuit of money---must be maximized for the utility-maximizer model to be true, and then they find occasions when people make decisions that do not maximize personal wealth and immediately claim these as victories for the "irrationality" side.

(the poster boy for unconstrained wealth-maximizing utility functions)

It must be said that the critics are often aided in their efforts by efficient market theorists who, looking for precision and situations that have closed-form mathematical solutions, do tend to prefer an objective value like money as a proxy for utility. Using monetary self-interest as a foundation allows high-powered mathematical techniques to be employed to prove that free markets will provide the optimal mix of goods and services in the economy. This happy result is epitomized by a celebrated paper published in 1954 by Kenneth Arrow (of Arrow's Impossibility Theorem fame) and Gérard Debreu in the journal Econometrica. The Arrow-Debreu paper contains a proof that an economy consisting of independent, selfish, rational individuals and a free market price system can attain a stable equilibrium of perfect efficiency ("Pareto optimality"). Note that Arrow-Debreu is not a proof that independent, selfish behavior is a requirement for market efficiency; it establishes that this is a viable path to market efficiency.

Unable to present a cogent argument in favor of economic central planning since the collapse of the Soviet Union, the enemies of the market efficiency case have mounted new attacks on the Chicago school by incorporating experimental results published by cognitive psychologists. These psychologists have researched human decision making and found that people are not "rational" in the Bayesian sense, and thus, the argument goes, markets composed of these flawed human beings will inevitably be distorted and inefficient. Once again, an elite group of Platonic guardians---perhaps men like Charles Rangel and Barney Frank---must be elevated to administrative overlord status so that the wild herd can be controlled (for its own good).

Do Irrational Individuals Really Mean Inefficient Markets?

It should probably be noted that a failure of individuals to behave rationally in personal settings does not actually mean that markets are inefficient when individual decisions are aggregated; indeed, the irrationalities could cancel each other out if people were irrational in different ways. Perhaps some people are excessively driven by optimism while others are excessively driven by pessimism, but the two are neutralized in the marketplace. This is another concern that is frequently ignored in the behavioral economics literature. Frequently the reader will find that it is stated or implied that cognitive biases and limitations found in individuals inevitably lead to an overall failure of free markets and force us to accept paternal regulatory oversight.

The Emergence of Behavioral Finance

(Daniel Kahneman)

As a result of the way that utility is often constrained so that we can make it an objective value and compare and contrast individual decisions directly, Homo economicus has become associated with certain forms of sophisticated quantitative reasoning. The standard trope is that this creature is the embodiment of Laplace's demon (also called Laplace's Superman), a brilliant combination of game theorist, gambler, and operations research expert. His decisions, if unpacked, would always reflect the state-of-the-art in mathematical decision techniques---decision and game trees with probability-weighted payoffs, real options, Monte Carlo simulation, Bayesian updating, and so on. By defining Homo economicus as a decision science demon, we can set a benchmark for how people should behave, if they in fact were members of this species, and then see how people actually do behave in clinical tests of problem-solving and reasoning ability.

The Chicago school economists have not literally been saying that individuals buying and selling in the market are Laplacian Supermen. What they are saying is that two factors are at work to make it reasonable to act as if the individuals meet the Homo economicus standards:

1. The Miracle of Aggregation. If 49.5% of the investing public is composed of people who are stupidly risk-friendly/overly optimistic, but an equivalent 49.5% of the public is composed of people who are stupidly risk-averse/overly pessimistic, the two groups would actually cancel each other out and prices would reflect the judgments of the 1% who are properly calibrated. Thus, a market filled with random irrationality and a very small percentage of "Smart Money" players ends up looking identical to a market that is entirely composed of "Smart Money" players (this is the cause of the so-called "Wisdom of Crowds" phenomenon that was described in a popular book by the same name).

In fact, it could be argued that the Smart Money percentage would be far greater than 1%, as the majority of the daily trading volume in the major financial markets stems from institutional investors and professionally managed funds.

2. Survival of the Fittest. Over time, a process not unlike Darwinian selection would take place in the markets, with those who are systematically stupid in terms of decisions losing money to those who are not. If there is a truly risk-free way to buy low in one market and sell the same product for a higher price in another market, a process called arbitrage will take place and, given time, will cause prices to be restored to equilibrium. After a while, we can assume that those who are making high returns are probably also taking correspondingly higher risks. Some will do extremely well simply because, statistically, some pretty much have to do extremely well (i.e., luck tends to trump skill, although the lucky will usually refer to themselves as skillful). In extremely rare cases (the quantitative hedge fund Renaissance Technologies may be a good example), a group may unlock a sustainable source of profits that cannot be explained by mere risk tolerance or luck (these profits are typically referred to in the hedge fund industry as "alpha").
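The Miracle of Aggregation, and the way systematic bias defeats it (a point we will return to shortly), can be illustrated with a toy simulation. This is only a sketch; the fair value of 100 and the bias ranges are hypothetical numbers chosen to make the contrast visible:

```python
import random

random.seed(42)

FAIR_VALUE = 100.0
N_TRADERS = 10_000

def market_price(draw_bias):
    # Each trader's valuation is the fair value plus a personal error;
    # take the market price to be the average valuation.
    valuations = [FAIR_VALUE + draw_bias() for _ in range(N_TRADERS)]
    return sum(valuations) / N_TRADERS

# Random irrationality: optimists and pessimists in equal measure.
def random_bias():
    return random.uniform(-20, 20)

# Systematic irrationality: everyone leans optimistic.
def systematic_bias():
    return random.uniform(-5, 20)

print(round(market_price(random_bias), 1))      # lands close to 100: errors cancel
print(round(market_price(systematic_bias), 1))  # well above 100: errors compound
```

When the errors are symmetric they wash out in the average and the price hugs fair value; when everyone errs in the same direction, no amount of aggregation rescues the price.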

In 2002, psychologist Daniel Kahneman was awarded the Nobel Prize in Economics for his pioneering work in behavioral economics. Kahneman and his colleague Amos Tversky (Tversky passed away in the late 90s---otherwise he would almost certainly have shared the Nobel) developed "Prospect Theory", a vehicle for understanding why people display risk-seeking behavior in some circumstances, but reveal risk-aversion when the same situation is presented to them in a different way.

For an example, consider the following:

Given a choice between

A) gaining $1,000 with 100% certainty


B) a coin-flip gamble with a 50-50 split between gaining $2,500 and gaining nothing

...which option would you choose?

Kahneman and Tversky found that a healthy majority of people would prefer the sure $1,000, even though the probability-weighted payoff from the coin-flip was actually larger ($2,500 × 50% + $0 × 50% = an expected gain of $1,250). The desire to emphasize certainty over theoretical payoff maximization represents risk-aversion.

Now let's modify the scenario just a little bit:

Given a choice between

A) losing $1,000 with 100% certainty


B) a coin-flip gamble with a 50-50 split between losing $2,500 and losing nothing

...which option would you choose?

It turns out that many people become risk-seekers when confronted with losses, and will take the gamble option because they obsess over that 50% chance of losing nothing and tend to downplay the probability-weighted expectation that the gamble will cost them more (once again, $1,250) than they would lose if they just bit the bullet and took the sure loss.
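The expected-value arithmetic behind both framings can be checked in a few lines; this is a minimal sketch using the dollar amounts from the two scenarios above:

```python
# Expected value of a gamble: sum of payoff * probability over all outcomes.
def expected_value(outcomes):
    return sum(payoff * prob for payoff, prob in outcomes)

# Gain framing: a sure $1,000 vs. a 50-50 shot at $2,500 or nothing.
sure_gain = expected_value([(1000, 1.0)])
gamble_gain = expected_value([(2500, 0.5), (0, 0.5)])

# Loss framing: a sure -$1,000 vs. a 50-50 shot at -$2,500 or nothing.
sure_loss = expected_value([(-1000, 1.0)])
gamble_loss = expected_value([(-2500, 0.5), (0, 0.5)])

print(gamble_gain, sure_gain)   # 1250.0 1000.0 -- the gamble "wins" on expectation
print(gamble_loss, sure_loss)   # -1250.0 -1000.0 -- the gamble "loses" on expectation
```

A pure expected-value maximizer would therefore gamble on gains and take the sure thing on losses; Kahneman and Tversky's subjects tended to do precisely the opposite.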

Kahneman and Tversky used this and other clinical results to argue that the Homo economicus model was deeply flawed, and that the most optimistic way to describe real-world human reasoning was to say that we are "boundedly rational". The problem for the Chicago school is that the irrationalities do not appear to be random; they appear to be systematic, with the majority of the subjects tending to make the same kinds of mistakes. This is a serious issue for the Miracle of Aggregation, because the miracle requires that mistakes be random and cancel each other out in order for the Smart Money crowd to dominate market pricing.

The authors always remained fairly neutral in terms of making policy prescriptions based on these studies, but other behavioral economists have been more aggressive in their pursuit of public policy implications, and in ways to find chinks in the armor of the dominant Chicago model of equilibrium prices and market efficiency. I think that part of this gold rush atmosphere is driven by genuinely leftist political agendas, but some is simply an aspect of a "publish or perish" quality of academic life and a need to find a niche in a world that is to some extent dominated by the economics department at the University of Chicago. The fastest way to get noticed may be to criticize the existing scaffolding and claim that we "need a new paradigm of economic thought-leaders" in order to "rebel against the tyranny of free market zealotry."

The problem of risk-seeking behavior under the stress of losses is, however, very real, and the cemetery of destroyed hedge funds, banks, and proprietary trading programs is littered with examples of traders who were unable to bear losses and preferred to "double-down" instead of taking the immediate punishment and walking away (because Other People's Money---"OPM"---and certain types of incentive structures are frequently also involved, risk-seeking behavior where losses are concerned can also be a selfishly rational choice for some traders). However, we will find later that the very worst double-down offenders of all are governments.

At any rate, the risk-aversion/risk-seeking conundrum is but one of many such enclaves of irrationality that Kahneman, Tversky, and other behavioral economists have discovered over the years. To name just a few examples, Richard Thaler has articulated the problem of "the winner's curse", wherein the winning bidder in an auction has a strong tendency to overpay, and Robert Shiller wrote a famous book during the dot-com bubble that discussed how "irrational exuberance" was pushing stock prices far beyond justifiable levels. Regardless of the proposed regulatory implications, these are all excellent works and they should be well worth your time.

The mounting evidence that human beings do not meet the tough reasoning standards of Homo economicus has served as the catalyst for a rebel movement against the imperialistic tendencies of the Chicago school of efficient markets. The research results of behavioral economics have been taken up by current and former Keynesians as the evidence that Keynes was right and that markets are driven by ignorance and animal spirits.

Behavioral Keynesians and the Chicago School

There are three areas of the behavioral Keynesian/Chicago efficient markets war that I will comment on, because I think they serve as the major battlegrounds for the argument. The first is rooted in cognitive science and relates to the issue of heuristics, and whether such decision aids are, on balance, humanity's friend or foe. The second, which will be covered next time, is the issue of equilibrium as the Chicago school has defined it mathematically, and whether or not a market needs to meet what is called the TP ("tight prior") equilibrium specification in order to be considered "efficient". The final area, which I will go into more detail on in a future post, concerns inflation, specifically the Friedman argument that inflationary monetary policy is not useful as a way to increase employment (Friedman argued that people will not be tricked for long: they will recognize that inflation is being implemented and form what he termed "rational expectations"---adjusting prices to compensate for inflation---to guide their economic decisions).

Heuristics and Ecological Rationality

Many of the critiques against the Homo economicus model concern the way that people seem to ignore base rates when making inferences from data. This is a violation of Bayesian rationality and, it is explained, a problem for the micro-structure of efficient markets.

A traditional example of the type of question that psychologists like Kahneman have employed to determine whether or not normal people (i.e., non-statisticians) use Bayesian reasoning goes something like this (I borrowed the example from Gerd Gigerenzer):

The probability of breast cancer is 1% for a woman at age 40 who participates in routine screening. If a woman has breast cancer, the probability is 80% that she will get a positive mammogram. If a woman does not have breast cancer, the probability is 9.6% that she will also get a positive mammogram.

A woman in this age group had a positive mammogram in a routine screening. What is the probability that she actually has breast cancer?

When this question was posed to doctors, staff, and medical students at Harvard Medical School, certainly one of the most elite institutions for medical training in the world, virtually no one got the correct answer. The vast majority apparently said that the probability of breast cancer was about 70%---no reason is given for this particular answer, but my guess is that people took the probability of cancer with a positive mammogram (80%) and simply subtracted the false positive rate, or probability of no cancer but a positive mammogram (9.6%).

Alas, the correct answer to the question is a mere 7.8%, and it is determined by using a Bayesian algorithm. In essence, some of the world's most highly trained medical people regularly overstate the probability of a disease being present after a positive test by almost a factor of 10. Most of us are surprised that the answer is such a low number because we have ignored the base rate---the low original probability of breast cancer being present---in our calculations.
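The Bayesian arithmetic is mechanical once the pieces are laid out. Here is a minimal sketch using the numbers from the problem statement:

```python
# Bayes' rule: P(cancer | positive) =
#     P(positive | cancer) * P(cancer) / P(positive)
p_cancer = 0.01              # base rate: 1% of women at 40 in routine screening
p_pos_given_cancer = 0.80    # sensitivity: positive mammogram given cancer
p_pos_given_healthy = 0.096  # false positive rate: positive given no cancer

# Total probability of a positive mammogram, with or without cancer:
p_positive = (p_pos_given_cancer * p_cancer
              + p_pos_given_healthy * (1 - p_cancer))

p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_positive
print(round(p_cancer_given_pos, 3))  # 0.078 -- nowhere near the intuitive 70%
```

The tiny base rate in the numerator is exactly the term that the Harvard respondents dropped; ignore it, and the false positives swamp the calculation.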

Similar tests have been conducted on judges, juries, military officers, lawyers, and many other groups. It has been found time and time again that we tend to ignore low base rates when reacting to things like, say, the probability of being the victim of a terrorist attack: we overreact dramatically and frequently are afraid of the wrong things, at least statistically speaking.

To many behavioral economists, the inability of the vast majority of people to properly incorporate base rates into probabilistic reasoning is a direct refutation of the Homo economicus model, and thus of the notion of efficient markets and tight prior equilibrium. I should state here the obvious notion that politicians should be subjected to these tests as well, since they would almost certainly fail very badly and we might be less inclined to look for government solutions to market problems if we recognized that government was composed of flawed human beings. Unfortunately, politicos seem to have received a free pass when it comes to these exams.

Here is another example---this one posed in qualitative terms, but following the same basic format as the breast cancer question:

You wish to buy a new car. Today you must choose between two alternatives: to purchase either a Volvo or a Saab. You use only one criterion for that choice, the car's life expectancy. You have information from Consumer Reports that in a sample of several hundred cars the Volvo has the better record. Just yesterday a neighbor told you that his new Volvo broke down. Which car do you buy?

(Saab Aero X)

Behavioral economists will frequently tell us that many/most people will end up over-emphasizing the neighbor's troubles with the Volvo because that story is vivid, recent, personal, proximate, probably communicated to us in an accessible story-telling form, and so on. Instead of looking at the statistically significant Consumer Reports results and relying on base rates of reliability, buyers will irrationally move towards the Saab option because of the psychological accessibility of the neighbor's tale of woe with the Volvo.

Why The Brain Has to Use Cognitive Shortcuts

(Smilodon fatalis)

In the brain's defense, these tests deliberately fail to take into account the things that the brain is very good at doing. I have recently seen how some of them are put together: it was very obvious that those designing the tests were well aware of common cognitive blind spots and how to select and frame questions so that the majority of people will have difficulty with them.

Critics of the human cognitive architecture who like to celebrate examples of our stupidity often ignore three of the major constraints that our brains faced during evolution: the needs for mobility, control of operating temperatures, and very efficient use of available energy. Two of my favorite cognitive neuroscientists, Steve Quartz and Terrence Sejnowski, explain these constraints this way:

To give you a better idea of what a piece of engineering your brain is, let's benchmark it against modern supercomputers. You may have noticed that your personal computer tends to heat up the room pretty well. That's because computers pack a lot of circuitry into a small space. A real stumbling block in making supercomputers is figuring out how to cool their circuits so they don't melt. Cray supercomputers, once icons of modern industrial design, were only sleek-looking because the tons of cooling equipment they require were hidden in the basement. In fact, most of Cray's patents are for cooling systems, not computing parts. To make a computer that rivals the brain in all its features, somehow you'd have to pack a thousand supercomputers into a three-pound package that requires only about as much power as the on/off lights on a typical computer. Now make it capable of withstanding not just being dropped but also the occasional blindside quarterback sack or first-round barrage from Mike Tyson. Your brain is the ultimate portable computer.

(cognitive scientist Steve Quartz)

"Fast, Frugal Decision-Making"

(Gerd Gigerenzer)

Perhaps the leading figure in academia who disputes the notion that the standardized toolkits of heuristics and biases are inherently irrational is Gerd Gigerenzer. Gigerenzer believes that the tests of rationality that are used to support the behavioral finance arguments typically involve statements and questions that really do a disservice to the true power and beauty of the mind.


There are two views about the machinery of intelligent behavior. The classical view is that the laws of probability theory or logic define intelligent processes: Intelligent agents have processes such as Bayes's rule, the law of large numbers, transitive inference, and consistency built in. This was the view of the Enlightenment mathematicians, to which Jean Piaget added an ontogenetic dimension, and it is still the dominant view in economics, cognitive psychology, artificial intelligence, and optimal foraging theory. For instance, Bayes's rule has been proposed as a normative or descriptive model of how animals infer the presence of predators and prey... The problem with this view is that in any sufficiently rich environment, Bayesian computations become mathematically so complex that one needs to assume that minds are "Laplacean demons" that have unlimited computational power, time, and knowledge. To find a way out of this problem, researchers often make unrealistic assumptions about the structure of natural environments, namely assumptions that reduce the Bayesian computations...

The second view, the one I propose here, is that modules of social intelligence, including the triggering algorithms, work with fast and frugal strategies instead of the costly "optimal" algorithms. There are several reasons that favor simple and specifically designed heuristics rather than expensive and general ones.

Gigerenzer's position is that the mind evolved to be good at tasks which had information and search cost consequences: for our prehistoric ancestors, deliberating too long about the true origin of a rustling in the bushes may have meant death at the hands of a saber-tooth or marsupial lion. He has found that people do much better at the mammogram problem if it is stated in a way that approximates how we would have encountered information in the natural world.

For example, consider a restatement of the original question in this new, natural frequency-oriented format:

10 out of every 1,000 women at age 40 who participate in a routine screening have breast cancer. Of the 10 who have cancer, 8 will test positive on a mammogram. Of the 990 who do not have cancer, 95 will---because of imperfections in the test---also test positive on the mammogram (so, in a group of 1,000 women who take part in a screening, 103 will test positive on the mammogram and 8 of those will actually have breast cancer).

A woman in the clinic just tested positive on the mammogram. What is the probability that she actually has breast cancer?

It is still an awkward problem for most of us, but it is far easier to at least get in the ballpark. Certainly few would claim that the probability was 70% or more, since we can see that the number who test positive and have cancer is much smaller than the number who test positive but do not have cancer. The answer can be derived by taking the number with cancer who test positive (8) and dividing by the total number who test positive (8 + 95). The result is .078, or 7.8%, the same answer that one gets from the formal Bayesian approach.
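The natural-frequency version reduces Bayes' rule to simple counting. A quick sketch confirms that the counts and the formal probabilities give the same answer:

```python
# Natural frequencies for 1,000 screened women (from the restated problem).
true_positives = 8    # of the 10 women with cancer, 8 test positive
false_positives = 95  # of the 990 women without cancer, 95 test positive

# Natural-frequency answer: positives-with-cancer over all positives.
p_natural = true_positives / (true_positives + false_positives)

# Formal Bayesian answer using the original probabilities.
p_bayes = (0.80 * 0.01) / (0.80 * 0.01 + 0.096 * 0.99)

print(round(p_natural, 3))  # 0.078
print(round(p_bayes, 3))    # 0.078 -- same result, two representations
```

The computation did not get easier in any deep mathematical sense; it got easier because counts of events are the format in which our ancestors actually encountered evidence.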


The teaching of statistical reasoning is, like that of reading and writing, part of forming an educated citizenship. Our technological world, with its abundance of statistical information, makes the art of dealing with uncertain information particularly relevant. Reading and writing is taught to every child in modern Western democracies, but statistical thinking is not. The result has been termed "innumeracy." But can statistical reasoning be taught? Previous studies that have attempted to teach Bayesian inference, mostly through corrective feedback, had little or no training effect. This result seems to be consistent with the view that the mind does not naturally reason the Bayesian way. However, (the argument we developed) suggests a "natural" method of teaching: Instruct people how to represent probability information in natural frequencies.

...In three studies, the immediate effect of the representation training (the training that uses natural frequencies instead of a formal Bayesian algorithm) was always larger than the rule training (the Bayesian algorithm), by 10 percentage points or more. Transfer was about the same. The most striking difference was obtained in stability. For instance, in the study with the longest interval---people were called back three months after training---the median performance in the group that had received the representational training was a strong 100%. That is, all problems were solved---even after three months. In contrast, the performance of the group that had received the rule training was 57%, reflecting the well-known steep forgetting curve.

...Thus there is evidence that (what I take to be) the natural format of information in the environment in which human beings evolved can be used to teach people how to deal with probability information. This may be good news both for instructors who plan to design precollege curricula that teach young people how to infer risks in a technological world and for those unfortunate souls among us charged with teaching undergraduate statistics.

Why Neglect of Base Rates Can Be Rational

(Australian saltwater crocodile)

Another issue with base rates is the assumption that information on them is free and comes from a credible source. We are also implicitly assuming that the world generating these base rates is stable, rather than a dangerous and dynamic environment in which situations can shift dramatically. To contrast ecological rationality with formal models of rational decision-making, let's return to the previous Volvo/Saab purchase problem, but consider how Gigerenzer has reframed it for us in a new scenario:

You live in a jungle. Today you must choose between two alternatives: to let your child swim in the river, or to let the child climb trees instead. You use only one criterion for that choice, your child's life expectancy. You have information that in the last 100 years there was only one accident in the river, in which a child was eaten by a crocodile, whereas a dozen children have been killed by falling from trees. Just yesterday your neighbor told you that her child was eaten by a crocodile. Where do you send your child?

The structure of this scenario is effectively the same as the one for the car purchase, but the use of the wilderness as the contextual sheath helps us to see how our concepts of rational decision-making can change if we imagine that the environment that produced the historical base rates is not fixed and constant. Formally, it would still be considered rational to go with the base rates and send the child into the river, because the trees are more dangerous in terms of the historical frequency of fatal accidents, but there is a dissatisfying aspect to this form of reasoning. Something seems to tug at us and ask us to at least consider the possibility that the original scenario environment (in which crocodile attacks were very rare and tree climbing was more dangerous than swimming in the river) may have changed. Perhaps a group of crocodiles has just moved into our area, and we need to abandon historical base rates until we have more information.
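That tug can be made explicit with a toy changepoint model. In the sketch below, every rate and prior is my own illustrative assumption rather than a number from Gigerenzer's scenario, but the point survives any reasonable choice: once we admit even a modest prior probability that crocodiles have moved in, a single observed attack is enough to make the river look more dangerous than the trees, despite a century of base rates saying otherwise.

```python
# Crocodile problem as a toy changepoint model. All numbers below are
# assumed for illustration; only the 1-per-century and 12-per-century
# base rates come from the scenario itself.

# Historical per-year fatality rates from 100 years of data.
p_river_stable = 1 / 100    # one crocodile death per century
p_trees = 12 / 100          # a dozen tree-climbing deaths per century

# Alternative hypothesis: crocodiles have recently moved in.
p_river_shifted = 0.5       # assumed post-shift attack rate
p_shift_prior = 0.05        # assumed prior probability of a regime change

# Observation: an attack occurred yesterday. Update P(shift | attack).
evidence_shift = p_shift_prior * p_river_shifted
evidence_stable = (1 - p_shift_prior) * p_river_stable
p_shift_post = evidence_shift / (evidence_shift + evidence_stable)

# Expected river risk now blends the stable and shifted hypotheses.
river_risk_now = (p_shift_post * p_river_shifted
                  + (1 - p_shift_post) * p_river_stable)

print(round(p_shift_post, 2), round(river_risk_now, 2))
```

Under these assumptions the posterior probability of a regime change jumps above 70% after one attack, and the blended river risk climbs well past the historical tree-climbing risk. Following the "old" base rates stops being the rational move the moment environmental stability is in doubt.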


Why do we have different intuitions for the Volvo problem and the crocodile problem? In the Volvo problem, the prospective buyer may assume that the Volvo world is stable and that the important event (good or bad Volvo) can be considered as an independent random drawing from the same reference class. In the crocodile problem, the parents may assume that the river world has changed and that the important event (being eaten or not) can no longer be considered as an independent random drawing from the same reference class. Updating "old" base rates may be fatal for the child.

The question of whether some part of the world is stable enough to use statistics has been posed by probabilists and statisticians since the inception of probability theory in the mid-seventeenth century---and the answers have varied and will vary, as is well documented by the history of insurance.

Ultimately what we can consider "rational" in terms of decision-making may be a question for the philosophers. Critiques of the assumption of environmental stability that is necessary for tools of statistical inference to be applied are the chief intellectual preoccupation of Nassim Taleb, among others, and we are unlikely to find truly satisfying answers because we want both robustness and precision in our quantitative toolkit. I will also briefly note that mankind's environment of evolutionary adaptation featured periods of very violent climatic change, and this would have exerted selection pressure on hardwired cognitive styles.

Here are some related comments from Quartz and Sejnowski:

The dramatic setting of our origins forces a basic rethinking of some of the deepest assumptions about who we are. In particular, it overturns the idea that a single, static environment shaped our brains. Our ancestors lived in a world of intense variability. We owe our existence to how the world's changes and instabilities shaped us as flexible animals. As changing worlds shifted the bare necessities of life, they demanded new ways of living and novel, flexible forms of social organization. Survival of the fittest meant survival of the most flexible.

...As we saw in the examples of chimpanzees and bonobos, the game of life changes when climatic changes shuffle resources. The history of life on earth is replete with such stories of climatic betrayal in which species specialized for one climate are decimated by sudden climatic changes...We saw that the first step in dismissing the flawed image of our brains as specialized contraptions for an unchanging world was to recognize that our brain's expansion occurred during a period of unprecedented climatic oscillation that was at its most furious during the last six hundred thousand years, exactly when our ancestors' brains underwent their greatest expansion.

How did our ancestors respond to their tempestuous world? A species can respond in a few ways to climatic oscillations. It can attempt to track its habitat as it shrinks and expands with the ebb and flow of climatic change...another option is to face change head-on by developing a behavioral repertoire sufficiently flexible to prosper in different ecologies. Our ancestors would have needed ways to deal with sweeping ecological changes, sometimes occurring within a single lifetime, at other times spanning many generations. This means that the appropriate way to think about the fitness of a species may sometimes be in terms of its capacity to prosper in many environments. Rick Potts of the Smithsonian Institution's National Museum of Natural History contrasts directional selection, the kinds of pressures that acted on Darwin's finches, with variability selection, the capacity of a lineage to adapt to multiple ecologies. Given the emerging understanding of the earth's climatic history, variability selection looked to be a key piece of the puzzle in understanding the human brain's expansion.

My point in all of this is not to suggest that people have no reason to study modern quantitative techniques, because I believe there is compelling evidence that they grant a significant advantage to decision-makers who know how and when to use them properly. I merely mean to argue that some of the irrationalities that the behavioral economists tend to report (and to use as supporting evidence for claims that we need a stronger regulatory hand in the markets) are not as irrational as they might initially appear. For example, the proper incorporation of base rates into a Bayesian framework does carry with it a need for stability in the environment that has generated those base rates. If the environment is subject to rapid and violent change, then continued reliance on historical frequencies may be irrational. Gigerenzer, Quartz, and Sejnowski essentially argue that human cognitive reasoning ability cannot be studied in isolation from the environment in which our reasoning hardware evolved.

If I could also indulge in a minor bit of impromptu conspiracy theorizing, I would say that I sometimes suspect that those who aim to derive increased political influence through attacks on free markets might actually prefer that the majority of human beings continues to fail tests on statistical reasoning, since this keeps the secrets of Bayesian alchemy within a small, quantitatively-trained priesthood of game theorists, decision scientists, economists, and statisticians. As an unintended side benefit, it also prevents voters from being able to accurately assess the probability that proposed government programs will work (probabilities anchored in historical base rates would reveal extremely low chances of success).

Some current thinking ranks mathematical aptitude up with athletic ability in terms of the strength of the genetic component involved. The danger of a heavy-handed interpretation of such a statement is that innumeracy will be considered virtually inevitable. Those who would (mis)use the results of Bayesian reasoning tests like the mammography question could build a stronger theoretical case in favor of the existence of an elite ruling class of Platonic Guardians who are equipped to make rational decisions. Math, or at least statistical inference, could become less of a subject that is truly taught, and could become more of a Darwinian selection funnel used to identify those who are just naturally good at quantitative work (regardless of the quality of the teaching). As a takeaway, I would echo Gigerenzer's sentiment that the best way to initially approach Bayesian problems and base rates or prior probabilities may be to transform the word problem into one that gives frequencies rather than probabilities.

Going a bit further, some will need the best of both worlds: the future of tactical decision training for those who are tasked with making high-stakes decisions under field conditions (psychological and physiological stresses, time sensitivity, limited information) lies in "enhanced heuristics" training programs. In such a program, a decision problem is articulated and then initially modeled using state-of-the-art quantitative techniques to find the best-performing solution. However, on a parallel track, the real-world environment in which the decision will take place is also analyzed, with particular emphasis on the way that information will be available to the decision-maker in the field. The idealized quantitative solution of the model is then transformed into "representational heuristics"---rules of thumb---that flow intuitively and that can be implemented by a human being operating in real time, using the information as it is present in his natural environment.

The Ultimatum Game: Short-Term Selfishness vs. Long-Term Enlightened Self-Interest

A qualitatively different kind of example of how decision biases that may initially appear irrational can actually be quite clever can be found in something called "The Ultimatum Game". This game consists of a pool of money, an offer of how to split the money made by one player, and a simple accept/reject response to the offer made by the other player. For example, Player A sees a pool of $10 and then makes the decision about how much of the pool to offer to Player B. Player B then decides to accept or decline the offer. If the offer is accepted, the pool is split accordingly and the game ends. If the offer is rejected, neither player gets anything, the pool is taken away, and the game ends.

In a myopic, game-theoretical view of rationality, Player A should use this reasoning: anything he offers Player B will be at least some gain to her, so Player A should offer as little as possible. After all, Player B will accept any positive offer after rationally deciding that gaining something is better than gaining nothing at all.

The Ultimatum Game has been played in many, many different experiments, and the overwhelming evidence is that people do not reason this way. Offers of less than 20% of the take are normally rejected ("irrationality" on Player B's part). In fact, the vast majority of offers come close to a 50/50 split ("irrationality" on Player A's part).
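A quick simulation shows why generous offers need not be irrational even for a purely selfish proposer. In this toy model (the thresholds and their distribution are my own illustrative assumptions, not experimental data), each responder rejects any offer below a personal fairness threshold drawn uniformly between 0% and 50% of the pot; against such a population, the payoff-maximizing offer is far from minimal:

```python
import random

# Toy Ultimatum Game: each responder rejects any offer below a personal
# "fairness threshold". Thresholds are assumed uniform on [0%, 50%] of
# the pot -- an illustrative choice, not a fit to experimental data.

random.seed(0)
POT = 10.0
thresholds = [random.uniform(0.0, 0.5) for _ in range(100_000)]

def expected_payoff(offer_fraction):
    """Proposer's average payoff against the simulated responder pool."""
    accepted = sum(1 for t in thresholds if offer_fraction >= t)
    return POT * (1 - offer_fraction) * accepted / len(thresholds)

offers = [round(f * 0.05, 2) for f in range(1, 20)]  # 5%, 10%, ..., 95%
best = max(offers, key=expected_payoff)
print(best)
```

Under these assumptions the grid search lands on a 50% offer: anything stingier gets rejected often enough that the proposer's expected payoff falls, which mirrors the near-50/50 splits observed in actual experiments.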

Variations of Ultimatum have been developed wherein the game continues for a fixed number of rounds, with the amount of money in the pot doubling each time an offer is accepted. Sometimes the two players switch roles after every round. Players have been put in fMRI machines so that their brains could be imaged while they were engaged in Ultimatum. A version of the game has been used to simulate the dynamics of marriage and the auction for available spouses (this one is very dark and I'll get to it later). In all of this, a consistent finding has been that people do not seem to act as selfishly as a strictly competitive theory of rationality would seem to predict.

Does this mean that players are irrational, or does it mean that the model of rationality we want to impose on the game is simplistic? My guess is that the latter is the case. The results of the Ultimatum Game are too similar to Axelrod's "Tit for Tat" optimal strategy for "The Prisoner's Dilemma", another game theory classic, for me to believe that the results of Ultimatum are irrational. On the contrary, the way that Ultimatum is consistently played would seem to be indicative of an evolutionarily stable strategy (ESS) that has long-term benefits for the individual players (this type of explanation was discussed decades ago by Richard Dawkins in his masterpiece The Selfish Gene).

Critics of the Homo economicus model have attempted to use the Ultimatum Game results as more evidence that people are not selfish decision-makers, and in some cases have tried to use it to suggest that redistributionist tax policies are built into our innate sense of fairness (so such government programs should be expanded). I find this interpretation to be very odd; Ultimatum would seem to be extremely supportive of free market exchange, since third-party coercion is not required for results to be "fair." If repeated trials displayed wildly unjust results, then perhaps we could take that as support for the claim of man showing natural avarice and predatory power-abuse or whatever. Left alone, people seem to do just fine.

The game is evidence of what has been called "Machiavellian intelligence", or an enlightened cleverness that comes from being able to put oneself in another person's shoes for a moment and attempt to see a decision from his or her perspective.

From this point of view, the offer of a reasonable split of the pot is less an action taken in the interests of fairness (although that may be a factor, especially if we are not able to play the game anonymously) and more an action taken because the proposer wants the offer to be accepted and knows that an insulting offer will not be: the satisfaction of making sure that a selfish person gets nothing may be greater than the satisfaction of a small gain. If we increase the stakes so that, say, $10 billion is involved instead of $10, then an "insulting" offer of just $1-$2 billion may be much more difficult to reject, as the utility tradeoffs will almost certainly have shifted again.


The next post will look at the links between behavioral economics and value investing, which are at least superficially consistent, but which do have some underlying problems. I'll also spend some time looking at value investing as a general strategy.

Wednesday, May 5, 2010

Important New Florida Legislation

My apologies for the long absence; I will have some new posts up shortly (on behavioral economics, sex and game theory, combat mindset, the fiscal crises in Europe, and a few other topics that hopefully will entertain). In the meantime, you may be as devastated as I am to learn that...

Florida moves to ban fake testicles on vehicles

By Michael Peltier, Reuters


TALLAHASSEE, Florida (Reuters) - Senate lawmakers in Florida have voted to ban the fake bull testicles that dangle from the trailer hitches of many trucks and cars throughout the state.

Republican Sen. Cary Baker, a gun shop owner from Eustis, Florida, called the adornments offensive and proposed the ban. Motorists would be fined $60 for displaying the novelty items, which are known by brand names like "Truck Nutz" and resemble the south end of a bull moving north.

The Florida Senate voted last week to add the measure to a broader transportation bill, but it is not included in the House version.

In a sometimes testy debate laced with double entendre, Senate lawmakers questioned whether the state should curtail freedom of expression in vehicle accessories.

Critics of the ban included the Senate Rules Chairman, Sen. Jim King, a Jacksonville Republican whose truck sported a pair until his wife protested.

The bill's sponsor doubted it would succeed.

"It's probably not going to make it through the process," Baker said on Thursday. "It won't be much of a story in a few days."
