Sunday, January 3, 2010

Strategies: Commitment vs. Adaptation


(Thomas Bayes)

One way to begin a discussion of strategic options is to consider two basic families of strategy: strategies of commitment and strategies of adaptation. The approach that is most appropriate for a given situation depends largely on the level of forecasting accuracy that can be harnessed to the problem.

In a commitment-based strategy, the decision-maker(s) conduct an analysis of the external environment and internal goals and then commit resources to a given plan of action. The "commitment" label indicates that opportunity costs are quite high---conceptually, the strategic team is making a large bet. Because of the large upfront investment involved, prudence demands that the team possess a well-calibrated, trustworthy forecasting model. If forecast accuracy is unreliable, a commitment-based strategy is going to be quite dangerous. It is my contention that the first step in the strategy-generation process is in fact to realistically assess the predictive power of the forecasting model being used.
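
As a concrete illustration of what such an assessment might look like, here is a minimal sketch (in Python, with entirely invented forecasts and outcomes) that scores a forecaster's past probability estimates using the Brier score, a standard calibration measure:

```python
# A minimal calibration check: score past probability forecasts with the
# Brier score (0.0 is perfect; 0.25 is the score of always saying 50%).
# All forecasts and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities the strategist assigned to past events...
past_forecasts = [0.9, 0.8, 0.7, 0.95, 0.6, 0.85]
# ...and whether each event actually occurred (1) or not (0).
past_outcomes = [1, 0, 1, 0, 0, 1]

score = brier_score(past_forecasts, past_outcomes)
baseline = brier_score([0.5] * len(past_outcomes), past_outcomes)

print(f"Strategist's Brier score: {score:.3f}")    # 0.338 on this toy data
print(f"Coin-flip baseline:       {baseline:.3f}")  # 0.250
# A strategist who cannot beat the coin-flip baseline on a real track
# record has a weak claim to a commitment-based strategy.
```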

There are several advantages to strategies of commitment, assuming that accurate forecasts of future states of the world are possible. The chief advantage is that successful commitment-based strategies have the largest pay-offs: scarce resources may be deployed to optimal use, and little time, energy, or money is wasted on inefficient programs. A second advantage is that typically only one large endeavor needs to be monitored at a time. In addition, strategies of commitment can have a psychological edge: there are incentives to try very hard, since failure could be catastrophic. There is a romantic, motivational element to many commitment-strategy stories, which appeal to our vision of the brilliant general, superior statesman, or world-beating CEO as a solitary genius who silently constructs grand plans in his head.

The downsides of strategies of commitment are that they can lead to spectacular losses and that they require the ability to make substantially accurate predictions. Even when forecasting at that level of precision is not possible, commitment-based strategies remain the darlings of the arrogant, stupid, and naive, even as they are often the favored approaches of the truly inspired and gifted.

In a strategy of adaptation, the decision-maker must admit up front that forecast reliability is low. As a result, the goal becomes to efficiently develop and maintain a portfolio of options---to keep costs low and options open, and to see which projects end up being winners. This can be a difficult approach to maintain in a highly politicized environment, since strategies of adaptation tend not to lend themselves to the kind of confident, populist rhetoric that wins campaigns. In fact, the admission of predictive ignorance (though, given the wealth of available evidence on forecasting performance, this humility is intellectually the most honest and best-informed stance) may be seen as the product of a weak strategic infrastructure rather than as a positive step.

This is truly unfortunate, because the history of political and economic forecasting frequently reads like something straight out of Monty Python. The accuracy levels that have been attained in these spheres are very low: often worse than random, and very seldom any better than those of chimpanzees (the seminal work of Phil Tetlock will be a subject for another day).

To make things worse, commitment strategies often involve martingale betting progressions, or doubling down on losses. Many of the most spectacular blow-ups in the investment world have occurred when traders felt that their predictive models were correct in the face of contrary evidence, and so committed more capital to losing positions in the belief that prices would "return to true value" (!) soon. A number of hedge fund failures and destructive rogue traders can be traced back to strategies of commitment combined with martingale-style betting. This is why it is vital to consider, in advance, what actions will be taken to prevent ruin if a given hypothesis is in fact completely wrong.
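
To see how quickly doubling down can end in ruin, consider the following toy simulation; the bankroll, stake, win probability, and round count are arbitrary assumptions, not a model of any real trade:

```python
import random

# Toy martingale: double the stake after each loss, on the theory that
# the position must eventually "return to true value." Every number
# here (bankroll, stake, win probability, rounds) is an assumption.

def blows_up(bankroll=1000.0, base_stake=10.0, win_prob=0.5, rounds=200):
    """True if the doubling-down bettor eventually cannot cover a stake."""
    stake = base_stake
    for _ in range(rounds):
        if stake > bankroll:
            return True            # ruin: the next double is unaffordable
        if random.random() < win_prob:
            bankroll += stake
            stake = base_stake     # small win, reset the progression
        else:
            bankroll -= stake
            stake *= 2             # double down on the loss
    return False

random.seed(2010)
trials = 10_000
ruined = sum(blows_up() for _ in range(trials))
print(f"Runs ending in ruin: {ruined}/{trials}")  # most runs, at these settings
```

Even at fair odds, the progression converts a long string of small wins into an eventual unaffordable stake: precisely the blow-up pattern described above.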

I think that the most important questions that can be asked of a would-be strategist pushing a new initiative are: 1) what evidence the strategist has of possessing a superior forecasting capability in the arena in question; 2) how he or she plans to conduct a limited, controlled test of the proposal; 3) precisely how success will be defined and monitored; 4) what contrary evidence would lead to the proposal being terminated (i.e., how much money, how many lives, etc. would need to be lost before the hypothesis was considered falsified); and 5) how concerned parties (shareholders, taxpayers, etc.) can be guaranteed that the pre-determined stop-loss level will be obeyed and that martingale progressions will be avoided.
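
These questions can even be encoded as a pre-commitment device. The sketch below is hypothetical throughout (the names, fields, and figures are mine, not a real governance system), but it shows the essential idea: a proposal records an answer to each of the five questions, and the stop-loss test is mechanical rather than negotiable:

```python
from dataclasses import dataclass

# Hypothetical sketch: a proposal object that must carry an answer to
# each of the five questions, with a non-negotiable stop-loss test.

@dataclass
class StrategyProposal:
    forecasting_track_record: str  # 1) evidence of superior forecasting
    pilot_test_plan: str           # 2) limited, controlled test
    success_metric: str            # 3) how success is defined and monitored
    stop_loss: float               # 4) loss at which the hypothesis is falsified
    enforcement_mechanism: str     # 5) who guarantees the stop-loss is obeyed

    def is_falsified(self, losses_to_date: float) -> bool:
        """Mechanical check: no appeals to 'underfunding' permitted."""
        return losses_to_date >= self.stop_loss

proposal = StrategyProposal(
    forecasting_track_record="Brier score of 0.12 over 40 comparable calls",
    pilot_test_plan="six-month trial in a single region",
    success_metric="access up 10% at constant cost per patient",
    stop_loss=50_000_000.0,
    enforcement_mechanism="independent auditor with shutdown authority",
)
print(proposal.is_falsified(losses_to_date=62_000_000.0))  # True: kill it
```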

In the political sphere, the requirement to consider falsifiability is most important. I will avoid the most obviously problematic case (warfare) and consider instead a public health care effort that will supposedly cost $500 billion and that is meant to show certain improvements in health care access and/or quality of service over the next three years. Now let's say that these improvements do not materialize: the original investment thesis briefed well and sounded plausible, perhaps even exciting, but was actually wrong (despite the good intentions of its architects). As a bad idea, the program should be killed and the famous sunk-cost fallacy avoided.

Alas, the ability to terminate a losing "trade" made with public funding may be completely compromised, as stakeholders in the program will eternally make the argument that the reason for the lack of results is that the program was underfunded. This argument can be difficult to formally disprove. Thus, we could expect more and more resources to be deployed in a stubborn, ultimately foolish set of martingale betting progressions, all because politicians and bureaucrats were not equipped to consider the perils of commitment strategies when coupled with the low predictive accuracy rates common to models of complex systems.

If we allow ourselves to become even more cynical, we might even come to expect that the goal of many public-program enthusiasts is simply to get a favored program started, using performance projections that are almost certainly supremely optimistic, since the proponents know that, once in place, the program will be vampire-like in its resistance to kill attempts. Furthermore, when it fails to meet the original projections, the program may then be expanded, ad nauseam, via the wildly popular "underfunded...let's double down and play again" martingale argument. Over time, with multiple iterations of play involved, the best-funded surviving programs may, rather perversely, be those that are the largest failures.

Most works on strategic decision-making ultimately do stress strategies of commitment, usually by focusing on analytical tools or processes that are meant to aid in the creation of effective strategies. However, several important recent business books---The Halo Effect by Phil Rosenzweig, Fooled by Randomness by Nassim Taleb, The Management Myth by Matthew Stewart, and The Strategy Paradox by Michael Raynor---have described how strategies of commitment can be misapplied.

For example, Raynor shows how companies that have breakout successes and companies that implode often have much in common, as both cases typically involve bet-the-farm strategies of commitment (in one case the bet worked out; in the other it did not). Taleb and Rosenzweig describe the self-serving attribution error we typically commit when reviewing our own results---believing that our successes were the product of genius or hard work, while our failures were due to bad luck or hostile conspiracies. They also describe how an entire pop-management publishing phenomenon has developed around the study of winners and the construction of ex post explanatory narratives that breathlessly attempt to show how favorable outcomes were generated by skillful leadership decisions, when in fact the results were often produced by prediction-blind, quite risky strategies of commitment that randomly happened to encounter fair winds. Stewart's critique is perhaps the most devastating: it describes the false precision, inane technical jargon, and pseudoscience that characterize much of the strategy-peddling profession, and how these can lead to unwarranted levels of confidence and large investments in committed strategies that are inappropriate for the true uncertainty landscape.

Adaptive strategies involve experimentation and small-scale trial-and-error processes: bottom-up tinkering and searching rather than top-down central planning or grand strategic designs developed by an elite intellectual cabal during a corporate retreat. The key to establishing an adaptation-friendly environment is to allow decentralized risk-taking and entrepreneurship on the one hand, and, on the other, to have a feedback mechanism, a source of selection pressure, in place to assist in the efficient allocation of resources towards the good ideas, as they emerge, and away from the bad ones.

In other words, the architects of adaptive strategies should be primarily concerned with giving away decision-making power and unleashing independent "searchers" (to use development economist Bill Easterly's phrase) to develop and execute ideas that sound promising. Many of these ideas, perhaps the majority, will end up failing, a few will break even, and a very small number will be hugely successful (more than paying for the losers, provided that the losses on losing ideas are kept contained and martingale "reward the bad ideas" policies are avoided).
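
A toy model of that payoff structure may help; every distribution and parameter below is invented purely for illustration:

```python
import random

# Toy adaptive portfolio: many small, capped bets instead of one big one.
# The payoff distribution below is invented purely for illustration.

random.seed(7)
SEED_COST = 1.0        # the capped loss per experiment
N_EXPERIMENTS = 100

def experiment_payoff():
    """Most ideas fail, some break even, a rare one pays off hugely."""
    r = random.random()
    if r < 0.70:
        return 0.0                                 # failure: seed cost lost
    elif r < 0.95:
        return SEED_COST                           # break-even
    return random.uniform(10, 50) * SEED_COST      # rare big winner

net = [experiment_payoff() - SEED_COST for _ in range(N_EXPERIMENTS)]
print(f"Worst single outcome: {min(net):.1f} (loss capped at the seed cost)")
print(f"Portfolio net total:  {sum(net):+.1f}")
```

The crucial design choice is the cap: the selection mechanism only works if losers are actually cut off at the seed cost rather than doubled down upon.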

For a decision analyst, the key quantitative component of this strategy is, in my opinion, a fairly detailed understanding of something called Bayes' Theorem. Bayes' Theorem will be the subject of a series of upcoming posts.
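
For readers who want a preview, the theorem itself is compact; it prescribes exactly how a prior belief in a hypothesis should be revised as evidence accumulates:

```latex
% Bayes' Theorem: the posterior belief in hypothesis H after evidence E.
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```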
