Sunday, November 28, 2010

Gnarls Barkley on Violin

Definitely one of the coolest things I've seen in a while...

Thursday, September 23, 2010

Muppet Violence and "Greater Fool" Theory



An interesting philosophical question emerges from this. Who is the true idiot in this clip? On initial review, the predatory monster would seem to be an imbecile for thinking that a bunny disguise would be even remotely convincing (he is roughly ten times the mass of his bunny prey, has horns, is clearly an obligate carnivore, carries an enormous killing club, stares with unblinking and greed-crazed eyes, etc.). However, his tactics seem to be highly effective in practice. Is he a lucky fool who simply stumbled across an even more ignorant form of life to prey upon, or a clever game theorist who predicted this outcome? Perhaps the song lulled the unsuspecting rabbits into the illusion that this freak was interested in inter-species camaraderie?

Wednesday, August 18, 2010

Phoenix Program 2.0, Kilcullen's Way, Convexity, Paramilitary Hedge Funds


(grainy photo of SAS-trained Army Special Forces operator Richard Meadows preparing to lead the assault team of the Son Tay Raid---note the early Single Point Occluded Eye Gunsight on his CAR-15. One of the greatest soldiers America has ever produced, Meadows spent his entire career in Special Forces and Ranger units. He served with MACV-SOG in Vietnam, was a founding member of the Army's national-level Special Mission Unit, and conducted still-classified paramilitary operations for the CIA both in Vietnam and after his retirement from the Army in 1977. A key element in the man's success was his mastery of the prisoner-snatch operation)

Quick Summary of Today's Key Point: Traditional raid theory seeks to minimize time on target in order to lower the raider unit's vulnerability to larger enemy forces. Counternetwork raids that target isolated insurgent cells are less vulnerable to this threat, but must also develop a more thorough and skilled post-hit Sensitive Site Exploitation (SSE) capability in order to collect intelligence that can provide a lead to the next cell or to a highly networked individual (money-launderer, opium- or weapons-transportation hub, etc.). Quantitative trading and hedge fund strategies may provide some conceptual insights into how a campaign of exploitation missions could be designed.

Find, Fix, Finish, Exploit, Assess (F3EA)

The modern incarnation of the Vietnam War Phoenix Program (described briefly in the last post, in case you missed it and are not familiar with the program) might be the “F3EA” framework now being employed by Joint Special Operations Task Force members in combat theaters overseas. F3EA developed from the ad hoc manhunting programs that had targeted enemy VIPs. The idea behind it is to embed “Exploit” and “Assess” components in the typical mission profile, with the Assess phase feeding the next mission cycle directly. As with Phoenix and unilateral SEAL platoon operations in Vietnam, F3EA prioritizes speed, agility, and organic intelligence-gathering over high-level planning and resources.

Decomposing F3EA into its constituent building blocks, we see that “Find, Fix, and Finish” describes the typical military goal of pinning an enemy force against the tactical equivalent of an immovable object (the “anvil”) and then smashing it with overwhelming combat power (the “hammer”). The “Exploit” and “Assess” components are what really differentiate F3EA from traditional direct action operations.

“Exploitation” is a buzzword for efforts that gather actionable intelligence. “Assess” is the process of analysis and development of that intelligence to form a new mission concept---the military equivalent of what scientists mean by a properly specified hypothesis. The innovation of F3EA is to house the direct action components and the intelligence components under one roof, with the expectation that successful missions will be those that are able to generate spin-off missions.

One of the better descriptions of F3EA comes from David Kilcullen:

But make no mistake, counterinsurgency is war, and war is inherently violent. Killing the enemy is, and always will be, a key part of guerrilla warfare. Some insurgents at the irreconcilable extremes simply cannot be co-opted or won over; they must be hunted down, killed, or captured, and this is necessarily a ruthless practice conducted with the utmost energy that the laws of war permit. In Iraq and Afghanistan since 9/11, we have experienced major success against terrorists and insurgent groups through a rapid twenty-four-hour cycle of intelligence-led strikes, described as "counternetwork operations," that focuses on the most senior leaders. This cycle, known as "Find, Fix, Finish, Exploit, Assess" (F3EA) has proven highly successful in taking networks apart, and convincing senior enemy figures that they simply cannot achieve their objectives by continued fighting. This approach fuses operations and intelligence and, though costly and resource intensive, can generate a lethal momentum that causes insurgent networks to collapse catastrophically.

Kilcullen's Way


(Australian counterinsurgency guru David Kilcullen)

One of the true leading intellectual lights in the counterinsurgency literature, former Australian Army officer David Kilcullen has served as a senior advisor to General Petraeus in Iraq and General McChrystal in Afghanistan, and as an adjunct professor at Johns Hopkins SAIS, one of the most prestigious international studies/security studies programs in the United States.

In the books The Accidental Guerrilla and Counterinsurgency, Kilcullen provides an engaging and very practical account of the lessons he learned in East Timor, but also takes a more conceptual view of the counterinsurgency process and the analytical frameworks that can be brought to bear to aid in understanding. Kilcullen explains how a global insurgency movement requires new tools:

Although counterinsurgency is more appropriate than counterterrorism in this conflict, traditional counterinsurgency techniques from the 1960s cannot simply be applied to today’s problems in simplistic or mechanistic fashion. This is because counterinsurgency, in its “classical” form, is optimized to defeat insurgency in one country, not to fight a global insurgency...

At the global level, no world government exists with the power to integrate the actions of independent nations to the extremely close degree required by traditional counterinsurgency theory; nor can regional counterinsurgency programs be closely enough aligned to block all insurgent maneuver. This is particularly true when the enemy---as in this case---is not a Maoist-style mass rural movement but a largely urban-based insurgency operating in small cells and teams with an extremely low tactical signature in the urban clutter of globalized cities. In today’s international system, a unified global approach---even only in those areas directly affected by Al Qa’eda-sponsored jihad---would be intensely problematic. It would demand cooperation far beyond anything yet achieved between diverse states.



(this well-funded, very highly trained counternetwork warfare specialist has no jurisdictional limitations, but Kilcullen is concerned that, in the real world, most operators do)

Kilcullen recommends that sophisticated counterinsurgency professionals study Complex Adaptive Systems and networks. He notes that systems have transcendent, emergent properties that are not visible when analysis is limited to their component parts. A good COIN systems analyst, like a good hedge fund manager, will have the ability to view the situation from both macro and micro scales, often generating deeper insights and understanding from the "gestalt switch" that can occur as a problem is viewed at multiple scales and through the prism of multiple disciplines.

Kilcullen:

As shown, the global jihad is a series of nested interactions---insurgencies within insurgencies and networks within networks. So it is important to understand which is the "system in focus": an individual group, a localized insurgency, a regional jihad, or a global insurgency as a whole. Most analysis of Iraq treats the problem in terms of single-country classical counterinsurgency. That is, the "system in focus" for most analysts is the Iraqi theater, and links to the broader Middle Eastern jihad or global insurgency are secondary. Lacking a complex systems perspective, some analysts appear to assume that the "system in focus" is all that exists, whereas (as shown) the true danger of individual jihad theaters is their aggregated effect at the systemic level as a global insurgency.

Looking for solutions, Kilcullen invokes the Vietnam War era's CORDS and Phoenix Programs as a useful model for disrupting the nodes and energy flows of the insurgent system:

De-linking would result in actions to target the insurgent infrastructure that would resemble the unfairly maligned (but highly effective) Vietnam-era CORDS program. Contrary to popular mythology, this was largely a civilian aid and development program, supported by targeted military pacification operations and intelligence activity to disrupt the Viet Cong infrastructure. A global CORDS program (including the other key elements that formed part of the successful Vietnam CORDS system) would provide a useful starting point for considering how disaggregation would develop in practice.

Tactical Implications: Relative Superiority in the Counternetwork Warfare and "Manhunting" Contexts

A series of tactical implications would come with a policy that really pursued decentralized counternetwork operations. One such implication concerns how the assault force views the temporal aspects of a prisoner-snatch mission---the urgency with which the team exfiltrates the target area after conducting the mission. As typically taught in elite light infantry and special operations schools, the unilateral supermission approach relies heavily on established procedures for conducting raids, many of which were originally developed for sabotage and direct action missions during WWII. It generally seeks to minimize the raiding force's "time on target".

Raiding procedures are typically based on the assumption that the raiding force is operating behind enemy lines. The team has a temporary advantage due to surprise and agility, but this advantage suffers from serious time decay: a small commando unit is not equipped to hold terrain, so it cannot remain in place on the target for long once contact with the enemy has been made. The advantages of speed, surprise, and sudden violence---referred to by Bill McRaven and others as “relative superiority” (RS)---must be gained early in the mission and used to accomplish whatever the team’s objective is. RS confers a fleeting tactical advantage that must be exploited before it dissipates and the outcome of the engagement is decided by “absolute superiority” (AS).

The Mathematics of War: Attrition and the Lanchester Combat Model

AS can be modeled using the famous Lanchester combat equations, a set of differential equations that reveal how, in attrition-style fights with either shock weapons or ranged weapons, an initial asymmetry in force strength can quickly turn into a bloodbath for the smaller force. For ancient combat, the Lanchester model is often referred to as "Lanchester's Linear Law", while the modern combat version is the "Lanchester Square Law."

For a contemporary military operator, the critical insight of the Lanchester equations is how disproportionately casualties mount for the smaller force when forces of roughly equal technological sophistication, equal situational awareness, and unequal size clash head-to-head. (There are many more sophisticated variations of the Lanchester model that attempt to take into account differences in technology---this is where the oft-abused term "force multiplier" actually comes into practice, as a superior technology is converted into a numerical equivalent and then used as an input to the original model. For example, a force of 10 men that has a technological coefficient of 1.5 would, per Lanchester, be equivalent to a force of 15 equipped with baseline technology.)
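For readers who want the underlying math, a standard statement of the two laws (with A and B the force sizes, and α and β the per-man effectiveness coefficients of sides A and B) runs as follows:

```latex
% Square law (modern aimed fire): every shooter can engage any target
\frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A,
\qquad \text{conserving} \quad \alpha A^2 - \beta B^2 .

% Linear law (ancient shock combat): engagements are one-on-one
\frac{dA}{dt} = -\beta A B, \qquad \frac{dB}{dt} = -\alpha A B,
\qquad \text{conserving} \quad \alpha A - \beta B .
```

Under the square law, combat power scales with the square of force size: side B can fight side A to mutual annihilation only if βB₀² = αA₀², which is why a modest numerical edge compounds into a lopsided exchange of casualties.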

Note that Lanchester equations are really for straightforward attrition fights; they are not meant for modeling scenarios in which one side continually stalks and ambushes another. Particularly in situations of protracted guerrilla warfare, modified versions of the Lotka-Volterra predator-prey model from ecology may be more appropriate as starting points, with Lanchester being reserved for a contingency in which the ambushed force is able to effectively fight back.
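For comparison, the textbook Lotka-Volterra system (with x the prey population, y the predators, and all coefficients positive) is:

```latex
\frac{dx}{dt} = a x - b x y, \qquad \frac{dy}{dt} = -c y + d x y
```

The xy interaction terms make predator attrition of the prey depend on contact between the two populations, which is a closer analogue to stalk-and-ambush warfare than the stand-up exchange of fire assumed by Lanchester.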

The major goal of a smaller force is to escape from a Lanchester-governed engagement at all costs, particularly as a unit must operate at some minimum size in order to be able to evacuate casualties without suffering an enormous maneuver penalty (normally four men are needed to move one seriously wounded man if distances and speeds of any tactical significance are to be sustained). Thus there are tipping points in an engagement at which a smaller force, having sustained casualties, must either abandon some wounded comrades, move out with all hands and face a significant maneuver disadvantage, or remain fixed in place and suffer the full brunt of Lanchester's cold, mathematical onslaught.

There is an equivalent to Lanchester in economics and game theory (modeled using von Stackelberg and Cournot duopolies), and it reveals how competition can cause corporate profit margins to approach zero, with the lowest-cost, highest-volume producer---the numerically superior combatant in Lanchester terms---eventually seizing market share by underpricing all competitors. Strategy guru Michael Porter created his “Five Forces Model”, now taught in virtually every MBA program, to provide a framework by which corporate strategists could attempt to avoid a head-on competitive clash with other companies and the typical collapse in profit margins that would ensue from commoditization of products and intense competition for market share (and which would ultimately be good for consumers, by the way).


(Michael Porter's "5 Forces" framework examines the economic forces that can affect a company's ability to impose monopoly pricing power on consumers. His first book was amusingly titled Competitive Strategy because publishers balked at his intended, more honest original title: Anti-Competitive Strategy. In a sense, Porter's approach is to try to thwart the natural selection pressures of the free market ecosystem and to find a protected niche in which economic rents can be enjoyed because competition has broken down)

In much the same way that Porter looks to avoid the collapse of profit margins that results from economic competition, the relative superiority approach to warfare articulated by McRaven is an attempt to systematically avoid the attrition trap of the Lanchester combat equations. Attrition-style warfare is correctly identified as posing a number of extremely perilous scenarios for small units.


(McRaven's relative superiority graph---the shaded region represents a commando unit's vulnerability. The graph begins at 0,0 with contact with the enemy, and depicts a drag race between time, the probability of mission completion, and the SOF team's establishment of relative superiority (RS). RS is depicted as a horizontal line across the middle of the rectangle; once RS is achieved, the SOF team's chances of mission success shoot up vertically. As one moves to the right, representing the passage of time, the unit's failure to achieve RS would cause its effectiveness to decay and the unit to become more and more vulnerable. The shaded region would expand to fill the space and could be thought of as a crushing weight that would ultimately suffocate the commando unit).

McRaven:

...An inherent weakness in special forces is their lack of firepower relative to a large conventional force. Consequently when they lose relative superiority, they lose the initiative, and the stronger form of warfare generally prevails.

The key to a special operations mission is to gain relative superiority early in the engagement. The longer an engagement continues, the more likely the outcome will be affected by the will of the enemy, chance, and uncertainty, the factors that comprise the frictions of war.

Simply put, the chaos that a small force can create through speed, surprise, and violence of action gives it an advantage in the early stages of contact. As time goes on, the enemy force is able to develop situational awareness and to start employing its numerical advantage to overwhelm the special warfare unit. This assumption is perhaps exemplified by the heavy use of stopwatches during mission rehearsals: a designated operator, often a senior NCO, calls out the "time on target" at regular intervals so that the team knows it needs to finish the job and leave the area as quickly as possible. There is little opportunity for “Exploitation and Assessment” to be conducted at the target, and this poses the central tension for F3EA-type counternetwork operations, because we know from criminal forensics in the law-enforcement world that detailed crime-scene work is crucial for developing evidence and leads, particularly when multinational criminal syndicates and complex enterprises are involved.

The minimal-time-on-target philosophy is optimized for missions that involve a short, sharp, one-sided combat operation followed immediately by the setting of explosives to destroy an enemy facility or infrastructure; when it meets missions that involve nuanced and detailed intelligence gathering, the result can be an unpleasant "smash & grab" style that emulates despised death squads and risks alienating members of the local populace. Because there is no time for nuance, large groups of prisoners and witnesses---many of whom may be entirely innocent---are rounded up and whisked away to a support facility, at which point they are handed off to a group of professional interrogators.

I run the risk of presenting an oversimplified, cartoon version of the standard RS interpretation, but I think the important point is that much of commando raiding philosophy is built on the assumption that the commando force has absolute inferiority---largely based on numbers---to a conventional military opponent, and thus the commando force cannot linger in a fixed position once enemy contact has been made. In other words, commando teams would be advanced representations of insurgency tactics.

In contrast, a commando team operating in a counter-insurgency role may in fact reliably have numerical superiority over the small insurgent/terrorist cells that it targets.

Relative Superiority and Air Combat Maneuvering


(one of my heroes: Triple-Ace and professional badass Robin Olds, then a MiG-killing colonel in Vietnam, was perhaps America's premier fighter pilot---259 combat missions, 16 air-to-air combat victories---and the only man inducted into both the National Aviation Hall of Fame and the College Football Hall of Fame. Olds defied regulations in many ways, from his flamboyant handlebar mustache to his hard-drinking social life and penchant for singing ribald fighter pilot songs. He had an extremely advanced tactical mind, was a student of decision science, and, interestingly enough, was not a great fan of the theoretical work of fellow USAF officer and dogfighting enthusiast John Boyd)

When larger units, even entire campaigns, are involved, relative superiority is typically examined via maneuver warfare theory. The goals of maneuver warfare are much the same: avoid static, attrition-style fights and bloody frontal attacks on prepared enemy positions and look to rapidly exploit pockets of weakness that appear---areas of “relative superiority”---in order to converge on the enemy’s headquarters apparatus. The colorful descriptive phrases that are associated with maneuver warfare---“cutting off the head of the snake”, etc.---reinforce this goal of targeting the centralized decision-making capability of the enemy organization.

As we discussed in the last post, the assumption that the enemy force is organized with a coherent management structure in place can become a dangerously convenient conceit and can result in mission priorities that target leaders who in fact have little or no control of the larger organization.

Much of the theoretical work on maneuver warfare and mental agility in this country has started by focusing on attention-decision-action loops that take place within a single, discrete engagement (air-to-air combat being the prototypical engagement and John Boyd’s OODA being the prototypical model), and then developing first principles for dominance that can be scaled up for application to much larger units.


(John Boyd's famous OODA loop. While various critics have complained that OODA is, like Freudian psychology, guilty of failure to make falsifiable predictions, it is still generally regarded as a useful starting place because of its accessibility as a standard model for battlefield decision-making)

As theorists move from the micro to the macro, the analysis has tended to become more conceptual in nature and has described ways to enhance the exploitation/OODA qualities of a military force by decentralizing combat leadership responsibilities and empowering small maneuver units. Small units in the field, having immediate access to information and being less vulnerable to Clausewitzian fog, can make rapid decisions and act to exploit opportunities as they appear.

However, the decentralization of the decision-making process has generally been limited to the tactical side: higher, centralized command and control entities generally still provide mission tasking, review mission plans (via Briefback or some similar management review process), provide force lists of available assets (helicopters, gunship support, etc.), and conduct after-action debriefs.

Sophisticated counternetwork operations may have to take this a step further, and may increase the decentralization and empowerment of the man in the field by further integrating mission tasking, mission planning, and after-action reporting into his sphere of control. This would require extremely entrepreneurial individuals, qualified in many disciplines, and a national command authority that is prepared to take a hands-off approach to certain aspects of global counterinsurgency.

Key Issue: Time on Target

In practical terms, the network model of raiding may initially entail the same tactics, techniques, and procedures as the supermission, but things diverge once the target has been cleared safe. Rather than emphasizing a speedy withdrawal from the target, a counternetwork team may have to secure the area and switch to a detailed intelligence-gathering mode. In some cases, interrogations may take place on the target or in an area that is adjacent to it. Computers and documents would be seized for immediate exploitation.

The counternetwork team is thus less concerned with getting away before a stronger enemy force arrives and fixes it in a Lanchester death trap (the central preoccupation of McRaven’s model), and more concerned with occupying the target, perhaps for a dangerously extended period by traditional time-on-target standards, in order to get information that can lead to the next cell. This shift occurs because counterinsurgency involves isolated cells of enemy personnel rather than engagements with enemy main force units. In a counterinsurgency paradigm, enemy main force units have virtually no chance of escaping detection by US telemetry assets, and even less chance of successfully engaging US combat forces in direct combat. The threats posed by an insurgency are those of elusiveness and speed.

Gamma, Mission Performance, and Vulnerability to Extreme Events

One interesting aspect of the RS framework is the implication of the small team’s exposure to “tail risk”, or extreme events. The idealized RS-based mission has the team prepare perfectly in advance and then use its advantages in specific preparation, situational awareness, and surprise to run through a sleepy and complacent enemy force. The payoffs to the commando team come up front in the form of relative superiority and can deteriorate if the team remains too long on the target---the payoff function resembles that of a gang of bank robbers.

Viewing this through the prism of options theory, we might say that McRaven's minimal time-on-target RS approach has a mathematical property called negative gamma, while a more specifically developed network approach would have positive gamma.

Gamma is a term from quantitative finance that refers to the way that an option’s riskiness (really an option's delta, but let's keep things straightforward) responds to changes in the underlying asset (say, the option is based on U.S. S&P 500 index futures). An easy, admittedly overly simplistic way to look at it is to say that a positive gamma, or long gamma, position becomes MORE valuable as volatility increases. Positive gamma trades tend to be more valuable when there is more time available.

A negative or short gamma position entails an option that becomes less valuable as volatility increases (maintaining short gamma positions costs you more under more volatile conditions). Short gamma traders are looking to get paid upfront---in a sense, to follow McRaven’s dictum and achieve RS early in the engagement---and then they watch the clock very carefully. They hope to be in and out of the market before an extreme event occurs, as their losses can be very large. In contrast, the long gamma trader wants more and more time on his side and hopes for an extreme event to happen on his watch, because he will tend to make money in a wild-volatility environment. The difficulty for the long-gamma position is that it can be expensive to maintain one for long periods of time if market volatility ends up being low.
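A minimal numerical sketch of the distinction, using the textbook Black-Scholes formulas (all parameter values below are invented for illustration): a long at-the-money straddle is the classic long-gamma position, and marking it to market after a sudden jump in the underlying shows the convex payoff, while the short side of the same trade shows the negative-gamma mirror image.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_put(S, K, T, r, sigma):
    # via put-call parity
    return bs_call(S, K, T, r, sigma) - S + K * math.exp(-r * T)

def straddle(S, K, T, r, sigma):
    # long one call and one put at the same strike: the classic long-gamma trade
    return bs_call(S, K, T, r, sigma) + bs_put(S, K, T, r, sigma)

def bs_gamma(S, K, T, r, sigma):
    # gamma (rate of change of delta) is identical for calls and puts
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return norm_pdf(d1) / (S * sigma * math.sqrt(T))

# Invented illustrative parameters: at-the-money options, three months to expiry
S0, K, T, r, sigma = 100.0, 100.0, 0.25, 0.02, 0.20
cost = straddle(S0, K, T, r, sigma)
print(f"straddle cost {cost:.2f}, gamma {bs_gamma(S0, K, T, r, sigma):.4f}")

# Mark the position to market immediately after a sudden jump in the underlying
for jump in (-10.0, 0.0, 10.0):
    pnl = straddle(S0 + jump, K, T, r, sigma) - cost
    print(f"jump {jump:+5.1f}: long-gamma P&L {pnl:+6.2f}, short-gamma P&L {-pnl:+6.2f}")
```

The long straddle gains on a jump in either direction; the short straddle collects the premium when nothing happens and takes the symmetric losses when the market moves violently.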

To conceptualize a short gamma business model, imagine an insurance company that specializes in writing homeowner's insurance policies for coastal Florida. If the company was able to somehow shorten the length of the Atlantic hurricane season, it almost certainly would love to do so, since the hurricane season leaves the company vulnerable to catastrophic losses.

For long gamma, think of a venture capital group that has made numerous small investments in promising start-up companies. The firm has limited risk in any one investment. It knows that it can take a while for things to work out, for connections to be made, for start-ups to gain knowledge through trial-and-error experimentation. Start-ups tend not to have advantages in the beginning---the successful ones will gain traction as they mature. The more time these small, vulnerable companies can stay alive, the better the chances that one of them will make it big and serve as the wildly profitable, breakout extreme event that the venture capital firm depends on seeing every now and then.

Gamma and Sensitive Site Exploitation (SSE)

The long gamma mission concept is one that has quite limited objectives in terms of pre-mission planning, but which contains an embedded capacity for very rapid exploitation, analysis, and new mission planning if the unit catches a lucky break.

In the current global counterinsurgency campaign, examples of these tail events are numerous. Gretchen Peters describes one of them in her study of the links between the opium trade and insurgents in Afghanistan and Pakistan:

U.S. counternarcotics agents raided a drug smugglers' lair in Kabul, where they confiscated a satellite telephone. When the CIA ran numbers stored in its memory, they discovered the telephone had been used repeatedly to call suspected terrorist cells in western Europe, Turkey, and the Balkans.

In the context of a global counterinsurgency campaign, a positive-gamma portfolio may entail having units in the arsenal that can operate in disturbing ways. This could entail a number of privileges that have historically been subject to abuse, including the authority to employ bribes, favors, and prostitutes to create effective local agent networks; the ability to conduct “vigorous and time-sensitive” field interrogations on the scene and to break any number of international laws in the hot pursuit of cells and individuals; access to quasi-legal or illegal intelligence collection sources (cell phones, e-mails); an assassination capability; UNODIR mission-planning freedoms; and the expensive and permanent attachment of transportation assets for conducting paramilitary expeditions. Suffice it to say that the potential risks involved in creating such a dangerous attack dog and then unleashing it on foreign shores may be difficult for policymakers to willingly take on.

Partnering with Pirates: Can Hedge Funds Help?

Systems theory, Complexity, long-tailed distributions, and power laws have become virtual buzzwords among national security academics and bloggers, and I lost track of the number of times I heard the phrase "black swan" used at a conference of terrorism think-tankers I attended a few years ago. One popular term for the systems-rich environment is "VUCA"---"Volatility, Uncertainty, Complexity, and Ambiguity".

I rather like VUCA and think that it is a useful term. However, it is one thing to be aware of these things on an abstract, conceptual level, and another to be able to determine how these phenomena must be monetized, tactically, in the real world. One will frequently find that think-tank academics and public policy types want to use all the right vocabulary words and appear knowledgeable about the latest scientific-sounding analytical tools and frameworks, but are unwilling to make any statements that could lead to testable predictions or to take a hard position regarding the many trade-offs and concerns that come with campaigning in the VUCA environment.

In this regard, I agree with Joan Johnson-Freese of the Naval War College that some percentage of military officers “should be trained to think like hedge fund managers”. Obviously what Joan means by “hedge fund thinking” needs to be fully defined, but I think her general opinion is that good hedge funds specialize in finding profitable trading opportunities in the particular markets and asset classes that they cover.

That said, it is important to note that many hedge fund guys run portfolios based on convergence plays---they make money from normalcy and lose money in extreme events. In a power law environment, that strategy eventually runs into trouble. In the case of a counterinsurgency campaign, that strategy would not be poised to quickly take advantage of unexpected exploitation windfalls.

I believe that the members of the hedge fund community that have some things to offer the policy academics and national security brain trusts will tend to be found in the directional global macro strategy space, where markets are viewed in holistic terms as ecosystems with many complex, interdependent parts (as opposed to, say, a very detail-oriented stock-picking strategy that seeks to isolate a single company and place it in a strategic vacuum). Although few of them have any operational qualifications to speak of, global macro managers tend to look to many different disciplines for inspiration and techniques, so they probably are better equipped to meet the operator and security think-tank communities halfway and to offer ideas that may be relevant to COIN.

(The possibility of cross-pollination between the worlds of trading and military affairs is not a unique view: Marine Corps senior officers have spent time with New York Mercantile Exchange traders (courtesy of decision theorist Gary Klein). Nassim Taleb has participated in intellectual exchanges with military and intelligence community professionals, and so on).

Illustration of Global Macro Thinking

To get a further sense of how hedge fund managers may try to construct trades that will profit from extreme events, here is a short trade discussion video featuring Taleb, who probably needs no introduction here, and the enfant terrible (but very smart) Scottish global macro fund manager Hugh Hendry. The two men have very different views on economic conditions and emerging risks and opportunities, but both want to retain extreme event-profit exposure in their portfolios:


(at the risk of taking an overly long, rambling detour from the subject at hand, I will go into a bit more detail on how a global macro hedge fund manager like Hendry sets up a big trade. I believe that the mysterious low-risk/high-payoff trading opportunity that Hendry alludes to at the end of the clip involves buying credit default swaps on Japanese corporations, particularly utility companies, that have high rollover-debt requirements.

(At almost 200%, Japan's debt/GDP ratio is the second highest in the world---after Zimbabwe's---and the government has been able to finance this debt at extremely cheap rates because of "special" access to massive Japanese domestic savings. The situation in Japan has now become dire---tax receipts have dropped about 15% and expenses are double receipts. Worse, demographic realities and years of zero-percent interest rates are driving Japan's savings rate down to very low levels, which will eventually force Japan to finance its debts in the international bond markets. Higher interest rates could cause Japan's debt service payments alone to converge on, perhaps even exceed, its revenues, which would probably force a restructuring that would have terrifying implications for investors and institutions all over the world.

(An advantage that the Japanese government has over, say, the United States is that much of Japanese government debt was issued in longer-term maturities. However, many Japanese corporations have short-term debt that must be rolled over at the same time that the voracious Japanese government is desperately trying to sell large amounts of its own debt to cover its structural deficits. Hendry believes that this will increase the borrowing costs of Japanese corporations, and so he has purchased credit default swaps that will pay off in the event that the corporations run into financing trouble.)

Convexity

In simplest terms, developing the ability to exploit fat-tailed statistical distributions entails becoming skilled at constructing and maintaining a portfolio of long option positions. Long options present positive convexity, which means that extreme events will tend to represent limited-liability profit opportunities rather than catastrophic periods of large losses. However, being able to capture extremes in a stochastic environment usually means that prediction is impossible, so tactics, techniques, and procedures that give operators the ability to recognize and exploit unexpected opportunities---trendfollowing, as described in an earlier post on disaster prediction and trading systems---become critical.



Imagine that, on each of the above graphs, the vertical axis depicts profitability of a hedge fund strategy and the horizontal axis depicts a continuum of market price changes, starting with very negative price changes on the far left, zero change in the middle, and then moving to very positive price changes on the far right. The concave profit function depicts a strategy that makes money when market prices essentially do nothing---the profit peaks at zero change, the mid-point of the price change continuum. The convex profit function has two peaks, because it makes money at the positive and negative extremes, when markets rise or fall in large, divergent moves.
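Stylized versions of the two profit functions might be written as follows, with Δp the market price change and c the cost of carrying the position:

```latex
f_{\text{concave}}(\Delta p) = c - |\Delta p|, \qquad
f_{\text{convex}}(\Delta p) = |\Delta p| - c
```

The concave trader collects c when nothing happens and bleeds as |Δp| grows; the convex trader pays c for the position and is rewarded at the extremes.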

Living on Kurtosis: How Crisis-Hunting Hedge Funds Shoot Moving Targets



(top 3-gun competitive shooters like Bennie Cooley have three different techniques for engaging moving targets)

Casual and abstract connections are easy to make, but linking Black Swan hedge funds and counternetwork operations in a meaningful way means trying to identify how the fund managers make money from extreme events and how these practices can somehow be extended to the very different world of commando units and vicious urban fighting. To attempt to describe how hedge funds and Commodity Trading Advisors (CTAs) try to systematically profit from tail events in the markets, I will run the risk of the strained analogy and recruit some ideas and terms from yet a third disparate group---the competitive shooting community. I have had the opportunity to train with a few great competitive riflemen, skeet- and trap-shooters, including Bennie Cooley and King Heiple, and was taught that there are three ways for a rifleman or shotgunner to engage moving targets: the ambush, the sustained lead, and the swing-through. Interestingly, these techniques seem to correspond to those used by hedge fund managers to capture profits from extreme events.

-In the ambush technique, the shooter sets his sights on a location that the target must move through in the future. As the target moves into this space, the shooter engages it. An obvious problem with the ambush is that a fast-moving target can suddenly appear in the shooter’s sights and force a sloppy trigger pull. For this reason, the ambush technique is best used on slow-moving targets at long ranges; the advantage of the ambush is that the shooting position can be prepared to a very high degree.

The ambush equivalent in the trading world would be the type of strategies that are employed by options traders like Nassim Taleb. Taleb uses deep out-of-the-money straddles---option strategies that involve purchasing a call and a put and which pay off if market prices move violently up or down---to profit from extreme directional moves either up or down. He made his reputation on one such trade: Nassim had purchased a large number of deeply out-of-the-money (and very cheap) options on Eurodollar futures contracts prior to the spectacular market crash of October 1987, and these options provided the equivalent of years of profitable trading when Eurodollars spiked during the crisis.
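A toy expiry-payoff calculation illustrates the ambush profile (the numbers here are invented, not Taleb's actual positions; in practice the "deep out-of-the-money straddle" is typically built as a strangle, a put struck well below the market plus a call struck well above it):

```python
def otm_strangle_pnl(S_T, put_K, call_K, premium):
    """Expiry P&L of a long deep out-of-the-money put plus call ("strangle")."""
    return max(put_K - S_T, 0.0) + max(S_T - call_K, 0.0) - premium

put_K, call_K, premium = 80.0, 120.0, 0.50   # strikes 20% away from a market at 100
for S_T in (100.0, 110.0, 60.0, 145.0):      # quiet, modest rally, crash, melt-up
    pnl = otm_strangle_pnl(S_T, put_K, call_K, premium)
    print(f"expiry price {S_T:5.1f}: P&L {pnl:+6.2f} ({pnl / premium:+5.1f}x premium)")
```

The position loses its small premium almost all of the time and returns a huge multiple of it when prices move violently in either direction, which is exactly the payoff shape of a patient, pre-sighted ambush.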

-The sustained-lead technique has the shooter tracking a moving target with his muzzle and keeping slightly ahead as he smoothly depresses the trigger. The natural pause of the movement as the trigger breaks causes the sights to converge on the target as the weapon fires. This helps to avoid the major problem that shooters face when engaging moving targets: the tendency to get on the trigger too late and to shoot behind the mover.

CTAs that employ certain quantitative “signal vs. noise” trendfollowing strategies use something similar to the sustained lead. Some strategies involve setting statistical “tripwires” above and below the current average price of the market. The sensitivity of the tripwires is a function of the volatility of the market: if prices are moving wildly on a daily basis, the trading algorithms will set entry tripwires that are some distance from current prices in order to stay out of the way of the white noise. If prices are barely moving at all, the tripwires can be set fairly close because a relatively small divergence from current prices could have significant meaning.
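A minimal sketch of such a volatility-scaled tripwire, with an arbitrary lookback and band width rather than anyone's actual production parameters:

```python
import statistics

def tripwire_signal(prices, lookback=20, k=2.0):
    """Return +1 (long breakout), -1 (short breakout), or 0 (stand aside).

    The entry bands widen with recent volatility, so a noisy market must
    move further before a breakout is treated as signal rather than noise.
    """
    if len(prices) <= lookback:
        return 0
    window = prices[-lookback - 1:-1]        # trailing window, excluding the latest tick
    mean = statistics.fmean(window)
    vol = statistics.pstdev(window)
    upper, lower = mean + k * vol, mean - k * vol
    if prices[-1] > upper:
        return 1                             # upside tripwire hit
    if prices[-1] < lower:
        return -1                            # downside tripwire hit
    return 0                                 # still inside the noise band

# Quiet drift followed by a sharp upside move trips the wire
series = [100 + 0.05 * i for i in range(30)] + [108.0]
print(tripwire_signal(series))               # -> 1
```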

Sustained lead has similarities to ambush trading and both tend to make their major profits at the same time, but there are important differences as well. I note that an estimated 95% of high-level skeet shooters use the sustained lead approach, and our firm employs it as our primary trading strategy because we feel it has some important advantages over the other systematic kurtosis-exploitation methods (but they all can work very well and there is an inescapable subjective element here, since the trading strategy must be appropriate for the psychological attributes---the emotional command system settings described by our old friend Jaak Panksepp---of the trader).

-Swing-through techniques have the shooter starting with his muzzle behind the target, and then accelerating to overtake the target as the trigger is depressed. As in the sustained lead, the technique attempts to mitigate the tendency of most shooters to fire too late.

The trading equivalent of the swing-through is the use of simple momentum models that buy and sell when prices exceed or fall below a computed moving average. Unlike the sustained lead, no filter is used to try to differentiate between signal and noise in market price action. These systems worked very well with commodity and financial futures in the 1970s, and a simple, robust commercial trading system called "Aberration" that uses this type of algorithm was consistently ranked as one of the top trading systems available based on real-time "paper-trading" results. Swing-through systems, which are sometimes also called "reversal" systems, may return to wild profitability, but many of the original proprietary trading teams that used them have since moved towards sustained lead models as markets have become choppier.
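A stripped-down sketch of such a reversal system (the moving-average length and price series are purely illustrative): it is always in the market, flipping from long to short whenever price crosses the average, with no attempt to filter noise.

```python
def reversal_positions(prices, ma_len=10):
    """Always-in-the-market momentum: long above the moving average, short below.

    No signal/noise filter -- every crossing flips the position, which is
    exactly what makes the system vulnerable to whipsaws in choppy markets.
    """
    positions = []
    for i in range(len(prices)):
        if i + 1 < ma_len:
            positions.append(0)            # not enough history yet
            continue
        ma = sum(prices[i + 1 - ma_len:i + 1]) / ma_len
        positions.append(1 if prices[i] > ma else -1)
    return positions

prices = [100, 101, 102, 103, 104, 105, 104, 103, 102, 101,
          100, 99, 98, 99, 100, 101, 102, 103, 104, 105]
print(reversal_positions(prices))
```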

I believe that most F3EA missions are going to be conceptually similar to the swing-through approach to momentum trading, and will share some of the same strengths and weaknesses. The major strength of the swing-through is that it is always in the market looking for trends to exploit; the major weakness is that it can be "tricked" or faked into following false leads because it has no filter for distinguishing between noise and signal.

In an operational context, this may mean that so much intel is gathered from grassroots efforts---local missions to exploit prisoners and conduct SSE---that analytical capacities become overtaxed. Over time, the whole decision system could become inundated with stimulus, slowing it down, bottlenecking the origination of high-quality missions, and costing the F3EA organization the agility that is its primary weapon against insurgents.

A possible solution would be to gradually raise the in-house analytical capabilities of the field teams, giving them greater discretion and precision when it comes to intelligence collection on the target.

Thursday, August 12, 2010

Global Counterinsurgency, Complex Systems, Phoenix Program, F3EA


Counterinsurgency: M-Form Corporations vs. Networked Systems

Think-tankers and other policy analysts are in general agreement that the “Global War on Terror” is better conceived as a long-term, international counterinsurgency campaign (punctuated by occasional direct action counterterrorist operations) than as a single, large-scale counterterrorist operation. Because counterinsurgency (COIN) involves a multidisciplinary mix of “soft” (development economics) and “hard” (kinetic) capabilities, COIN theorists and practitioners look to many different sources of expertise in order to find new tools.

An impediment to effective counterinsurgency work is the adoption of the wrong model for the enemy’s organizational architecture, since the counterinsurgency team’s understanding of the organizational chart will probably drive mission planning and target sets. Perhaps the most dramatic question in this regard is whether or not an organization like Al-Qaeda conforms to a fixed, tiered architecture, with an executive suite that functions as the brain and clearly defined hierarchies and communications channels through which the executives can issue orders to subordinate units. This type of organization is exemplified by the multidivisional or "M-form" corporation model that was pioneered by DuPont and is now commonly used as a template in business schools.

Psychological studies of why the M-form is so popular among politicos and high-ranking military officials suggest that the M-form reinforces a public official’s self-importance and provides a satisfying validation of the assumption that senior management teams play a critical role in organizational performance. Those who have long pursued leadership ambitions themselves may already lean towards the belief that leaderless, non-hierarchical structures are not particularly formidable, so they have a natural bias against decentralized systems and towards hierarchies. Anxious to avoid cognitive dissonance and to perpetuate notions of the importance of grand, integrated strategic campaigning as the best policy response to a global insurgency, senior politico-military leaders may tend to believe that behind any effective enemy organization *must* be a small enclave of high-level, Dr. Evil-type planners. Good operatic melodrama frequently makes use of the device of the grand duel: by creating a satisfying protagonist-antagonist dynamic, the elevation of the enemy’s leadership apparatus serves to also give the senior politico-military official a reflected, heroic vision of himself.


(archetypal M-form organizational chart)

Modeling Al-Qaeda using the M-form was popular a few years ago, and several books were written that spoke of a unified command structure---a “Jihad, Inc.” or “AQ, Inc.”---in which Osama bin Laden served as a kind of malevolent CEO. More recently, however, this description has been replaced by one in which AQ is viewed as a network, an essentially leaderless organizational form that surfs the edge of chaos.

The network model of AQ depicts disparate individuals and small groups that have become radicalized and who may act from common grievances. In contrast to the corporate model, the network approach sees no central planner, no one in charge. Rather than being a CEO-led not-for-profit corporation that produces terrorist acts, Al-Qaeda is modeled as a kind of violent psychotherapy and social-networking movement, with "members" who share similar grievances, real or imagined, and find a transcendent shared purpose in performing acts that are usually otherwise incoherent. Viewed through the systems prism, Al-Qaeda is a meme, a leaderless mental virus that can hijack the operating systems of those who are united psychologically by common grievances.

For example, under the network model we can readily imagine that a graduate student who spent most of his life in, say, Hoboken could take it upon himself to set off a nail bomb in Times Square as an act of revenge against Americans for US military operations in Afghanistan. He could commit a terrorist act despite having no real connection to those who were killed, no leader who ordered him to execute the bombing, and no scheme for how his actions could benefit the purported AQ policy target of a sharia-led caliphate spanning the Middle East and parts of Europe.

Networks are simultaneously both simple and incredibly complicated. Their special properties are best described by the multidisciplinary field of Complexity science, or more specifically by the study of Complex Adaptive Systems (CAS). As we have discussed here before, CAS includes some very interesting and important research into how free markets find clearing prices and allocate scarce resources much more efficiently than command economies are able to.



(social network analysis of Al-Qaeda---open-source, unclassified)


A key to understanding networked systems is the realization that they can appear to be operating from a blueprint or design, when in fact the apparent sense of a larger, coherent purpose is a mirage. As individual elements of the system interact with one another, the result can be “emergent” behavior that creates the illusion of a greater, top-down design.

Supermissions, Manhunts, and Hot Pursuit

From the perspective of the operator tasked with capturing or killing enemy personnel, the network model brings with it a host of headaches. If the enemy fits the M-form paradigm, the art of war becomes a fairly straightforward one of maneuver: exhausting attrition-style warfare is avoided and efforts are focused on destroying the leadership, on decapitating the snake. Once the executive suite has been neutralized, the assumption is that the enemy organization will begin to implode due to the “strategic vacuum” at the top. Demoralized, disillusioned worker bees may continue to fight on in isolated pockets, but the time of major, coordinated enemy operations will be over.

Thus, a key difference between a network/systems model of enemy organization and a more traditional TO&E (Table of Organization & Equipment)-based approach is the way that target packets are constructed. As stated, the traditional model looks at enemy forces by fitting them into a coherent, tiered architecture, with orders generated from some kind of headquarters brain trust and then executed by maneuver elements. This is a set-piece strategic paradigm, like chess or football, and the goal is to achieve checkmate or sack the quarterback or kill the Hitler figure.

The most efficient application of force against a finite threat of the Al-Qaeda, Inc. type would probably be a supermission, a decisive raid that used "surgical" assets, preferably launched from an offshore location, to assassinate key enemy leaders. Talented shooters from the special operations community, as well as other instruments that promised the necessary levels of precision and agility (UAVs, etc.), could be tasked with these missions, which may go by a number of sterile, almost benign names ("target interdictions"). The forces involved would hopefully be given the opportunity to plan, rehearse, and equip at secure facilities in the United States. On a parallel track, very high-level intelligence-collection assets would be deployed to locate the enemy high-value targets (HVTs).

When a window of opportunity presented itself, the designated shooters would be immediately deployed to take the target(s) out, one way or another, and then they would return to a secure facility for debriefing and so on. Any prisoners taken would be forwarded to interrogation specialists and processed according to whatever current policy was in place for dealing with such individuals.

The network model requires a politically riskier and messier approach, because it depends on the generation of momentum across numerous smaller, less decisive missions, and on the ability to swiftly capitalize on lucky breakthroughs. Rather than attempting to target high-value leadership personalities, the network approach assumes that the enemy organization is functioning without macro-level command and control. Disparate cells of enemy combatants may share the same grievances and tactics, but they will be connected by only the most tenuous of linkages. The network is set up to have redundancies in place, so the neutralization of any one cell will not compromise other elements.

When attacking a network, local information is prized. The enemy is confronted using grass-roots campaigns, with the understanding among friendly forces that occasionally a prize---a highly networked individual whose capture leads to information about other cells---will turn up. When this occurs, forces must be able to act on this prize immediately.

Where the M-form model consists of finding senior leadership personnel and taking them out, the network model of counterinsurgency warfare is consumed with the idea of jumping from cell to cell when information is uncovered. To do this, the network approach must mimic the decentralized structure of the enemy’s organization, and therein lies one of the major problems that the U.S. national command authority must confront.

Consider the aftermath of two successful raids, one following the M-form assumption and one following a network-centric campaign. The forces that conducted the supermission would be expected to return to their training facility, and would turn over captured enemy personnel, if any, and intelligence materials to a centralized processing mechanism. Details of the raid and the information that was uncovered would be carefully considered and discussed by senior political and military leaders. If another target became available as a result of this process, the raiding team might be tasked with a new operation, and a new mission planning/rehearsal cycle could be initiated.

The network approach operates from the assumption that any intelligence collected by the first raid must be acted on immediately (delays would allow the Complex Adaptive System to effectively quarantine the captured or neutralized cell). Rather than bringing his team home, the team leader would make the decision to move to the next target without hesitation, pausing only to send a quick, fragmented report up the chain of command. In pursuing elements of a global counterinsurgency, the team might not confine itself to any one theater of operations. Enormous authority would be vested in a small group of relatively junior operators who had to be trusted to make these kinds of decisions under field conditions.

Tipping Points, VIP Bad Guys, and "Persons of National Interest"

In his popular book The Tipping Point, journalist and author Malcolm Gladwell describes some of the features of a human social system, including the outsized role played by a small number of highly connected individuals. The need for a vocabulary that distinguishes between the senior-leadership VIP targets that are the preferred menu item for supermission planners and the motley collection of networked individuals who can expose insurgency connections led three U.S. military officers---Majors Steve Marks, Tom Meer, and Matt Nilson---to formally make this distinction in a Naval Postgraduate School thesis on the subject of manhunting.

(interested readers may want to download the pdf from http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA435585 , and perhaps also to download the Joint Special Operations University monograph "Manhunting" by George Crawford from http://jsoupublic.socom.mil/publications/jsou/JSOU09-7crawfordManhunting_final.pdf )

Marks, Meer, and Nilson describe network-targeting policy with this statement:

The asymmetrical threats that challenge U.S. national policies are not large standing armies, but rather individuals who seek to usurp and coerce U.S. national interests. The nature of today’s threats call for the U.S. military to change from finding, fixing, and destroying the enemy’s forces to identifying, locating and capturing rogue individuals in order to destroy networks. To counter such threats, the USG will have to quickly and efficiently identify and find these targets globally.

Unfortunately, no military doctrine, framework or process currently exists for finding and apprehending these Persons of National Interest (PONIs). Since military planners and intelligence analysts are neither educated nor trained in the methods or procedures necessary to find and capture PONIs, this thesis will propose a methodology to do so.

Going further to describe the problem and prescribing a solution (a national manhunting capability), the authors state that:

The clandestine and decentralized nature of terrorist cellular networks has made it difficult for military units and intelligence agencies to identify and locate known terrorists... Identifying the fugitive’s clandestine network of support may be very difficult because relationships that develop within “small world” networks are not usually transparent to outside observers…Not only are the tasks associated with apprehending fugitives different, but the decision making process to capture fugitives may also be distinct from traditional military operations.


Historical Precedents: Phoenix Program and UNODIR SEALs

While the Son Tay Raid may present the quintessential example of the supermission concept (and its vulnerability to the rapid obsolescence of intelligence) during the Vietnam War, versions of the decentralized, network-targeting approach to counterinsurgency were also prevalent in Vietnam. Network-centric warfare was particularly nested within Civil Operations and Revolutionary Development Support (CORDS), a counterinsurgency program that used development economics initiatives in rural areas of South Vietnam to attempt to win over local communities and erode the popular base of insurgent movements.

The fangs and claws of CORDS were provided by Intelligence Collection and Exploitation (ICEX), later renamed the Phoenix Program. Phoenix was a CIA-designed effort that employed various operational assets, including indigenous forces and Navy SEAL and Army Special Forces personnel assigned as advisers, to neutralize insurgent targets and act rapidly on intelligence (i.e., to attack the enemy network). There is evidence (much of it from former NVA officials speaking after the war) that CORDS and Phoenix were the most effective programs that the U.S. ran in the Southeast Asian conflict, and Phoenix has recently been reassessed by the Insurgency Group of the think tank RAND Corporation.

Part of the success of these programs was the license that the operational elements were given to engage in aggressive trend-following: if a mission happened to lead to actionable intelligence (which is always time-sensitive in nature), a follow-on mission was immediately launched to maintain the initiative, keep the element of surprise, and exploit the information (Johnson-Freese uses the term "information arbitrage" to describe this process of chasing down targets based on a truncated, decentralized information-decision-action cycle).

This practice may also have been one of the major contributors to the success of Naval Special Warfare units in Vietnam. Just after BUD/S, the three other U.S. officers in my class and I attended a very interesting mission planning course at the Naval Special Warfare Center. The course was classroom-based and presented an opportunity to study a formal planning process that had been developed by SOCOM, and to gain insights into practical problems through briefings given by veterans of various combat operations.

One of the most fascinating aspects of the course, to me, was a briefing given by a retired mustang SEAL officer named Phillip "Moki" Martin. Martin, who had seen seven deployments to Vietnam between 1964 and 1974, was confined to a wheelchair and had to be wheeled into the classroom by his friend and assistant (ironically, the terrible injury was not combat-related: Moki Martin had been involved in the early growth of triathlon events in the San Diego area and became a quadriplegic after getting into some kind of insanely violent head-on collision with a car while he was bicycling down the Silver Strand in Coronado).


(Phillip "Moki" Martin)

As was typical for me during that time, my ability to concentrate on work was often compromised by my dedication to the more hedonistic aspects of the San Diego lifestyle (i.e., lack of sleep plus a continuous/residual blood-alcohol content level that was relatively high), but a few salient points from Martin's brief still stand out.


(swashbuckling and autonomous, Vietnam-era SEALs often had their own intelligence networks and aggressive, self-directed operational tempos. Short, sharp nature of hunting missions and typical "L ambush" mission profile led to tremendous offensive capabilities being cultivated relative to small unit size: an 8-man squad might patrol the riverine environment of the Rung Sat Special Zone---the infamous "Forest of Assassins"---with five or six belt-fed weapons. A common squad load-out might feature two M60s and three or four Stoner 63 light machine guns, as well as the more-anticipated shotgun(s), assault carbines, and grenade launchers)

Martin credited much of Naval Special Warfare’s success in Vietnam to four major precepts: 1) the largely autonomous, self-contained combination of a SEAL platoon with its own dedicated transportation assets (normally Sea Wolf UH-1 helicopters and riverine craft, both assigned with the SEAL component as a single package and available to the platoon commander on a near-continuous basis, and supplemented with fixed-wing support from long-loiter "Black Pony" OV-10A Broncos); 2) the heavy use of local intelligence networks to inform mission concepts (rather than intelligence provided from third-parties); 3) a bias towards prisoner-snatch missions and straightforward ambushes where possible; 4) an aggressive “UNODIR” policy.

Items one through three are self-explanatory---operators would uncover leads from their own intel sources and could mount missions using assets that were right there. UNODIR is a bit more exotic, and I think it demonstrates how counternetwork operations empower the men in the field at a cost (perhaps uncomfortable for many) to centralized authority. UNODIR is an acronym for “Unless Otherwise Directed”, and defines a situation in which a field commander tasks his own unit with missions and then sends updates to higher headquarters stating that, “UNless Otherwise DIRected”, the unit will be conducting the mission in question according to some date-time schedule.

As one can imagine, UNODIR authority results in a tremendous decrease in the lag time between the uncovering of intelligence about a target and the launching of a mission to exploit this intelligence. In situations of fleeting windows of opportunity and elusive targets, UNODIR authority could be paired with direct access to unfiltered intelligence and organic transportation and fire support assets to create the fastest possible cycle time between information, tasking, planning, and execution of missions.

Next: Phoenix Reborn---the F3EA Model

Tuesday, August 10, 2010

Taleb on Black Swan Fitness



http://www.fooledbyrandomness.com/whyIwalk.pdf

Interesting piece by Nassim Taleb, taken from an updated edition of his modern-classic Fooled by Randomness. Taleb's friendship and general professional agreement with Chaos economist Art DeVany on the importance of using fat-tailed distributions for economic and financial modeling are well-known; more recently, Taleb has been following DeVany's evolutionary fitness approach and incorporating the concept of "convex profit functions" (systems that make money during extreme events) into his strength & conditioning program.

The use of controlled, intermittent fasting is a very good principle, in my opinion, and I think the chaotic dynamics identified in the quantitative predator/prey model that Taleb describes have a number of interesting applications (particularly to warfare). I personally don't fully agree with some other aspects of the described routine. For example, I think that long-slow distance running (i.e., heart rate kept below approx. 150 bpm) more closely simulates a prehistoric persistence hunting environment than does the walking that Taleb favors, although of course both can be punctuated by random bursts of short sprints (Taleb, amusingly, says that he uses his desire to chase down and maul former Clinton Administration Treasury secretary Bob Rubin as a motivational tool during his sprinting episodes).

To each his own... DeVany and Taleb seem to have a general dislike for distance running, and they do not feel that it is meaningful to make a distinction between, say, an elite marathoner who maintains a heart rate of 90% of max for two hours (extremely strenuous physiologically), and someone who does more gentle, long-slow distance work to build a tremendous aerobic base. I think that I would prefer it if DeVany and the other paleo-fitness subject matter experts gave rough heart rate ceilings for extended efforts, and left it up to the individual practitioner to decide what kind of pace could be maintained while staying under that target heart rate. Giving specific, objective, and measurable guidelines would also provide a feedback mechanism by which the paleo-athlete could determine his or her progress in terms of fitness for long-range cross-country movement.

(The arguments for the evolutionary pedigree of endurance running/persistence hunting in mankind's deep ancestral times are not without controversy. The basic concept is that we have physiological adaptations which allow us to maintain a pace that exploits a "sweet spot" in the movement qualities and patterns of our quadruped prey targets. Given a flat, hot, open environment, we can drive prey to exhaustion by sustaining a pace fast enough to force them into a sprint/rest cycle which will become increasingly untenable for the prey animals. For those not familiar with the argument, here is a link to one of many interesting papers: http://www.canibaisereis.com/download/liebenberg-persistence-hunting-2006.pdf )

Saturday, July 17, 2010

Fiscal Crisis Ahead?

The Congressional Budget Office is saying that the U.S. debt/GDP ratio will probably exceed 100% by 2012.

The Good News: two of the most popular doomsday scenarios, hyperinflation via the printing press and a national bankruptcy via outright default, are very unlikely.

The Bad News: the austerity measures that will be required to balance the budget, let alone to run fiscal surpluses if we intend to pay the debt down, will be very, very painful unless we enter a prolonged period of remarkably high economic growth. Under some austerity scenarios, many Americans will actually wish that the printing press and national bankruptcy were valid options.

The Worse News: the present value of the government's unfunded entitlement liabilities (Social Security, Medicare, Medicaid) is estimated to be north of $50 trillion. To fund these liabilities without distorting its balance sheet, the government would need to have $50-$70 trillion, today, sitting in a national pension fund getting market returns.

Interest Rates are Key

Here is perhaps the closest thing to a WMD that exists in macroeconomics:

s = ((r - y) / (1 + y)) * d

where:

s = the primary surplus. This is the fiscal surplus the government must run in order to keep debt/GDP constant

r = the interest rate on the debt

y = the growth rate of GDP

d = the debt/GDP ratio

There are several interesting things that happen when a government has to satisfy this term, but here is one rule of thumb that may be important in coming years: when the debt/GDP ratio exceeds 100%, the interest rate paid on the debt cannot exceed the rate of GDP growth or the country will fall deeper and deeper into debt, even if it runs balanced budgets. The primary surplus must be achieved to offset the debt spiral. If the government cannot or will not do this, the debt dynamics will become unstable.

A very nasty problem arises when deficits, debt/GDP, and interest rates all begin to move together. This is essentially what recently happened to Greece: Greece was running a high debt/GDP (about 115%), was in a period of poor economic growth, and then proceeded to report a larger-than-anticipated deficit. Greek borrowing costs on 2-year bonds had been less than 2% in late 2009, and shot up to almost 20% within six months.

On the other hand, Japan has been able to operate with a debt/GDP ratio of almost 200% (second only to Zimbabwe's, IIRC) without being attacked by bond vigilantes. There are several reasons why Japan has been able to do this, but certainly one has been the deflationary recession that the country has been mired in for two decades. If the Japanese economy were ever to recover and interest rates were to rise, the government would have very serious fiscal problems.

The speed and violence with which the bond market can punish irresponsible governments is what makes high debt/GDP ratios perilous. Once a debt-laden country is confronted with a spike in borrowing costs that takes the interest rate substantially higher than the rate of growth, it must somehow absorb the fiscal shock. The severity of the shock is based on the size of the differential between its current deficit spending situation and the primary surplus that it must now run to keep debt/GDP constant, plus anything that is needed to credibly pay down the debt, and the urgency with which these demands must be met.

For instance, let's use an extreme example and say that a country had a debt/GDP ratio of 150% and suffered an interest rate shock that raised its cost of borrowing to 7%. It has a GDP growth rate of 3%, and was running an 8% structural deficit when it received the shock.

The primary surplus necessary to stabilize debt/GDP is now almost 6% of GDP. In order to achieve this, the government must also cut the structural deficit completely. That's another 8% of GDP. Taking the two together, the government must swing its fiscal balance by roughly 14% of GDP (from an 8% deficit to a 6% surplus). If the market demands that the government also lower its debt/GDP ratio to, say, 60% within 10 years, then there is even more pain coming because another large surplus will be necessary to pay the debt down.
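For readers who want to check the arithmetic, here is a minimal Python sketch of the stabilization rule (the function name and layout are mine, not standard notation):

# A minimal sketch of the debt-dynamics rule above:
# s = ((r - y) / (1 + y)) * d is the primary surplus, as a share of GDP,
# needed to hold the debt/GDP ratio constant.

def required_primary_surplus(r, y, d):
    """Primary surplus/GDP needed to keep debt/GDP constant."""
    return (r - y) / (1 + y) * d

# The worked example from the text: d = 150%, r = 7%, y = 3%,
# starting from an 8% structural deficit.
s = required_primary_surplus(r=0.07, y=0.03, d=1.50)
print(f"Required primary surplus: {s:.1%}")          # ~5.8% of GDP
print(f"Total fiscal swing needed: {s + 0.08:.1%}")  # ~13.8% of GDP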

This is why it is extremely important for heavily indebted countries to have credible central banks. If it appears that the central bank does not have the will to do what it takes to fight inflation, the market can demand higher nominal interest rates on the debt. As debt/GDP ratios get higher, market participants can look at a country and see that the central bank will have a very hard time raising rates beyond the country's GDP growth because doing so will generate fiscal shocks. Thus, the central bank's inflation-fighting credentials become questionable---there seems to be a ceiling on how far the central bank can go if push comes to shove in the future.

The impact this will have on the targeted country will depend on many factors, including the maturity of the debt: the United States is quite vulnerable to an interest rate shock because we finance much of our borrowing by issuing short-term debt.

The short-term nature of the debt is also why the printing press scenario, wherein the Treasury sells debt indirectly to the Federal Reserve, which buys the debt using printed money, would not work well for the United States, at least if attempted on a large scale: lenders would be able to punish the practice by demanding high nominal interest rates to compensate for the high inflation created by the nuclear money printing. The U.S. economy would suffer from high inflation and even higher borrowing costs.

The best scenario for a government that wanted to try to escape its debts by debasing its own currency through the printing press would go something like this:

1) Borrow a huge amount of money upfront, enough to prevent you having to run deficits and go back to the bond markets for many years

2) Have your central bank keep rates very low when you do this, and hope that lenders are stupid enough to be anchored to these rates

3) Issue bonds that have the longest maturities possible---20 years would be ok, 30 years would be ideal. If possible, go a step beyond and see if the buyers will accept zero-coupon structures, wherein you don't even have to make regular interest payments and can say you will just pay off the entire thing in three decades

Outright default is not an option for the U.S. unless current laws are changed. Legally, our bondholders are at the top of the capital stack and the interest on the national debt will be paid before any other programs, including Social Security and national defense. The result of austerity will be a cutting of programs at the bottom of the stack first, then more difficult decisions as priorities are set.

Saturday, July 10, 2010

Bas

Mr. Sebastian Rutten: could he be the most entertaining human being on the planet?




One can only aspire to this level of multi-talented showmanship.

Friday, June 18, 2010

Cyborg

He looks really menacing, but he's a nice guy. Evangelista "Cyborg" Santos training at Chute Boxe. The video shows the high-octane MMA technical/conditioning circuit that is typical of the morning session (pro MMA team) at the Chute Boxe Academy.


Thursday, June 3, 2010

Value Investing, Levy Flights, and Tight-Prior Equilibrium


(Salvador Dali, The Persistence of Memory)

Today's post is the first of two that will focus on the strategy of "value investing", which typically entails a search for equities that have been mispriced by the market. The core claim of the value analyst is that he or she is in possession of superior valuation tools: the analyst can forecast future free cash flows for a given company with greater precision than the overall market can, and this creates an opportunity to "beat the market" by investing in target companies before the market realizes its mistake and corrects the mispricing.

As we will see down the road, there are strong, auto-reinforcing philosophical arguments underlying Keynesian economics, behavioral economics, and value-seeking investment strategies, and proponents of the three individual fields have tended to form alliances with one another. For brevity, I will simply term this alliance "the Trinity" here.

Today's post will comment on one of the theoretical foundations of value investing: the notion that market prices are not efficient (i.e., they fail to rationally account for existing information) and that, as a result, serial mispricings occur. Often the evidence that is presented to support this claim is the observation that market prices do not conform to a statistical straitjacket called the Gaussian random walk.

Behavioral Finance Takes on the TP Model and Market Efficiency

Let's begin with a theory of market behavior that is anathema to most members of the Trinity. To the economists of the Chicago school, free markets are "efficient"---all available information is embedded in prices. If one can equate efficiency with a sort of innocence, and equate inefficient pricing with a market that is guilty, the Chicago school advocates a particular default assumption that markets are efficient until we can establish otherwise---they are "innocent until proven guilty". This assumption has been specified as the "Tight Prior equilibrium" model: prior to any exogenous shocks, we should assume that the market clearing price was efficient and that equilibrium was achieved between supply and demand.

If supply and/or demand forces are being artificially constrained or subsidized by government interventions, then we of course have no guarantee of efficiency, since true costs and values are not being embedded in the prices that clear the market. The TP model assumes a free market pricing mechanism is being allowed to work; it is a normative model rather than a strictly descriptive one.

The Gaussian Battlefield

If one can imagine the existence of a Cold War strategic paradigm within modern economics, with Keynesians occupying the space formerly reserved for Communists and the Monetarists occupying the Western role, a logical extension would be the existence of skirmishes or proxy fights in which the major forces would clash over seemingly small, perhaps technical details. These details are actually very significant battlegrounds because of a version of the domino-effect theory that the West used to explain the risks of Communist expansion---for a dynastic antagonist, losing on one of these remote fronts can lead to a cascade of failures and, ultimately, to losing the whole war.

One such intellectual battlefield front specifically concerns the TP model. As noted previously, TP describes a default assumption about markets---markets are innocent (read that as "Pareto optimal") until proven guilty. The Pareto optimality provision means that there is no "free lunch" embedded in market prices---if prices are manipulated by an external force, someone will be made worse off in order to make someone else better off. The optimal distribution of resources in the economy will take place if markets are just left alone and allowed to freely find their own clearing price levels.

So, to give just one example, a government intervention policy that raises the costs of imported sugar in order to protect domestic sugar farmers does not really create jobs on a net basis---some jobs may be protected, but only at the cost of other jobs in the economy which can never exist because consumers and producers are paying higher prices for sugar than they otherwise would. Because these latter jobs may never get the chance to be "born", a particularly foolish or deceptive politician may claim that his protectionist policy has had a net positive effect on the unemployment rate.

TP is the sibling of another famous Chicago school intellectual tour de force, the Efficient Markets Hypothesis (EMH). Suffice to say that assuming prior equilibrium basically means that we should start any real-time analysis of a market with the anticipation that all currently available information has already been priced into its prices---any gross "irrationalities" have been removed by arbitrage traders who move in quickly to profit from such distortions (Chicago school economists frequently refer to the "no-arbitrage" condition---any free money will be quickly seized by traders).

Efficient=All (Publicly) Available Information Priced In

The equilibrium model allows for prices to move when new information becomes available and disturbs the market's current clearing price (what constitutes "new information" is quite slippery, since two people having access to the same report can leave with completely different interpretations). This has several important investment and policy implications: for instance, what some would term to be long-term, clearly irrational asset bubbles and crashes are not actually irrational. This interpretation would suggest that extreme values can occur for brief periods before arbitrageurs move in and correct the mispricings, but the distortions cannot persist unless aided by government policies that create moral hazard.

For instance, the dot.com bubble of the late 1990s and 2000 could be partially explained by the government-organized bailout of the giant hedge fund Long-Term Capital Management (LTCM) and Greenspan's surprise rate-cut in the bailout's immediate aftermath, which may have added fuel to the exuberant environment by creating the sense that the Federal Reserve would not allow a recession to take place. The Dutch Tulip Bubble---normally considered one of the top exemplars of financial folly---could be partially explained by the fact that the Dutch government had decided not to enforce futures contracts and mark-to-market margin requirements, causing speculative tulip-futures buyers to feel that they would be able to escape from obligations if the market was to collapse (and they were ultimately proven correct in this assumption).

If the market processes known information efficiently, the ability of an individual analyst working in a fair environment (i.e., without access to insider information) to spot great bargain stocks for buying or greatly overpriced stocks for short-selling is highly suspect; we should assume that bargain stocks are bargains for a good reason (risk), and that expensive stocks are expensive for a reason (strong growth prospects, niche dominance, etc.). Famous value investors like Warren Buffett have long felt insulted by the Efficient Markets Hypothesis, as it tends to explain their successes as being essentially the result of luck. In fact, it has been alleged that Buffett has been unwilling to give large donations to his alma mater, the Columbia Business School, in large part because CBS finance professors, like most financial economists, are disciples of the EMH.

The question of whether or not free markets are efficient is one of the most important in all of social science, with ramifications that can directly affect the livelihoods of billions of human beings. This question in part hinges on a particular application of statistical inference to market prices.

Efficient=Random: Enter the Gaussian Random Walk

In the natural sciences, a hypothesis must offer testable and observable predictions. A hypothesis which makes many successful predictions and which has coherent, tightly packed explanatory features may eventually be elevated to the status of a "theory." Given the Chicago school's general desire to grant the field of economics a mathematical rigor and elegance that was analogous to that achieved by physics, many have felt that the Efficient Markets Hypothesis and the TP Model should make testable predictions about market price behavior: unless we can test the claims of these arguments empirically, the EMH and TP should be considered merely ideological positions. To properly specify the EMH, the testable statement was made that prices in an optimizing equilibrium regime should be observed to follow a regular random walk.

This can seem counter-intuitive at first---we are normally conditioned to equate randomness with chaos or disorder; the idea that randomness is a product of efficiency may seem alien at first encounter. The idea here is merely that efficient markets must have random price behavior because non-random behavior would be predictable, which would mean that participants would take advantage of the easy profits, and by doing so they would quickly push prices back to randomness again.

To illustrate this, let's pretend that the S&P 500 always rallied in the morning and then fell in the afternoon. Traders would observe this and would buy in the morning, which in turn would push the prices up and prevent the rally from happening. If they later sold in the afternoon, they would be doing so without capital gains from the morning rally and they would lose money. Soon you would have people buying low in the afternoon, after the selling took place, and then selling the next morning in the midst of the buying frenzy. Of course, everyone would try this, too. After a very short period of time, this "inefficiency" or pocket of predictability in the market would disappear because of the profit-seeking behavior of individual traders and firms.

Inside the Random Walk

This now gets very contentious, but at least a traditional, statistically-specified interpretation of the term "regular random walk" would indicate that price behavior will more or less comply with the normal distribution and a diffusion process called "Gaussian Brownian motion" (this is the "standard deviation (sigma) * t^.5" rule that was described in an earlier post on chaos, earthquakes, and market prices).
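To make the t^.5 rule concrete, here is a toy simulation (my own illustration, not taken from that earlier post) showing that the dispersion of a Gaussian random walk grows with the square root of time:

# The sigma * t^0.5 scaling rule for a Gaussian random walk.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.01                     # daily volatility
n_paths, horizon = 20_000, 256   # simulated paths, trading days

# Cumulative sums of i.i.d. Gaussian daily returns
paths = rng.normal(0.0, sigma, size=(n_paths, horizon)).cumsum(axis=1)

for t in (16, 64, 256):
    observed = paths[:, t - 1].std()
    print(f"t={t:>3}: observed spread {observed:.4f} vs sigma*sqrt(t) {sigma * np.sqrt(t):.4f}")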

Using this form of random walk to model market price behavior has the advantage of allowing for the importation of a number of useful mathematical techniques from the natural sciences, and for some well-known tools of statistical inference to be used as well. In fact, the desire to be able to use the financial analysis and portfolio management techniques that emerge if prices follow these rules is so great that even those who will cite Taleb and Mandelbrot and admit that price changes are not Gaussian will often schizophrenically want to retain the Chicago-developed Gaussian random walk armamentarium. The approach is widely taught in MBA and financial analyst training programs, and the toolkit includes the Black-Scholes-Merton model for option-pricing, the Sharpe Ratio, mean-variance portfolio optimization, and many more. What you frequently find is that a few hundred pages of the popular textbooks are dedicated to tools and concepts that require a Gaussian world, and then a concluding chapter cautions the student that there is strong evidence that markets are not Gaussian animals. The student is left to deal with the dilemma that results from this on his or her own, and the author imitates Pontius Pilate and washes his hands of the whole thing.

Conflict with Observed Market Price Behavior


(fat-tailed scorpions like Androctonus are among the deadliest in the world)

The EMH and TP are difficult to attack directly because they are largely conceptual, so the typical entry point is the regular random walk and the t^.5 rule. As mentioned in previous posts, market price changes are not normally---or lognormally---distributed. Prices are far more "leptokurtic" ("fat-tailed") than they *should* be---a far greater frequency of large price moves is observed than would be even remotely conceivable if markets followed the t^.5 kind of random walk. Particularly after 2008, the question is not whether the strict random walk is actually obeyed in real-life; the real question is whether or not prices must conform to the random walk for market efficiency to be held as true. An even better question, to my mind, is whether markets can be efficient if governments continue to intervene in them in ways that create perverse incentives.
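To see what leptokurtosis looks like in practice, here is a toy comparison (simulated data, entirely my own construction) that counts four-sigma days under a Gaussian model versus a fat-tailed alternative:

# Counting extreme days: Gaussian theory vs a fat-tailed (Student-t) process.
import math
import numpy as np

rng = np.random.default_rng(1)
n = 250 * 40                      # roughly 40 years of daily returns

gaussian = rng.normal(size=n)
fat_tailed = rng.standard_t(df=3, size=n)
fat_tailed /= fat_tailed.std()    # rescale to unit variance for a fair comparison

threshold = 4.0
# Probability of a |move| > 4 sigma under the normal distribution
expected = math.erfc(threshold / math.sqrt(2)) * n
print(f"Gaussian theory predicts ~{expected:.1f} four-sigma days in {n:,} days")
print(f"Gaussian sample:   {np.sum(np.abs(gaussian) > threshold)}")
print(f"Fat-tailed sample: {np.sum(np.abs(fat_tailed) > threshold)}")

The fat-tailed series typically produces such days dozens of times more often than the Gaussian model says it should.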

To date, there is no great consensus within the economics profession on this point. Behavioral economists have generally tried to fill the vacuum by using psychological biases in individual humans---mindless, lemming-like herding behavior---to explain these price divergences. Some see a multi-sigma one-day price drop as evidence that markets are flawed; others see it as evidence that markets work very well, and can violently adjust to new information---particularly after prices have been distorted by government policies for an extended period. There is also evidence that procyclical booms and busts are the result of homogenization within the investment products industry: the wide-scale availability of index-investing products, for example, may have increased the potential for tipping points and positive feedback loops to form that can contribute to large divergent moves in the market (more on this later).

Equity Risk Premium and Market Survivability

An obvious question that emerges from the discussion of random walks is why an investor would feel enticed to participate in the stock market if there is a 50/50 chance of an up or down move at any given point. The answer is that there is a long-term upward drift to equity market prices, and the effect of the drift is generally removed by economists before the price changes are analyzed. The random walk is meant to describe the behavior of prices net of the intrinsic return that the market must have to take into account the risk of the investment.

In one sense, then, the S&P 500 is not "random" at all: it is a Darwinian device, designed to attract and distribute capital to successful companies. Because the index is weighted by market capitalization, unsuccessful companies will have less and less of an impact on it over time, in some cases dropping out of it completely, while new companies that grow rapidly will be included.

The stock market can be thought of as an organic entity that wants to live and grow, just as you and I do. In order to live, it must pay participants---this payment is known as the "equity risk premium" and represents the competitive return that an investor must see in order to take on the risk of being the last guy in the capital stack to get paid if things go wrong. The return of the stock market obviously varies depending on the start and stop dates used as the window for calculation, but over the long term it has stabilized at approximately 7%. A concept called duration, which we will get into a bit more next time, can be useful for determining how long a simple buy-and-hold investor should anticipate having to maintain an equities index position in order to attain this fundamental return of about 7%.

Getting People to Play: Markets, Bookies, Lines, and Sports Betting

An equity market that has been around for a long time has, by definition, been successful at paying people enough to invest in it. A market that cannot attract players will die (and, historically, many markets have died). To illustrate the way that market prices move up and down in order to try to generate interest, entice trading activity, and encourage investor participation, we can consider the role played by a bookie in a sports betting operation.

The backbone of the sports betting industry and the standard NFL line bet is the "11-10 Pick 'Em". You may see that a betting line is listed as "Dolphins -3 Bucs". The "11-10" descriptor indicates that you must put down $11 in order to win $10 (if you won, the bookie would have to pay you $21---your original stake of $11 plus the $10 in winnings); the "-3" means that the Dolphins must beat the Bucs by at least 3 points in order for the win to pay off.

If you wanted to pocket $100 on this game if you won, you would first need to take $100 (your payout goal) and divide by the $21 payout of a single successful bet; the answer is 4.762. So your necessary exposure to have a chance at winning $100 would be to take $11, or the amount you are required to put with the bookie in order to play 11-10 Pick 'Em, and then multiply by 4.762 (your answer being $52.38). To be able to walk away from a winning game of 11-10 Pick 'Em with $100 in your pocket, you would need to put $52.38 with the bookie. If you lost the bet, you would lose $52.38.

Many people think that the betting line that the bookie offers is designed so that he can win based on a particular outcome. That's not actually how the line is set: in an efficient gambling operation, the line is set by means of a market price discovery mechanism that is designed to find the line that most closely represents a 50/50 chance of winning and losing.

In other words, a sports betting operation, like the stock market, will move prices up and down in order to reach the clearing level after which a random walk---50/50 odds---will be generated.

Why is this? Consider that you must be willing to put down $52.38 with the bookie in order to win $100 if the Dolphins beat the Bucs by at least 3 points. Now let's say that you want to simultaneously take the other side of the bet, so that you would win $100 if the Dolphins fail to beat the Bucs by 3 points; once again, this bet will cost you $52.38. In total, you must put down $104.76 in order to be guaranteed to win $100. You certainly won't do this, because it is clear to any rational individual player that the game would have a negative expectancy of $4.76 (this money represents the bookie's edge, which is the result of you having to risk $11 in order to try to win $10).

However, the bookie does not expect you to take both sides of the bet; the bookie merely wants to match you up with someone else who does not favor the Dolphins and who is willing to take the other side. Between the two of you, you will pay $104.76. One of you will win and the bookie will lose money on that person---the bookie will have to pay someone $100 after the game. But more than half of that money is actually coming from the $52.38 that the player put with him; the bookie is only losing $100 - $52.38 = $47.62 of his own take, and his own take is being financed by the losing player's complete write-off of $52.38.
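Here is the same arithmetic in a quick sketch (my own toy code, using the numbers from the example):

# The "11-10 Pick 'Em" arithmetic: a winning $11 stake returns $21 in total.

def stake_for_total_payout(target, risk=11.0, total_return=21.0):
    """Stake needed so that a winning bet returns `target` dollars in total."""
    return target * risk / total_return

side_a = stake_for_total_payout(100.0)  # back the Dolphins -3
side_b = stake_for_total_payout(100.0)  # back the Bucs +3
both = side_a + side_b

print(f"Stake per side: ${side_a:.2f}")        # $52.38
print(f"Both sides:     ${both:.2f}")          # $104.76 to collect $100 for certain
print(f"Bookie's edge:  ${both - 100.0:.2f}")  # ~$4.76, risk-free if the book is balanced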

The bookie doesn't care whether the Dolphins or the Bucs win because in this case he's making nearly 5% without risk. His fear is that he will not be able to match gambler against gambler and net out his risk exposure; if 50 people bet on the Dolphins and only 5 bet on the Bucs, the bookie could be annihilated if the Dolphins won (although he would make a killing if the Bucs did). Most bookies actually hate directional risk, so they make sure that the line---the betting odds---will move around to find the pool of greatest liquidity, or the point that will attract equal numbers on each side of the bet. If the Dolphins were more heavily favored by the majority of the sports gamblers, then the line might move to Dolphins -10 Bucs, or even higher, in order to entice people to take the side of the Bucs.

(by the way: online casinos will sometimes offer "21-20 Pick 'Em" odds, which are a bit better from the gambler's perspective. The way to make serious money in sports betting is to find two different bookies who are offering two different lines on the same game, and then to arbitrage them. This is called "middling the line").

In a free market, the "bookie" is basically economic growth under a capitalist system---when we are lured to play, we are providing risk capital to the Darwinian engine that drives growth. When a stock is currently priced at $100, it means that the last transaction that took place between buyer and seller took place at $100---$100 is what economists term the "price at the margin." The closing prices on a given day do not reflect the "average" of trades for that day; they simply reflect the final trades.

From that initial clearing price of $100, the stock's price may immediately go to $110, which would mean that the person who sold at $100 sold too low. Or perhaps the price will drift to $90, which will mean that the buyer at $100 bought too high. The point of market efficiency is that the marginal price reflects a bet condition in which a buyer and seller both felt that a price was fair enough for a voluntary transaction to take place. Barring some incredible information, the stock won't suddenly trade at $5 or $5,000 per share because no transactions could occur at such extreme levels.

When an extreme event occurs in the markets and prices move far more than they should in a Gaussian world, it is something like a sports betting line making a violent adjustment when it becomes public that a star quarterback is out because of an injury sustained in a closed practice. The U.S. equities markets are very sensitive to changes in the economic landscape because stock prices reflect anticipations of company-generated cash flows that extend literally decades into the future. Prices in the market will move to whatever clearing level is necessary to find the next marginal buyer and seller. If an analyst, behavioral economist, or pundit is bemoaning a given market's detachment from "fundamentals", he or she can always enter a trade at the margin.

Thus, in perhaps a counter-intuitive way, a healthy market is always trying to move the betting line---the price---so that a large number of eager buyers and sellers will participate and a random walk will ensue. When a major move occurs, it means that the betting line must shift in order to attract more players; where some consider this to be "market failure", I would submit that a more correct term might be "buyer failure" or "seller failure" depending on one's particular point of view (i.e., if you are a seller and upset because no one will buy your goods at the price you would like, then you would call this "buyer failure"). The market only fails if prices are not able to adjust to find a level at which a deal can get done. Whether the adjustments happen gradually or violently is not really the market's fault.

Those who have already placed their bets may not approve of a shift in the betting line or a sudden move in market prices against their positions, but they would be well-advised to remember that the market, like the sports betting bookie, is not really interested in who is "right" so much as it is interested in making deals. If we don't like a particular price change, we should ask why there was no liquidity at the price we did like, and why our fellow investors did not want to participate at that time and serve as the marginal price-settlers. The fact of the matter is that, perhaps for a variety of reasons, no one wanted to do a deal, and now the market price discovery system is trying to find a price at which buyers and sellers can voluntarily come together.

The search for equilibrium does not mean that equilibrium is permanently achieved---it is a dynamic process. Uncertainty is what gives markets life, since a price structure that could not lure people into voluntary play (because buying or selling at those prices was obviously a bad deal) would cause a market to eventually die.

The Mysterious Levy Flight

I will briefly note here that the Gaussian is not the only type of random walk, and this may turn out to be an extremely important point. Another type, the so-called "Levy flight" (to appease the pedantic, I will note that there should technically be an accent over the "e" in Levy), can account for the market's tendency to occasionally generate spectacular price excursions (and to be able to do this for almost no reason at all; big moves can happen out of nowhere, on quiet news days).

When you graph the results of a Levy flight process, you do not get the familiar bell curve of the Gaussian random walk; instead, you produce a power law distribution. As a regular reader of this blog will recall, power law distributions are common in nature and are distinguished by having very long tails---extreme moves, while increasingly uncommon, are far, far less rare than they are in Gaussian bell curves.


(Gaussian bell curve next to power law distribution: note how the left and right tails of the bell curve drop off to nearly zero very quickly, while the right tail of the power law goes on and on. The tails are where the extreme events take place)
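For a rough sense of the difference, here is a toy comparison (my own construction; a Student-t walk with 1.5 degrees of freedom stands in for a true Levy flight, since both generate power-law tails):

# Gaussian random walk vs a heavy-tailed, "Levy-like" walk.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

gaussian_steps = rng.normal(size=n)
heavy_steps = rng.standard_t(df=1.5, size=n)   # power-law tails, infinite variance

for name, steps in (("Gaussian", gaussian_steps), ("Levy-like", heavy_steps)):
    ratio = np.abs(steps).max() / np.median(np.abs(steps))
    print(f"{name:>9}: largest single step is {ratio:,.0f}x the typical step")

The Gaussian walk's largest step is a modest multiple of its typical step; the heavy-tailed walk routinely produces single steps that dwarf everything around them---the spectacular price excursions described above.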

The problem with the Levy flight approach is that using it means abandoning many of the most beloved quantitative techniques and entering a far more uncertain and imprecise world. If shocks are unpredictable (i.e., random) and can cause monstrous moves away from prior "equilibrium" conditions, some currently popular investment and trading strategies would be seen as very dangerous.

Does Levy Flight Price Behavior Mean That Markets are Not Efficient?

Now things get more complex and subtle, almost to the point of representing a philosophical point more than an economic one. What we find in markets is that there are periods in which prices do fall well within the bounds of a Gaussian random walk, but they are punctuated by periods in which prices have a non-random trend component; markets can reveal persistence---positive feedback loops that reward strength with more strength and weakness with more weakness---as defined by analytical tools such as the rescaled range analysis (Hurst coefficient) that was described in an earlier post on earthquakes and markets.
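For the curious, here is a rough sketch of rescaled range analysis (a simplification of my own; serious implementations correct for known small-sample biases):

# A rough rescaled-range (R/S) estimate of the Hurst exponent.
# H near 0.5 suggests a random walk; H > 0.5 suggests persistence.
import numpy as np

def hurst_rs(series, window_sizes=(16, 32, 64, 128, 256)):
    rs_means = []
    for w in window_sizes:
        chunks = series[: len(series) // w * w].reshape(-1, w)
        rs = []
        for chunk in chunks:
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviations
            r = dev.max() - dev.min()              # range
            s = chunk.std()                        # scale
            if s > 0:
                rs.append(r / s)
        rs_means.append(np.mean(rs))
    # Slope of log(R/S) against log(window size) estimates H
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

noise = np.random.default_rng(3).normal(size=4096)
print(f"Hurst estimate for white noise: {hurst_rs(noise):.2f}")  # near 0.5, biased a bit high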

The Levy flight is still a form of random walk---there is little evidence that forecasters can predict when and where markets will suddenly unleash terrifying, non-Gaussian price excursions. Many value investors have been caught on the wrong side of the downside moves, and have blamed them on "inefficient markets" or "animal spirits" (as if markets existed for their benefit and should be judged by how they reward any particular class of investment or trading strategy).

"Efficiency" in this context merely means that markets have a powerful way to react to the level of uncertainty in the environment, and of course the level of uncertainty is not stable. There is no requirement that markets cooperate by behaving in ways that are friendly to Gaussian statistical inference tools---markets do not "fail" when prices fall viciously to reflect an emerging reality, they would fail if they allowed investors to continue on the former course. Indeed, Nature itself is characterized by long periods of gradual evolutionary change punctuated by extreme events in which very aggressive selection pressures force massive changes. Natural history contains numerous mass extinction periods---"Great Dyings"---in which significant percentages of the dominant lifeforms on earth were wiped out, while former marginal players became the new big winners.



Crisis-Hunting Trading Strategies

In other words, markets are very efficient in that the transitions between the Gaussian world, which Taleb terms "Mediocristan" and which we could refer to as Regime A, and the Persistence-driven world, which Taleb terms "Extremistan" and we could refer to as Regime B, do not normally come with any warning. There is a deep unpredictability to their arrival, and attempts to find patterns or reliable indicators that would allow for the precise prediction of the timing of extreme moves have generally failed.



(billionaire hedge fund legend Julian Robertson shorted the NASDAQ bubble and, visibly shaken during a TV interview in 2000, was forced to shut down his previously successful Tiger Fund after sustaining large losses. As we all know, his investment thesis was later vindicated. Robertson is currently very bearish on the US economy and has put on a massive inflation bet by going long 2-year Treasury debt and short 10-year, thereby speculating that the yield curve will steepen sharply in response to a coming, possibly apocalyptic debasement of the US dollar. I am highly sympathetic to this macro position and will go over the mechanics of the problem in a future post)

This is why trading strategies based on "crisis hunting" or "persistence capture" are psychologically grueling: if the goal is to always make sure that the trading program is well-positioned within a breakout trend, the crisis-hunting program must treat every divergent move as if it was in fact the beginning of a breakout trend. Unfortunately, the majority of these divergent moves will be head fakes and the program will lose money (albeit small amounts of money) as Regime A dominates and prices quickly retreat back to their recent historical norms. The strings of small losers can go on for months until eventually Regime B arrives again and market behavior erupts to the upside or downside.

In an ideal world, the program would be able to tell that the markets were in Regime A and that any sudden price excursions were not going to turn into sustained trends. The program could ignore the head fakes and simply avoid trading during these quiescent periods (or even temporarily implement a naughty, pro-Gaussian counter-trend strategy), but then could jump back into the market just in time to get positioned in front of one of the monster waves of Regime B and surf it to great profits. I have never heard of a legit trading operation that has been able to pull this off repeatedly---the market's ability to switch between regimes is just too clever. In reality, the arrival of the monster waves appears to be unpredictable, and so crisis-hunting trading programs tend to have months of relatively flat, unexciting performance (characterized by small losers and winners), punctuated by dramatic periods of outsized gains. Possession of the ability to somehow systematically distinguish between the head fakes and the true breakout trends remains the Holy Grail of trading.
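To illustrate the payoff profile, here is a toy stop-and-reverse breakout rule (entirely my own construction, not anyone's actual program) run on simulated prices with one injected "Regime B" episode:

# A toy breakout rule: treat every new 50-day high/low as the start of a trend.
import numpy as np

rng = np.random.default_rng(4)
n_days = 2000
returns = rng.normal(0.0, 0.01, n_days)
returns[1000:1040] += 0.02                 # inject one sustained "Regime B" trend
prices = 100.0 * np.exp(np.cumsum(returns))

lookback, position = 50, 0
daily_pnl = []
for t in range(lookback, n_days - 1):
    window = prices[t - lookback:t]
    if prices[t] > window.max():
        position = 1                        # breakout above the range: go long
    elif prices[t] < window.min():
        position = -1                       # breakdown below the range: go short
    daily_pnl.append(position * (prices[t + 1] - prices[t]))

daily_pnl = np.array(daily_pnl)
print(f"Losing days: {np.mean(daily_pnl < 0):.0%}")
print(f"Total P&L:   {daily_pnl.sum():+.1f}")

The typical result is exactly the profile described above: a long grind of small whipsaw losses during the quiet regime, paid for by the one large trend the rule manages to catch.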

To summarize, the tricky part is this: the periods of divergence from historically quiescent periods come at random intervals, although both the severity and frequency of the extreme moves seem to be increasing as a result of the widespread use of some of the same innovations, strategies, and portfolio construction techniques. If a trader or strategic investor is equipped with an accurate analytical framework, then perhaps he or she can at least determine that certain environments are "primed" for big waves (relatively loaded up with latent potential for violence), but getting the timing right with any degree of precision is extremely difficult.

Efficient Markets Insult The "Elite"?

Many of us are very uncomfortable with the notion that our lives may be so governed by random processes. To someone who has aspirations of organizing and leading masses of people towards collective, political goals, the concepts of market efficiency and random walks go beyond discomfort and approach the level of true trauma, since market efficiency highlights distributed intelligence and individual decisions and sharply discounts the value of collective action, the acquisition of specialized training in forecasting techniques, and the prudence of commitment-based strategic planning (as we have discussed, a strategy based on heavy pre-commitment of resources towards a centrally-planned aim is only really rational if accurate predictions can be made regarding future states of the world).

"Progressives" who seek to harness the power of the masses for unified, centrally directed sociopolitical goals have long felt that the hands-off, essentially libertarian message of the EMH leads to a kind of anarchy and a celebration of self-interested behavior that subordinates the glories of collective action (and, of course, the attractive and intriguing "leadership" possibilities that such actions may present) to the insolent cult of the rugged individualist.

Here is Robert Shiller of Yale, one of the leading intellectuals of behavioral economics, writing on the notion of market efficiency in his widely-cited and enjoyable book Irrational Exuberance:

The theory that financial markets are very efficient, and the extensive research investigating this theory, form the leading intellectual basis for arguments against the idea that markets are vulnerable to excessive exuberance or bubbles. The efficient markets theory asserts that all financial prices accurately reflect all public information at all times. In other words, financial assets are always priced correctly, given what is publicly known, at all times. Price may appear to be too high or too low at times, but, according to the efficient markets theory, this appearance must be an illusion.

Stock prices, by this theory, approximately describe 'random walks' through time: the price changes are unpredictable since they occur only in response to genuinely new information, which by the very fact that it is new is unpredictable. The efficient markets theory and the random walk hypothesis have been subjected to many tests using data on stock markets, in studies published in scholarly journals of finance and economics. Although the theory has been statistically rejected many times in these publications, by some interpretations it may nevertheless be described as approximately true.


Note that Shiller's discussion of random walk rejection only applies to Gaussian random walks; as I mentioned before, non-Gaussian Levy flight random walks are in fact supported by the evidence, and they are even more vicious to the goals of elitist central planning than are their more benign and pleasant Gaussian brethren. If you tinker with a market that is prone to Levy flights and get something wrong, you can inadvertently unleash Hell.


("On my signal, unleash Hell.")

Perhaps attempting to garner support by revealing the great threat posed to central-planning enthusiast-type intellectuals by the EMH, Shiller goes on to neatly describe the social ramifications of the Chicago position on efficient markets (a ramification that creates a temporary alliance between two groups that would seem to normally be at odds with one another---government regulators and professional investors):

At its root, the efficient markets theory holds that differing abilities do not produce differing investment performance. The theory claims that the smartest people will not be able to do better than the least intelligent in terms of investment performance. They can do no better because their superior understanding is already completely incorporated into share prices.

If we accept the premise of efficient markets, not only is being smart no advantage, but it also follows immediately that being not so smart is not a disadvantage either. If not-so-smart people could lose money systematically in their trades, then this would suggest a profit opportunity for the smart money; just do the opposite of what the not-so-smart money does. Yet according to the efficient markets theory, there can be no such profit opportunity for the smart money.

Thus according to this theory, effort and intelligence mean nothing in investing. In terms of expected investment returns, one might as well pick stocks at random---the common metaphor of throwing darts at the stock market listings to choose investments.

Thus has the efficient markets hypothesis made enemies of two groups: active managers who claim that they are smarter than the market pricing mechanism (not all active managers make this claim, but those that say they have superior stock-picking skills certainly do), and governments who wish to intervene and change prices to reflect a social engineering mandate of some kind. The efficient market is above all a threat to the notion of an intellectual elite who are blessed with special insights into the correct pricing of assets.

In fact, the Efficient Markets Hypothesis (and the related concept of rational expectations, usually credited to John Muth and Robert Lucas and championed by Friedman's Chicago school) are incredibly democratizing, since they imply that the competence gulf between academic economists (and the politicians that they advise) and ordinary market participants decays to zero over time, and non-economists learn not to be tricked by inflationary policies and deficit spending programs. For those who peddle consulting and advisory gigs that, one way or another, seek to draw a sharp distinction between the illuminated individuals who can handle the truth and the dirty masses who cannot, Friedman became a devil figure, a heretic. Rational expectations was and still is seen by some as a serious blow to the prestige of the economics profession and a generator of problem children who "practice economics without a license," to use a phrase that I believe Paul Krugman has employed in the past.

................................................

Hopefully today's post has set the stage for the next, which will try to unpack the strategy of value investing and discuss its pros and cons.