By Adam Elkus
In the previous installment, I wrote about the distinctions between “failure is not an option” and “skin in the game.” Now, I will conclude by talking about the link between the two. I began in the first installment by talking about Obamacare and Clay Shirky’s feeling of shock that anyone would want to design a major sociotechnical system with the idea that the “failure is not an option” algorithm is desirable.
I have tried to argue that “failure is not an option” is a “simple” algorithm that is designed to ensure that a risky and complex venture can be carried through to completion. It does not guarantee that the venture will be successful on its own merits. In fact, it does not even address this question in the slightest. What it does do, however, is ensure that the venture can be carried through. By limiting the ability of the design to evolve in time, it ensures that purity of vision is maintained. By implementing the design with maximum force and/or velocity, it ensures that all of the necessary resources are devoted to the task. And by guaranteeing automatic consequences for failure (though, as the previous post explained, the distribution of consequence is variable), it creates a “Rubicon” effect that should motivate the organization implementing it to give full effort and not look back.
Distribution of consequence, however, is a subject that people often consider independently of the main algorithm. It is perhaps understandable that many would explain military-strategic failure by arguing that societal elites do not suffer consequences for failure, while the common soldier is punished for the most minute of mistakes. As a result, the argument goes, the following occurs:
- Objectives are ill-defined and vacuous
- Wars of choice are more common
- Indecisive wars are more common
Hence if elites are incentivized in the same way that soldiers are, wars should be less common and more necessary, objectives should be clearer, and the wars themselves should be fought with more decisiveness and vigor. On the surface, there is little objectionable about this. It is a creed that the martial conservative, the centrist, and the dovish center-leftist can all get behind. But there is actually a problem lurking behind this application of the “skin in the game” concept.
American analysis of strategy suffers from several flaws. One is that it is difficult to find American authors on strategy who have rigorously defined what a “well-defined” objective looks like. It is even less clear how a war of choice can be rigorously separated from a war of necessity without bringing in subjective judgment. The only term we can speak of with some confidence is decisiveness – when Americans think about decisive warfare, they usually identify it with Jomini’s explicit instruction to concentrate one’s forces at the decisive point, strike hard, and relentlessly scourge the enemy until he cries uncle. Decisiveness is about speed, fires on target, and destruction.
I will make “rigid” a latent variable for “well-defined.” A well-defined objective — good or bad — will be clear enough that everyone understands it and no one seeks to alter it. “Carthage must be destroyed” is an example I often use because it is as well-defined as possible. If Carthage has not been destroyed, the objective is not complete. If some Roman politician were to stand on a ship before a “mission accomplished” banner while Carthage still stood, its citizens had not been sold into Roman bondage, and its fields remained unsalted, he would be mocked in the same way Bush was after the Iraqi insurgency began. And I will also make “high stakes” a latent variable for “necessary.” Surely if a war is deemed “necessary” by the body politic, it must have very high stakes for the foundational pillars of that state — either ideologically (a threat to the nation’s conception of itself) or quite literally (an invading army at the doorstep). So why would a war not be well-defined, necessary, or decisive?
First, Clausewitz tells us that “policy” is the coagulation of a political process. Political preferences differ at every level and are aggregated in an imperfect fashion. Additionally, politicians prefer flexibility in all matters and would often rather focus on domestic policy than warfare. Wars are costly and risky, and when possible politicians seek some way of splitting the difference — like Obama’s idea of sending aid to the rebels but not bombing Syria. So when strategic objectives are well-defined, they tend to be rigid and somehow hardened against political interference. Keep in mind as well that the rigidity of a strategic end does not imply that correct ways of fulfilling it can be found that harmonize with the means available. Take Prohibition, for example. The end was extremely clear — eradicate alcohol consumption as a significant American habit. And it was also so rigid that it resisted repeal long after most realized that enforcing it posed significant challenges.
The conception of an issue as high stakes tends to produce rigid (a.k.a. “well-defined”) objectives. In Vietnam, American elites were convinced that supporting the tinpot dictator Diem’s South Vietnam was necessary to prevent the “dominoes” across the region from falling. The entire Paul Nitze-influenced vision of the Cold War was a mental Rube Goldberg contraption that took the fortunes of peripheral states in the Global South as input and produced strategic consequences for the homeland as output. Hence to many elites Vietnam was certainly a war of necessity, well worth committing American draftees to. And they would not yield from this course of action for fear not only of the Communists, but also of the domestic political consequences of backing down.
Decisiveness is trickier. Whether something is executed speedily and with sufficient force depends a great deal on the constraints in play. Fear of Chinese intervention foreclosed the obvious remedy to the Vietnam problem — destroying the military power of the North and calling it a day. So the speedy solution was out of the question. But America devoted substantial resources. I have relatives who visited Vietnam after the war and saw the gigantic craters left by the bombing. Only by ignoring the physical and human toll the US inflicted on Vietnam and its neighbors can we describe Vietnam as lacking the application of decisive force. In other areas, the US applied both speed and military decisiveness. The destruction of the Iraqi army in 2003 was both quick and rooted in the idea of precise yet overwhelming force (“shock and awe”).
There will always be some kind of inherent constraint on the use of force and the speed in which it is applied. Schlieffen’s plan was constrained in both speed and intensity by the logistics of the early 20th century, European politics, and the laws of physics. But one way that the politician can be marginally more certain that the design will be executed with martial vigor and urgency is if the requirements are rigid and the task is considered to be of high stakes. As noted before however, speeding up the application of force and throwing more resources into play is often a very risky endeavor. If you come at the king and you fail, what do you do?
What I am trying to suggest is that when “skin in the game” conditions exist, the decisionmaker is incentivized to employ the very forcing algorithm that Shirky views as so perilous and obviously counterproductive. In Shirky’s ideal world — and by proxy, the ideal world of strategic theory — the strategist is flexible, creative, and experimental. Such a strategist does not treat the task in a rigid, risky, and self-defeating manner, accounts for all of the entropic difficulties that come with the design and execution of strategy, and is reflexive and willing to abide by Moltke’s maxim that no plan survives first contact with the enemy.
Anton Strezhnev, in an eloquent critique of “skin in the game,” explains why the concept is unlikely to produce optimal behavior:
Sandis and Taleb’s argument is uncompromising, which perhaps makes it more appealing as an ethical claim than as a practical one. By arguing that agents are only justified in acting on behalf of principals when they have “skin-in-the-game,” they have assumed away the entire principal-agent problem. If the agent has the exact same preferences as the principal (i.e. they are exposed to the same risks), then there is no problem. The agent will always behave in the manner that the principal prescribes. …
In the real world, agents rarely share the same preferences as their principals and principals are almost never in perfect control of their agents. Power is shared and relationships are tense. Yet delegation is a necessary aspect of nearly all human institutions. Moreover, there is rarely a single principal. Agents face conflicting pressures from a myriad of sources. Politicians do not respond to a unified “constituency” but to a diverse array of “constituents.” So when Sandis and Taleb argue that decision-makers need “skin-in-the-game,” they raise the question of “whose game are we talking about?” …
Principals get noisy signals of agent behavior. It is unclear whether an outcome is the result of poor decision-making or bad luck. This distinction may or may not matter, depending on the case. However, in many instances where it is difficult to observe the agent’s behavior, the optimal solution to the principal-agent problem still leaves the agent somewhat insulated from the costs of their actions.
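Strezhnev’s noisy-signal point can be made concrete with a toy simulation. The sketch below is my own illustration, not anything from his post; the success probabilities and the share of diligent agents are assumptions. It shows why a naive “punish every failure” rule mostly punishes bad luck when even diligent agents sometimes fail:

```python
import random

def outcome(diligent: bool, rng: random.Random) -> bool:
    # Outcome is a noisy signal of effort: diligence raises the success
    # probability (assumed 0.8 vs. 0.4) but does not guarantee success.
    return rng.random() < (0.8 if diligent else 0.4)

def unjust_punishment_share(trials: int = 100_000, seed: int = 0) -> float:
    """Under a naive 'punish every failure' rule, what share of the
    punished were diligent agents who simply had bad luck?"""
    rng = random.Random(seed)
    unjust = failures = 0
    for _ in range(trials):
        diligent = rng.random() < 0.9  # assume most agents behave well
        if not outcome(diligent, rng):
            failures += 1
            unjust += diligent  # punished despite doing everything right
    return unjust / failures

print(round(unjust_punishment_share(), 2))  # roughly 0.75 under these assumptions
```

Because the principal observes only the outcome, punishing on outcome alone hits the diligent about three times out of four here — which is exactly why the optimal contract in such settings leaves the agent partly insulated from the costs of their actions.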
This is the Achilles heel of “skin in the game” — particularly when, as I noted above, the rigidity of a design does not determine how it should be implemented in a fluid situation. I will use the fictional example of Starcraft: Brood War’s United Earth Directorate expedition to show how this can play out even when risk is shared to a degree unlikely in the “real” world. Admiral DuGalle and his subordinate Vice Admiral Stukov are in charge of a UED fleet that has traveled far from its logistical base into the war-torn Koprulu sector. DuGalle, the commander of the fleet, has a very clear and rigid objective: pacify the sector in which the game universe takes place. In order to do so, he and VADM Stukov must decide what to do about the Psi Emitter, a powerful device capable of controlling the hive-minded Zerg aliens.
Stukov discovers that the UED’s native Koprulu informant, Samir Duran, is actually the Ahmed Chalabi of the Starcraft universe. Although the UED depends on him for intelligence regarding the politics and strategy of the operational environment, he has his own agenda. Stukov decides that the Psi Emitter must be utilized; DuGalle, convinced by Duran, believes the Emitter should be destroyed. The stakes are extremely high, as the UED has a fixed number of forces and is far from home. Either they succeed, or they will be, like the Athenians at Syracuse, stranded in a hostile land. So Stukov, fearing that Duran’s influence has blinded DuGalle to the potentially dire ramifications of destroying the Psi Emitter, decides to activate it on his own.
DuGalle receives a “noisy signal” of Stukov’s failure to implement his orders. In the game, DuGalle and Stukov are presented as lifelong friends and companions, and it is clear that DuGalle is puzzled by Stukov’s sudden insubordination. One would imagine that the optimal way for DuGalle to resolve the issue would be to first take all necessary measures to stop Stukov from doing what he wanted with the Psi Emitter, and then ascertain whether his dear friend might have been correct in seeking to utilize it. Punishment would be decided by the actual information gathered about why Stukov disobeyed orders. This “optimal solution,” as Strezhnev argues, would still leave Stukov somewhat insulated from the costs of his behavior, given that he committed a drastic act of insubordination that could potentially threaten the entire success of the expedition. In the abstract, that act would justify a drumhead court-martial and execution (or at the very minimum the harshest non-lethal punishment available, had Stukov acted out of malice).
Instead, DuGalle regards Stukov as a traitor and orders his execution. An attack force assaults Stukov’s men at the Psi Emitter facility and kills him. It is only after the grim task is completed that DuGalle realizes that he was wrong, and that Stukov in fact had been correct. At the end of the UED campaign in Brood War, as DuGalle prepares for his own suicide (to pre-empt being killed by the victorious Zerg), he bitterly writes that his biggest regret was that his “pride” killed Stukov. However, DuGalle is being far too harsh on himself. What actually killed Stukov was the rigid and automatic application of punishment. The goal was clear, the force available was great, and the speed of the UED maneuvers was as rapid as anyone could expect in a strategy video game.
But even a clear policy and a clear strategy will run into difficulties in implementation, because strategy is (as Moltke noted) a game of expedients. It evolves in time. And as Strezhnev noted, the “skin in the game” concept assumes the principal can assess whether the agent executes the task and assumes away the noise and uncertainty inherent in all such relationships in the real world. We can only guess what kind of Professional Military Education (PME) is taught within UED war colleges. But had, perhaps, someone taught DuGalle about principal-agent theory, the good admiral might not have lost his best friend.
In the grand scheme of things, it is hard to measure the impact of Stukov’s death on the ultimate strategic outcome. The UED failure was not deterministic. But Clay Shirky certainly would not have approved of a strategic plan involving the infusion of a non-renewable military force into a complex interplanetary system being contested by the Zerg, the Terran Dominion, the Protoss, and Raynor’s Raiders. Nor would he have approved of the rigid design and its inability to be qualitatively altered without the drastic step of mistakenly killing a high-ranking officer with different ideas. Failure was not an option for DuGalle and his forces, and they were massacred to a man by the vengeful Zerg leader Infested Kerrigan (the Queen of Blades) as they desperately sought to flee the battlefield.
This latter outcome implies something else about “skin in the game” that is very disconcerting. Punishment for failure here is equal for every UED soldier. They are all killed. But since they are all dead and floating in space somewhere, they cannot learn from experience. If they could respawn at the beginning of the campaign, knowing what they did about the consequences of their choices, they could perhaps learn an optimal strategy over repeated tries if the campaign’s parameters were held constant. But they can’t, since they are dead. And because they are dead and the specific conditions of the campaign are now dated, a different set of UED policymakers and soldiers will be tapped to undertake a different campaign should the UED choose to re-attempt the conquest.
And within this new campaign, the sample space of choices and outcomes will be wholly different. Brood War 2 will not be like Brood War 1. So “skin in the game” can only be expected to reasonably optimize behavior if we treat the world as we would in “Groundhog Day” — held constant as you try and try, changing only when you succeed. And when you fail, you respawn at time 0, at the beginning of the world sequence, with the world’s parameters reset to their starting point. Good for Bill Murray, not for poor Admiral DuGalle.
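The respawn point can be sketched as a toy learning problem (again my own illustration, with assumed payoff numbers). A simple epsilon-greedy learner choosing among three strategies gets close to the best achievable reward when the world’s parameters are held constant across campaigns, Groundhog Day-style, but its hard-won experience is worth far less when each new campaign redraws those parameters:

```python
import random

def run_campaigns(n_campaigns: int, tries: int, redraw_world: bool,
                  rng: random.Random) -> float:
    """Epsilon-greedy learner over 3 candidate strategies. Returns the
    fraction of the best achievable reward actually obtained."""
    payoffs = [rng.random() for _ in range(3)]   # success prob of each strategy
    wins, plays = [1, 1, 1], [2, 2, 2]           # mild optimistic prior
    total = best = 0.0
    for _ in range(n_campaigns):
        if redraw_world:
            payoffs = [rng.random() for _ in range(3)]  # Brood War 2 != Brood War 1
        for _ in range(tries):
            if rng.random() < 0.1:               # occasionally explore
                arm = rng.randrange(3)
            else:                                # exploit accumulated experience
                arm = max(range(3), key=lambda a: wins[a] / plays[a])
            reward = rng.random() < payoffs[arm]
            wins[arm] += reward
            plays[arm] += 1
            total += reward
            best += max(payoffs)
    return total / best

# Average over several worlds: experience pays only when the world repeats.
fixed = sum(run_campaigns(10, 100, False, random.Random(s)) for s in range(30)) / 30
redrawn = sum(run_campaigns(10, 100, True, random.Random(s)) for s in range(30)) / 30
print(fixed > redrawn)
```

When the payoffs are redrawn each campaign, the learner keeps exploiting statistics gathered under conditions that no longer hold — which is roughly the position of a new set of UED policymakers studying the first expedition’s failure.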
In sum, we know two things for sure. First, a forcing algorithm merely guarantees that a venture can be launched and carried through. DuGalle, in the UED opening, asks if Stukov is prepared to go “all the way,” and the “failure is not an option” algorithm ensures that the answer to the question is affirmative. Second, regardless of whether Stukov’s death doomed the UED, it is hard to see how mistakenly killing a senior leader who had good ideas and rapport with the commander of the expedition somehow optimized the UED’s prosecution of its campaign. Either way, it is also hard to see how the larger strategic failure might optimize UED behavior in the future. Perhaps the next expedition will have a better success rate, but we cannot plausibly claim anything more than a weak causal link between the failure of the first expedition and the possible success of the second.
The complexity of the “is failure an option” series of posts goes to show several unfortunate things about strategy. First, there is often too much confusion of the prescriptive with the descriptive in strategic discourse. In an ideal world, strategy would be executed in the way Shirky recommends. But our world is never ideal. Second, appealing and normatively based folk theories about responsibility and optimization of behavior can have catastrophic consequences when actually put into place. We have to be rigorous about the microfoundations behind them.
When studying strategy, we must keep in mind the constraints on strategic choice, as well as realistic microfoundations that would inform the interactions and incentives underneath gauzy rhetoric. Otherwise we may be conquered by the metaphorical Kerrigans that always threaten to thwart our hopes, desires, and plans.