By Adam Elkus
Clay Shirky has a must-read post on the Obamacare website fiasco. Though I approve of his overall message, I am not sure I fully agree with Shirky’s diagnosis that Obamacare is “doomed.” Co-blogger W.K. Winecoff noted on Twitter that despite the shambolic rollout, there is still a nontrivial chance that the Affordable Care Act could ultimately persist if its key provisions survive long enough to force societal adaptation to a new equilibrium. To indulge the political science cliche, path dependence matters. And there has been remarkable variation in the rollout so far. As a foreign policy and national security blogger I’m not going to speculate any further — this isn’t really the point of the post.
I’ve compiled some notes over time about the subtext of strategy theory relevant to the themes Shirky discusses here, and there is too much material for one post. Hence I’m going to break it up into three posts that touch on the theme of the difference between optimal and politically realistic design and implementation of strategy.
Why am I interested in the ACA, after all? Well, Shirky’s post is really not about Obamacare. Instead, it’s about how not to think about sociotechnical systems. And as Ezra Klein noted, the problem here is not just a website but the way the technical infrastructure intersects with the ACA’s core design choices. Shirky diagnoses the faulty rollout as yet another case of “waterfall” technology development methods:
The preferred method for implementing large technology projects in Washington is to write the plans up front, break them into increasingly detailed specifications, then build what the specifications call for. It’s often called the waterfall method, because on a timeline the project cascades from planning, at the top left of the chart, down to implementation, on the bottom right.
Like all organizational models, the waterfall method is mainly a theory of collaboration. By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work. Instead, the waterfall approach insists that the participants will understand best how things should work before accumulating any real-world experience, and that planners will always know more than workers.
The waterfall vs. agile design methodology dispute is old hat in technology circles. So why did the tech-savvy Obama administration ignore best practices? Shirky explains that the White House may have seen a staged or progressively scaled rollout as a boon to those seeking to repeal the bill. Hence “failure was not an option” and the White House opted for a high-risk/high-reward plan that minimized opportunities to painlessly patch small failures. Facing intractable political opposition and a mixed public verdict on the ACA, the WH feared that iterative improvement might also unintentionally raise the risk of Obamacare being halted altogether. So it rolled the domestic political equivalent of war’s “iron dice” and implicitly decided to risk everything. Indeed, failure was not an option. Once the decision was made to go forward, it was all or nothing: there was no turning back.
To a technologist like Shirky, this kind of logic is self-evidently ridiculous. He rails at length against Beltway perceptions of technology, refusing to build on his own implicit recognition that politicians can sometimes make informed decisions based on a calculus different from technological best practices. Indeed, as I will argue in this post, Shirky’s belief that agile design is self-evidently superior mirrors some contradictions in strategic theory.
Strategic theorists often talk about the need for a proper, well-formulated allocation of ends, ways, and means that is also flexible enough to compensate for war’s fog and friction and for the enemy’s ability to frustrate even the most well-planned theories of victory. Because so much in war rests on subjective assessment of the situation under incomplete information, a strategist is in many ways implicitly Bayesian. Indeed, Colin S. Gray casts strategy as an iterative process of dialogue and negotiation: a constant and often frustrating set of repeated trips across the “strategy bridge” between violence and political purpose. This does not map exactly onto Shirky’s conception of agile development, but it shares many similarities.
The paradox is that the “drums and trumpets” folk theory of strategy that audiences eat up is a story of endless military equivalents of what Shirky describes in the Obamacare rollout. The Great Men of History, at least as seen by those who consume popular military history, are those who embark on high-risk endeavors with rigid plans and timetables and little capacity for iterative learning and improvement. Cortes famously scuttled his ships: he and his men would either conquer Mexico or die trying. The mythological Churchill was a man of superhuman will who fought virtually alone against Hitler despite the total defeat of British landpower in Europe and refused a separate peace. The wildly popular “historical” movie 300’s interpretation of the Spartans relies heavily on the phrase “with your shield, or on it” as its motivating credo. You cannot get more “failure is not an option” than that.
It’s easy to mock all of this as so much macho posturing and militarism. But what if “failure is not an option” is actually not a credo but a forcing mechanism that a decisionmaker may invoke under certain circumstances? Indeed, perhaps its recurrence can be explained by the persistence of these environmental conditions.
- The decisionmaker has a complex and risky design they want to execute. It may be controversial, or at the very minimum involve a substantial degree of uncertainty and steep risks.
- Agile development makes the decisionmaker vulnerable to some kind of internal or external problem. Perhaps, like the Bush administration’s marginalization of the CIA and State Department during Iraq War planning, they fear being undermined from within. Like von Schlieffen croaking out last-minute instructions about keeping the right wing strong in the Western front invasion plan, they fear the purity of the design being compromised in devastating ways. Or, as Shirky claims regarding Obamacare, fear of powerful external opposition plays a role.
- The stakes are high. Risk is perceived to be extreme and perhaps existential, but reward is seen as worth it. The deterrence theorist Scott Sagan argues that Imperial Japan was willing to risk devastation because the consequences of not fighting would be a subtraction of “Imperial” from Imperial Japan.
- In order to be successful, the design must be executed rapidly and decisively to surmount the substantial obstacles to success. Deviations from the timetable cannot be risked, even though the scale of the plan would seemingly warrant caution.
- In order to overcome the risks implied by environmental condition 1, the design must be tightly controlled, supervised, well-formulated, and rigid in character.
- In order to ensure that the venture succeeds, there must be a perceived Rubicon that, once crossed, functions as a point of no return. Those involved must either succeed together or fail together and suffer the consequences.
“Failure is not an option” is thus a simple algorithm for optimizing a complex and risky venture in the conditions specified above. It consists of three instructions, one of which (as I will note later) is also an algorithm in its own right:
- Limit the ability of the design to evolve over time as much as possible, allowing only tactical adjustments.
- Implement the design with maximum force and velocity, accepting little room for error as a consequence.
- Guarantee automatic consequences for failure. Either the venture succeeds or a great penalty is dispensed. Make defection from the venture impossible.
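The three instructions above can be sketched as a toy decision model. This is purely illustrative: every name and number below is my own assumption, not something from the post, and it captures only the perceived payoff logic of a decisionmaker who believes iteration invites sabotage or dilution (environmental condition 2).

```python
# Toy model of the "failure is not an option" commitment device.
# All class names, parameters, and values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Venture:
    success_prob: float  # perceived chance the rigid, locked-in plan succeeds
    reward: float        # payoff if the venture succeeds
    penalty: float       # automatic cost of failure (instruction 3: no defection)
    drift_risk: float    # perceived extra failure chance if the design is
                         # allowed to evolve (condition 2: vulnerability)


def expected_value(v: Venture, allow_iteration: bool) -> float:
    """Perceived expected payoff.

    Locking the design (instruction 1) avoids drift_risk; either way,
    failure triggers the full automatic penalty (instruction 3).
    """
    p = v.success_prob if not allow_iteration else max(v.success_prob - v.drift_risk, 0.0)
    return p * v.reward - (1 - p) * v.penalty
```

Under these (assumed) perceptions, locking the design dominates iterating whenever `drift_risk` is positive, which is exactly why the decisionmaker in the conditions above treats "failure is not an option" as an optimization rather than mere posturing.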
In part two of this post, I’ll look at the differences and similarities between “skin in the game” and “failure is not an option.”