The Resurgent Defense of Masculinity is Awful

By Graham Peterson

I’ve taken an interest in men’s issues since I read Iron John by Robert Bly as a teenager, and presented it to my Men’s Studies class.  Yep, men’s studies.  I’ve taken an even greater interest in the last couple of years, since a surge of what’s being called Men’s Rights Activism has splashed the internet like the side of a toilet bowl.

A lot of the new public and academic debate over men’s issues focuses on women.  But it’s mostly packaged, or at least titled, as an argument in support of men.  I can’t think of a stranger way to define men’s issues and rights than by the progress (or not) of women.  And in fact, that logic is precisely the tack that Catharine MacKinnon, a very smart and very deservedly famous Marxian feminist, used to outline female sexuality and identity in the 1980s.  In her view, which at different dilutions became a lot of people’s views, femininity was defined as a reflection of men’s desires because men had all the power.  I think that idea was totally overblown, and I think it’s a huge mistake to revive that kind of logic and just reverse the roles: “boo hoo, look at how the feminists are oppressing us.”

The fact of the matter is that we men, if we do need and want public advocates, can do a lot better than letting internet-warrior Men’s Rights Activists, and conservative women, do our talking for us.  Don’t get me wrong.  I’ve got love for pissed-off ex-husbands who deal with divorce laws that were written with Leave It to Beaver in mind.  And I’ve got love for conservative women.  I’ve even got some love for identity-insecure, sexless guys who turn to Pick Up Artistry guides and evolutionary psychology — keep tryin’, guys; you’ll get it.

But these people’s definition of masculinity is a joke.  One of the more brilliant arguments to come out of feminist thought was that femininity had been rather transparently socially constructed, and at that, very recently (most of what we believe to be “naturally feminine” was, as I understand it, a product of a radical change in mores during, say, the Victorian era).  These brilliant women began arguing in the 1970s, and continue to argue, that the definition of what it is to be a woman is theirs to make.  Just so.  And remade it has been, and continues to be.

But where are we at on outlining what a guy is, guys?  I’m not sure what’s more emasculating — the radical entrails of feminist thought that typified masculinity as a social wrecking ball, or having conservative women and frustrated internet warriors on bodybuilding forums do our talking for us.  One thing is for sure: this whole “let’s get back to the Marlboro Man” tack isn’t going to work (the Marlboro Man is himself a sheer fabrication — Marlboro was a struggling brand of women’s cigarettes before its marketers came up with the cowboy imagery).

Since the definition of western femininity was shaped recently, it stands to reason that the 1950s version of masculinity was as well.  And frankly, the idea of The Masculine Man as a manual laborer, farmer, warrior, and so forth is just stupid.  Do you know where those ideal types come from, guys?  About 9,700 years of human history where there were farmers, there were slaves, there were kings, and there were wars.  And that was life.  And everyone died early and lived forever next to the small-town neighbors they hated.  And just about everyone was abysmally poor.

My guess is that late 19th and early 20th century men latched on to these kinds of aristocratic ideal types precisely because the material significance of those roles was abating.  OK, we had a couple really big wars, but you get the picture.  Farming as a share of national product was in decline as its technological efficiency improved.  People flocked to urban centers.  And technological progress opened up a myriad of new occupations that nobody had any heat-and-serve gendered identities to attach to.

So people seem to have generally made up and clung to whatever romantic story about a long-forgotten past they could come up with in order to form the archetypes of 20th century masculinity and femininity.

The idea of man-as-provider, for instance, seems to be a product of the unusual material circumstances of the early and middle 20th century, when economic growth benefitted the poor and middle classes to the degree that a man could support his family on a single salary.  Such was not the case for most of human history — everybody worked and provided, including the chilluns (which is why I find the opposition to child labor among the poor to be strange).  And this is how social narratives get written: we take present circumstances and project them backwards in order to justify and explain them — “aha!  you see, this was inevitable!”  Strangely then, the materially unsupportable proposition that the men have always gone off hunting, and the women have always stayed home to vacuum the long house, took hold.

This approach to defining masculinity and femininity is scientifically bankrupt, and culturally misguided.

And it’s only for a severe lack of imagination that concerned thinkers and bloggers today have resorted to calls for us men to get back in the military, and subsidize factory and construction work.  These calls serve only the daydream about masculinity that a lot of business-doing and technology-wielding guys in the 20th century cooked up in order to remake their daily activities — cooperating, chatting, team-working, thinking, and doing deals with one another — with aristocratic and agrarian window dressings.

Aristocratic and agrarian tropes may have been all they had to work with, but it’s not all we have to work with, and it’s time to get creative.

Where Does the Sunk Cost Fallacy Come From?

By Graham Peterson

For the uninitiated, which I don’t blame you for being considering the levels of excitement in economics education, a sunk cost is essentially the idea that once a resource has been expended, its importance is through — the jig is up — the cost is sunk.  All that matters in a strictly economic calculus are future expectations of costs and benefits — agents (and firms) pursue an activity when its benefits are larger than its costs.  Forget the past.
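
A minimal sketch of that decision rule, with made-up numbers (nothing here is from a real budget), might look like this:

    # Hypothetical illustration of the textbook rule: sunk spending is ignored;
    # only expected *future* benefits and costs enter the decision.
    def should_continue(expected_future_benefit, expected_future_cost, already_spent=0):
        """Strictly economic rule: 'already_spent' is accepted but deliberately
        never used -- that cost is sunk."""
        return expected_future_benefit > expected_future_cost

    # A project we have already poured 900 into, with 100 left to spend and 80 of value to gain:
    print(should_continue(expected_future_benefit=80,
                          expected_future_cost=100,
                          already_spent=900))   # False -- stop, no matter how much is sunk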

A sunk cost fallacy, then, is the recognition that people persistently and ubiquitously look to the past to justify forward-going investments.  “Throwing good money after bad” is a very 1950s colloquial demonstration of the idea.  The constant investment in NASA, because we’ve already spent such enormous gobs of money, is another classic economist-as-political-curmudgeon example.

But it goes much deeper.  An astute microeconomics instructor pointed out to me once that regret itself is, in the economic view, a blank mystery because it is a sunk cost fallacy — what’s done is done; water under the bridge.  Buyer’s remorse in this view doesn’t exist.  Nor does the very straightforward idea that people make mistakes in their decisions at all (that is, that their preferences at t_1 can and often do conflict with those they had at t_0).

But a potential explanation for sunk cost reasoning has been missed.  In the economic view, there is no ego and id, no me versus myself, no internal dialogue with one’s conscience or consciousness.  Agents with a given portfolio of preferences look straight out through their eyeballs at the future and decide how to act — they don’t look back within themselves and reflect.

Now this view of self-reflection, an internal dialogue, and an inherent ego-alter social relationship within the individual herself is relatively noncontroversial in most of the other human sciences, going back to at least George Herbert Mead and William James.  And assuming this position starts to make sunk cost reasoning look a lot more reasonable.

Imagine that in this internally-social relationship you have with your self, you form the same kinds of social commitments, and abide by the same kinds of reciprocity rules, that you do with other people.  Now we’ll reintroduce the economics.

Imagine that your self indeed does rationally weigh expected costs and benefits, and that when your self commits to an action, he, a principal, signs a contract with you, an agent (your conscience or executive function).  Given this contract, you, as a contract-bound agent, carry out the contract.

But of course as time goes on into t_1, you, as an agent of the principal, continue to gather information and weigh expected costs and benefits as well.  You realize that the contract you are carrying out failed to account for contingencies that you are now aware of, because your self didn’t have complete information at t_0.  You thus enter into negotiations with your self in an attempt to decide whether to nullify the contract or not.

Fallacious sunk cost reasoning, in this scenario, would seem to arise from your self protesting that you had good reason at t_0 to embark on the action it contracted with you to undertake: “I put a lot of time into making this gravy because I expected to get sufficient long run benefits from making so much of it.”  “Yeah, but bro, it’s starting to spoil and now you’ve got to invest more time into reheating it so it doesn’t.”  (yup – I thought of all this while making breakfast – at 3pm)
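
For concreteness, here is a toy version of that internal contract and renegotiation — all numbers made up, a sketch of the story above rather than anybody’s formal model:

    # Toy intrapersonal principal-agent model (hypothetical numbers).
    # t0: the principal-self signs the contract given its expectations.
    # t1: the agent-self re-evaluates with updated information.
    def contract_signed_at_t0(benefit_t0, cost_t0):
        # The principal-self commits only if the plan looked worthwhile at t0.
        return benefit_t0 > cost_t0

    def agent_decision_at_t1(benefit_t1, cost_t1):
        # A forward-looking agent-self renegotiates on t1 information alone.
        return benefit_t1 > cost_t1

    def sunk_cost_reasoning(benefit_t0, cost_t0):
        # The fallacy: the principal-self insists on the t0 terms of the contract
        # ("I had good reason to start this"), whatever t1 looks like.
        return contract_signed_at_t0(benefit_t0, cost_t0)

    # The gravy: worth it at t0, but at t1 reheating costs more than the gravy is worth.
    print(contract_signed_at_t0(benefit_t0=10, cost_t0=6))   # True  -- the plan was signed
    print(agent_decision_at_t1(benefit_t1=3, cost_t1=5))     # False -- renegotiate, let it go
    print(sunk_cost_reasoning(benefit_t0=10, cost_t0=6))     # True  -- the fallacy keeps cooking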

I Have the Right To Be A Bigot!

By Graham Peterson

Yes, you do, as far as the United States Government is concerned — and that’s an incredibly important constitutional protection.  But I believe, as do most of my fellow social liberals, that encouraging tolerance and open debate is a good thing.

But informal tolerance is different from formal tolerance, your First Amendment protection of speech.  Zach Ford at ThinkProgress got the point exactly right when he blasted social conservatives over their defense of the Duck Dynasty guy’s comments.  Andy Perrin gets it right over at Scatterplot, too.  The refrain emerging from social conservatives that liberals need to tolerate intolerance is dumb.

Do I agree that my beautiful and inspired friends on the left are intolerant of things they shouldn’t be?  You bet.  I think a lot of people are intolerant of things they shouldn’t be.  Do liberals have a greater tendency to answer arguments they don’t like by leveraging official bureaucratic channels?  You bet.  That’s troubling, and it’s something we in a free society can argue over, hopefully reifying our collective belief in tolerance of speech.

But the demands of social conservatives that liberals tolerate intolerance itself are nonsense.  The question, “does tolerance require one to be tolerant of intolerance?”, is a logical absurdity, not a clever tu quoque fallacy to be leveled at social liberals in service of promoting or defending bigotry.  It’s just like the omnipotence paradox in philosophy: “can God create a stone that is so heavy he can’t lift it?”  If you accept that a premise and its negation are both true, then sure, you’ve got a logical contradiction before you even try to derive a conclusion.

And that might seem like a neat way to make your opponent look silly, but in the world of deductive logic, you either subscribe to tolerance or you subscribe to intolerance, not both.  Now, humans are full of logical contradictions, so they’re not hard for a reasonably clever person to find — but it’s a trivial argument, because everyone maintains some degree of mutual inconsistency in their beliefs.

The argument here then, since we are all both tolerant and intolerant, is over whether we should attempt to become more tolerant, or less.

That’s a fight that social conservatives are losing badly, and will continue to.  Social conservatives are upset about the efforts of social liberals to redefine social taboos and guidelines, calling it “political correctness.” But social conservatives do the same thing — they just call it “politeness.”  The only difference is that social liberals are attempting to actively reconstruct social norms, and conservatives are attempting to actively maintain established norms.  Both want dominant norms.  Reasonable people can disagree over what those norms ought to be without blasting the other side for wanting norms in the first place.

In my view, social liberals win all day with a philosophy that says groups ought to be allowed their little version of politeness or political correctness as long as they don’t coerce others into subscribing to their beliefs with threats of violence.  And that’s what First Amendment rights to speech prevent.  Let the firings, screaming, and reasoned persuasion continue.

The ASA’s Boycott Lacks Seriousness

By Kindred Winecoff

The Executive Committee of the American Association of Universities has issued a statement condemning the American Studies Association’s boycott of all Israeli academic institutions. The AAU’s decision makes sense, and I support it. Claiming that all academics at all Israeli institutions bear responsibility for all actions taken by the government of Israel — whatever you think of those actions — is absurd. Playing fast-and-loose with academic freedom is more than regrettable in an environment where such liberties are under increasing threat at the margin.

I find it bemusing that someone like Corey Robin would disagree, given his own institution’s recent employment of General Petraeus. Robin protested that decision, vehemently, but given that his side was unable to prevent Petraeus from teaching at CUNY I doubt he would appreciate being banned from conferences, publications, or other academic symposia because his institution hired the leader of a war many believe to have been unjust and illegal. The American Association of University Professors (sensibly) opposes blanket boycotts as a matter of principle for just this kind of reason. In this case the Palestinian government agrees. Solidarity should not just be in the mind, and one can support Palestinian self-determination (and oppose the expansion of settlements in the West Bank) without playing games of guilt by association.

Tyler Cowen argues the positive case — would the world be better if the boycotters’ demands were met? — but I think that’s the wrong way of looking at it. This is pure mood affiliation via cheap talk. If it would actually have any real world impact I doubt most of these folks would support such a boycott for precisely the reasons Cowen gives. And if they did we would easily be able to identify their moral and scientific unseriousness.

 

UPDATE: I took a closer look at the text of the ASA’s website and one of the things I wrote above is misleading if not outright wrong. Specifically, individual Israeli academics are not being boycotted; only institutions. In practice this might be a distinction without a difference… but maybe not. In any case, here is the full statement from the ASA. The relevant part:

Our resolution understands boycott as limited to a refusal on the part of the Association in its official capacities to enter into formal collaborations with Israeli academic institutions, or with scholars who are expressly serving as representatives or ambassadors of those institutions, or on behalf of the Israeli government, until Israel ceases to violate human rights and international law.

The resolution does not apply to individual Israeli scholars engaged in ordinary forms of academic exchange, including conference presentations, public lectures at campuses, or collaboration on research and publication. The Council also recognizes that individual members will act according to their convictions on these complex matters.

A Little Bit of Macroeconomy – A Lotta Bit of Sociology

By Graham Peterson

Bob Solow, the great man that he is, has written an incredibly good piece over at The New Republic, the great magazine that it is, on Alan Greenspan’s new book.  Bob’s not happy.  According to Bob, Alan made an enormous mistake in encouraging financial deregulation because of his alleged Randian ideology.  Apparently Greenspan turns economics into ethics, and poorly.

In elementary economics, workers freely enter and exit firms, turning firms into customers of workers’ services, which bids up workers’ wages to what’s called the workers’ marginal product.  That means I get paid exactly the value of the last widget I churn out at the end of the day, and Karl Marx was wrong about exactly everything.*
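
In symbols, the textbook claim being paraphrased here — my notation, not Greenspan’s or Solow’s — is that a competitive, profit-maximizing firm hires labor until the wage equals the value of the marginal product:

    % Standard competitive labor-demand condition (illustrative notation):
    % the firm chooses labor L to maximize profit, given output price p,
    % production function f, and wage w.
    \max_{L}\; p\, f(L) - wL
    \quad\Longrightarrow\quad
    w = p\, f'(L)

That last term is the value of the last widget: in this stylized world, it is exactly what the marginal hour of work gets paid.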

Greenspan puts it thusly: “Market competition ensures that [workers’] incomes equal their ‘marginal product’ share of total output, and are justly theirs” (my bold).  Solow returns the volley, reminding us that people start life with different endowments (sociology!) that affect their marginal productivity, and that we end up with an unequal distribution of consumption opportunities.  “There is nothing just about it,” says Bob.

There it is — justice.  This, I believe, is the principal driver of the last few hundred years of macroeconomic debate — cultural negotiation of the moral deserts in the economy.  It’s why I’ve never bothered to learn much macroeconomics.  Economists have possessed in the last half of the 20th century some of the most accurate national income data in the world, and some of the most sophisticated statistics anywhere in the academy.  With these data they have been able to resolutely determine that the effects of fiscal and monetary intervention are ambiguous, maybe.

So the leading figures in the discipline have taken to a public pageantry on the internet (hi, Paul!), neatly separated into interventionist and laissez faire camps.  Ad hominem charges about a lack of scientific ethics — “No, YOU’RE the ideologue” — have become a perfectly acceptable go-to for settling debates . . . that never get settled.

Now, I love economists.  And Solow’s growth model is one of the main reasons I left for sociology — the foundations of economic innovation, and hence of economic growth, appear to be sociological.  But some candor in these debates would be refreshing.  And frankly that would begin with macroeconomists (and the rest of social scientists, for that matter) admitting that they are, roughly put, ideologues, and that there’s nothing wrong with that.  The issues here are ethical, and the self-styled positivists unselfconsciously construct elaborate mathematical and statistical arguments that are consistent with their ethical priors.  The way to stop this is to ask people to unmask their ethical and political priors instead of asking them to pretend they don’t have any when they sit down to solve for an equilibrium.

The basics of Keynesian intervention are that low aggregate demand, caused by a variety of proposed mechanisms, makes businesses hesitate to invest in production.  The logic then becomes, “give people money to consume, they’ll consume, and businesses will want to invest in greater production and hire people.”  Laissez faire types argue in turn that if you want more stuff, you have to make more stuff — employment starts and ends with saving more, so that one can invest more, and hence produce more.  Note right away the ethical arguments being made here, in the style of what George Lakoff has called the Strict Father (leave it alone!) versus Nurturing Mother (intervene!) views of government.

Keynes’ book wasn’t revolutionary because he had mind-blowing mathematics that suggested a clear functional form to fit statistically, and inarguably well-fit data — it was revolutionary because he turned the basic tools of economics on their head, in a mostly fiscally conservative political environment, to suggest that the government should indeed help the damn poor, and massively.  And old Uncle Milton’s political economy and monetary theory didn’t succeed in informing a rejuvenated fiscal conservatism because he had a mind-blowing theory of the money supply — it succeeded because it was all backed by claims on human freedom — strict father says the best way to parent the kids is to let them skin their knees and learn.

What’s really being fought over here are deep-felt ethical and political principles.  Economic stimulus, whether by unemployment benefits, earned income tax credits, or other social spending — to laissez faire types — is coerced charity.  Most of these people have no problem with charity in principle, as long as it’s private and voluntary.  But what really gets laissez faire types mad is Robin Hood with a Ph.D. in economics.

Financial regulation, or really all business regulation, in the laissez faire view, is not a matter of a reasoned cost benefit analysis over whether the stimulating effects of it outweigh its distortionary effects — it’s a simple matter of interference in private affairs.  You know how most sane women feel about their uteruses and their right to dispose of them as they please?  That’s how laissez faire types feel about their right to transact with whom they please.

And it goes the other direction — interventionists will sing a beautiful harmony about the supposedly value-free and measurable impacts of sensible regulation.  What ultimately motivates their analysis is a story in which bad or ignorant or negligent businesspeople do bad things to people.  “Unfettered market” is a fancy term for a group of out-of-control assholes who, if we keep a leash on them, can give us lots of neat stuff despite themselves, but will eat us for dinner if we don’t hold a gun to their heads while they do business.

These macro stories, you see, are all about good and evil, and about ethical harms and justices.  The actual material impacts of macroeconomic policies are almost entirely ambiguous, and anyone with an undergraduate understanding of economics should understand as much — the little deadweight-loss triangles and price movements you get by shifting supply and demand curves around are small relative to the total quantity of goods represented in a partial equilibrium diagram.  So where do macroeconomic fluctuations come from?
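
To put one illustrative number on that “small relative to the total” claim — hypothetical figures and standard Harberger-triangle arithmetic, not anything from Solow or Greenspan:

    % Deadweight loss from a wedge that raises price by 10% and cuts quantity by 5% (assumed numbers):
    \text{DWL} \approx \tfrac{1}{2}\,\Delta p\,\Delta q
    \qquad\Longrightarrow\qquad
    \frac{\text{DWL}}{p\,q} \approx \tfrac{1}{2}\cdot\frac{\Delta p}{p}\cdot\frac{\Delta q}{q}
    = \tfrac{1}{2}(0.10)(0.05) = 0.0025

That is about a quarter of one percent of the spending in that market — a sliver next to the rectangle of goods actually traded.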

Some economists have begun to use the term Animal Spirits again, but per usual, this bald-faced cultural argument lacks any serious or systematic study of . . . culture.  How do you get low aggregate demand in 2013?  Scare the shit out of 317 million people with a financial panic and political scandal, undermine their trust in one another, and then keep twisting the knife on the national news with a pageantry of political punditry for the following five years.

How do you get a post-WWII boom?  Conquer a symbolic specter of evil, construct a story about how it constitutes a final victory of freedom and the promise of modernity, and get communities rallied around shared themes of opportunity, freedom and prosperity.

How do you get financial markets to respond to monetary policy despite the fact that the Fed possesses a tiny fraction of loanable funds in the world?  Construct an elaborate theory of the supply of money and institutionalize tens of thousands of people with a belief in it, putting a group of magisters at center stage who pull levers.  It’s a surefire way to get people to react dramatically to their own forecasts.

Now these colleagues of mine are some of the smartest women and men I know, most of them much smarter than me.  I got a B in Real Analysis.  They’ve accomplished important things: the conquering of hyperinflations was one of the greatest victories of social science in the 20th century.  The collection of aggregate economic statistics is invaluable — without it we’d have never found out that growth, amazingly, doesn’t really come from simple saving and investment.  Talk about a whopper.

But the idea, at bottom, that we can control the business cycle and edge unemployment up and down with statistical regressions and a big enough checkbook is a complete fantasy.  It’s also fashionable.  Manly men of intelligence and conscience read The Economist and get graduate degrees and have prestigious and esteemed debates (including regular name-calling and mudslinging) about the subtle particularities of steering the actions of 317 million people with monetary and fiscal nudges.

And for the most part, everyone believes in them.  People are mysteriously convinced that someone must at the very least attempt to control the freight-train force of hundreds of millions of people inventing things, selling to one another, and transacting.  It is a magnificent drama to watch — Mom and Pop and Suzie and her girlfriend tune in to economic news that they barely understand in anything but its simplest moral narrative terms.  Pundits and journalists and politicians pretend to understand macroeconomic theory above the level of a freshman economics course.  And graduate students all over the world and their mentors busily keep up the illusion that they’re not precisely communicating moralistic narratives to the public — all the while doing an elaborate two-step around the reality that they are, after all, telling stories about good guys and bad guys and full-blooded victims and the dealing of punishments and rewards.

Are we all brainwashed?  No.  Is economics not a science?  No.  Neither economics nor any other human science can get around telling human stories with deliberate political implications.  And neither politicians nor the man in the street can get around using precisely these stories in order to form their beliefs about the world and make decisions.  What we believe is real is real.  If the prospect of our culture ultimately being a relativistic fairy tale disturbs you, it should.  But the next step is to realize that that’s what we’ve got, and that only by understanding its mechanics, up to and especially including the way it impacts economic behavior, can we tell better stories and help one another create a more prosperous world.

Update: Paul Krugman himself has just highlighted that employers don’t cut workers’ wages during recessions (which would bring marginal workers whose skills are worth less into the job market), because employers understand that organizational loyalty and ethical reciprocity are economically efficient for production.  This is the kind of progress we need in macroeconomics — an understanding of how moral narratives shape actors’ behaviors.

*Everything about class exploitation, that is, which drives the majority of his corpus.  There are good sociological and political reasons to believe the basic economic story doesn’t always hold.  We won’t explore them here.  

Is Failure an Option? (III)

By Adam Elkus

In the previous installment, I wrote about the distinctions between “failure is not an option” and “skin in the game.” Now, I will conclude by talking about the link between the two. I began in the first installment by talking about Obamacare and Clay Shirky’s feeling of shock that anyone would want to design a major sociotechnical system with the idea that the “failure is not an option” algorithm is desirable.

I have tried to argue that “failure is not an option” is a “simple” algorithm that is designed to ensure that a risky and complex venture can be carried through to completion. It does not guarantee that the venture will be successful on its own merits. In fact, it does not even address this question in the slightest. What it does do, however, is ensure that the venture can be carried through. By limiting the ability of the design to evolve in time, it ensures that purity of vision is maintained. By implementing the design with maximum force and/or velocity, it ensures that all of the necessary resources are devoted to the task. And by guaranteeing automatic consequences for failure (though, as the previous post explained, the distribution of consequence is variable), it creates a “Rubicon” effect that should motivate the organization implementing it to give full effort and not look back.
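
Read as pseudocode, the three instructions might look like this — an interpretive sketch of the algorithm as I have described it, not anyone’s actual specification:

    # An interpretive sketch of the three instructions described above
    # (my rendering, not Shirky's or anyone's real spec).
    from dataclasses import dataclass

    @dataclass
    class Venture:
        design: str
        resources_committed: float = 0.0
        design_frozen: bool = False
        consequences_automatic: bool = False

    def failure_is_not_an_option(venture: Venture, available_resources: float) -> Venture:
        venture.design_frozen = True                         # 1. the design may not evolve: purity of vision
        venture.resources_committed = available_resources    # 2. maximum force/velocity: everything goes in
        venture.consequences_automatic = True                 # 3. the Rubicon: failure triggers punishment, so no looking back
        return venture                                        # the venture gets carried through; nothing says it succeeds

    launched = failure_is_not_an_option(Venture(design="original blueprint"), available_resources=1.0)
    print(launched)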

Distribution of consequence, however, is a subject that people often consider independently of the main algorithm. It is perhaps understandable that many would explain military-strategic failure by arguing that societal elites do not suffer consequences for failure, while a common soldier is punished for the most minute of mistakes. As a result, the following occurs:

  • Objectives are ill-defined and vacuous
  • Wars of choice are more common
  • Indecisive wars are more common

Hence when the elites are properly incentivized in the same way that the soldiers are, wars should be less common and more necessary, the objectives should be clearer, and the wars themselves should be fought with more decisiveness and vigor. On the surface, there is little objectionable about this. It is a creed that the martial conservative, the centrist, and the dovish center-leftist can all get behind. But there is actually a problem lurking behind this application of the “skin in the game” concept.

American analysis of strategy suffers from several flaws. One is that it is difficult to see how any American authors on strategy have rigorously defined what a “well-defined” objective looks like. It is even less clear how a war of choice can be rigorously separated from a war of necessity without bringing in subjective judgment. The only term we have some confidence about is decisiveness — when Americans think about decisive warfare, they usually identify it with Jomini’s explicit instruction to concentrate one’s forces at the decisive point, strike hard, and relentlessly scourge the enemy until he cries uncle. Decisiveness is about speed, fires on target, and destruction.

I will make “rigid” a latent variable for “well-defined.” A well-defined objective — good or bad — will be clear enough that everyone will understand it and not seek to alter it.  “Carthage must be destroyed” is an example that I often use because it is as well-defined as possible. If Carthage has not been destroyed, the objective is not complete. If some Roman politician were to stand on a ship with a “mission accomplished” banner while Carthage still stood, its citizens had not been sold into Roman bondage, and its fields remained unsalted, he would be mocked in the same way Bush was after the Iraqi insurgency began. And I will also make “high-stakes” a latent variable for “necessary.” Surely if a war is deemed to be “necessary” by the body politic it must have very high stakes for the foundational pillars of that state — either ideologically (a threat to the nation’s conception of itself) or quite literally (an invading army at the doorstep). So why would a war not be well-defined, necessary, or decisive?

First, Clausewitz tells us that “policy” is the coagulation of a political process. Political preferences on all levels differ, and are aggregated in an imperfect fashion. Additionally, politicians often prefer flexibility in all matters and would often rather focus on domestic policy than on warfare. Wars are costly and risky, and when possible politicians prefer some way of splitting the difference — like Obama’s idea of sending aid to rebels but not bombing Syria. So when strategic objectives are well-defined, they tend to be somehow hardened against political interference and rigid. Keep in mind as well that the rigidity of a strategic end does not imply that correct ways of fulfilling it can be found that harmonize with the means available. Take Prohibition, for example. The end was extremely clear — eradicate alcohol consumption as a significant American habit. And it was also so rigid that it resisted repeal long after most realized that enforcing it posed significant challenges.

Conceiving of an issue as high-stakes tends to produce rigid (aka “well-defined”) objectives. In Vietnam, American elites were convinced that supporting the tinpot dictator Diem’s South Vietnam was necessary to prevent the “dominoes” across the region from falling. The entire Paul Nitze-influenced vision of the Cold War was a mental Rube Goldberg contraption that took the fortunes of peripheral states in the Global South as input and produced strategic consequences for the homeland as output. Hence to many elites Vietnam was certainly a war of necessity, well worth committing American draftees to. And they would not yield from this course of action for fear not only of the Communists, but also of the domestic political consequences of backing down.

Decisiveness is trickier. Whether something is executed speedily and with sufficient force depends a great deal on the constraints in play. Fear of Chinese intervention ruled out the obvious remedy to the Vietnam problem — destroying the military power of the North and calling it a day. So the speedy solution was out of the question. But America devoted substantial resources. I have relatives who visited Vietnam after the war and saw the gigantic craters left by the bombing. Only by ignoring the physical and human toll the US inflicted on Vietnam and its neighbors can we describe Vietnam as lacking the application of decisive force. In other areas, the US applied both speed and military decisiveness. The destruction of the Iraqi army in 2003 was both quick and rooted in the idea of precise yet overwhelming force (“shock and awe”).

There will always be some kind of inherent constraint on the use of force and the speed with which it is applied. Schlieffen’s plan was constrained in both speed and intensity by the logistics of the early 20th century, European politics, and the laws of physics. But one way that the politician can be marginally more certain that the design will be executed with martial vigor and urgency is if the requirements are rigid and the task is considered to be of high stakes. As noted before, however, speeding up the application of force and throwing more resources into play is often a very risky endeavor. If you come at the king and you fail, what do you do?

What I am trying to suggest is that when “skin in the game” conditions exist, the decisionmaker is incentivized to employ the very forcing algorithm that Shirky views as so perilous and obviously counterproductive. In Shirky’s ideal world — and by proxy, the ideal world of strategic theory — the strategist is flexible, creative, and experimental. He or she does not treat the task in such a rigid, risky, and self-defeating manner, and accounts for all of the entropic difficulties that come with the design and execution of strategy. They are experimental, reflexive, and willing to abide by Moltke’s maxim that no plan survives first contact with the enemy.

Anton Strezhnev, in an eloquent critique of “skin in the game,” explains why “skin in the game” itself is unlikely to produce the best behavior:

Sandis and Taleb’s argument is uncompromising, which perhaps makes it more appealing as an ethical claim than as a practical one. By arguing that agents are only justified in acting on behalf of principals when they have “skin-in-the-game,” they have assumed away the entire principal-agent problem. If the agent has the exact same preferences as the principal (i.e. they are exposed to the same risks), then there is no problem. The agent will always behave in the manner that the principal prescribes. …

In the real world, agents rarely share the same preferences as their principals and principals are almost never in perfect control of their agents. Power is shared and relationships are tense. Yet delegation is a necessary aspect of nearly all human institutions. Moreover, there is rarely a single principal. Agents face conflicting pressures from a myriad of sources. Politicians do not respond to a unified “constituency” but to a diverse array of “constituents.” So when Sandis and Taleb argue that decision-makers need “skin-in-the-game,” they raise the question of “whose game are we talking about?” …

Principals get noisy signals of agent behavior. It is unclear whether an outcome is the result of poor decision-making or bad luck. This distinction may or may not matter, depending on the case. However, in many instances where it is difficult to observe the agent’s behavior, the optimal solution to the principal-agent problem still leaves the agent somewhat insulated from the costs of their actions.

This is the Achilles heel of “skin in the game” — particularly when, as I have noted above, the rigidity of a design does not tell you how it should be implemented in a fluid situation.  I will use the fictional case of Starcraft: Brood War‘s United Earth Directorate expedition as an example of how this can play out even when risk is shared to a degree unlikely in the “real” world outside the ideal circumstance. Admiral DuGalle and his subordinate Vice Admiral Stukov are in charge of a UED fleet that has traveled far from its logistical base into the war-torn Korprulu sector. Admiral DuGalle, the commander of the fleet, has a very clear and rigid objective: pacify the sector in which the game universe takes place. In order to do so, he and VADM Stukov must decide what to do about the Psi Emitter, a powerful device capable of controlling the hive-minded Zerg aliens.

Stukov discovers that the UED’s native Korprulu informant, Samir Duran, is actually the Ahmed Chalabi of the Starcraft universe. Although the UED depends on him for intelligence regarding the politics and strategy of the operational environment, he has his own agenda. Stukov decides that the Psi Emitter must be utilized. DuGalle, convinced by Duran, thinks the Emitter should be destroyed.  The stakes are extremely high, as the UED has a fixed number of forces and is far from home. Either they succeed or they are, like the Athenians at Syracuse, stranded in a hostile land. So Stukov, fearing that Duran’s influence has blinded DuGalle to the potentially dire ramifications of destroying the Psi Emitter, decides to activate it on his own.

DuGalle receives a “noisy signal” of Stukov’s failure to implement his orders. In the game, DuGalle and Stukov are presented as lifelong friends and companions, and it is clear that DuGalle is puzzled by Stukov’s sudden insubordination. One would imagine that the most optimal way for DuGalle to resolve the issue would be to first take all necessary means to stop Stukov from doing what he wanted with the Psi Emitter, and then ascertain whether his dear friend may have been correct in seeking to utilize it. Punishment would be decided by the actual information gathered about why Stukov disobeyed orders. This “optimal solution,” as Strezhnev argues, would still leave Stukov somewhat insulated from the costs of his behavior, given that he committed a drastic act of insubordination that could potentially threaten the entire success of the expedition. This act of insubordination, in the abstract, would justify a drumhead court-martial and execution (or at the very minimum the harshest non-lethal punishment available, should Stukov have done it out of malice).

Instead, DuGalle regards Stukov as a traitor and orders his execution. An attack force assaults Stukov’s men at the Psi Emitter facility and kills him.  It is only after the grim task is completed that DuGalle realizes that he was wrong, and that Stukov in fact had been correct. At the end of the UED campaign in Brood War, as DuGalle prepares for his own suicide (to pre-empt being killed by the victorious Zerg), he bitterly writes that his biggest regret was that his “pride” killed Stukov. However, DuGalle is being far too harsh on himself. What actually killed Stukov was the rigid and automatic application of punishment. The goal was clear, the force available was great, and the speed of the UED maneuvers was as rapid as anyone could expect in a strategy video game.

But even a clear policy and a clear strategy will run into difficulties in implementation, because strategy is (as Moltke noted) a game of expedients. It evolves in time. And as Strezhnev noted, the “skin in the game” concept assumes the principal can assess whether the agent executes the task and assumes away the noise and uncertainty inherent in all such relationships in the real world. We can only guess what kind of Professional Military Education (PME) is taught within UED war colleges. But had, perhaps, someone taught DuGalle about principal-agent theory, the good admiral might not have lost his best friend.
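
A tiny simulation makes Strezhnev’s noisy-signal point concrete — the probabilities below are assumptions of mine, not numbers from his post or from the game:

    # Hypothetical illustration of punishing on noisy outcomes rather than on decisions.
    import random

    random.seed(0)
    GOOD_DECISION_FAILS = 0.3   # assumed: even sound calls fail 30% of the time (bad luck)
    BAD_DECISION_FAILS = 0.8    # assumed: poor calls fail 80% of the time

    trials = 10_000
    punished_good_calls = 0
    for _ in range(trials):
        agent_chose_well = random.random() < 0.5              # the agent makes a sound call half the time
        p_fail = GOOD_DECISION_FAILS if agent_chose_well else BAD_DECISION_FAILS
        outcome_failed = random.random() < p_fail
        if outcome_failed and agent_chose_well:                # automatic punishment falls on a sound decision
            punished_good_calls += 1

    print(punished_good_calls / trials)   # roughly 0.15: about 15% of all trials punish a sound call

Automatic, outcome-based punishment executes a lot of Stukovs who were right but unlucky.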

In the grand scheme of things, it’s hard to measure the impact of Stukov’s death on the ultimate strategic outcome. The UED failure was not deterministic. But certainly Clay Shirky would not have approved of a strategic plan involving the infusion of a non-renewable military force into a complex interplanetary system being contested by the Zerg, the Terran Dominion, the Protoss, and Raynor’s Raiders. Clay Shirky certainly also would not have approved of the rigid design and its inability to be qualitatively altered without the drastic step of accidentally killing a high-ranking official with different ideas. Failure was not an option for DuGalle and his forces, and they were massacred to a man by the vengeful Zerg leader Infested Kerrigan (Queen of Blades) while they were desperately seeking to flee the battlefield.

This latter outcome implies something else about “skin in the game” that is very disconcerting. Punishment for failure here is equal for every UED soldier. They are all killed. But since they are all dead and floating in space somewhere, they cannot learn from experience. If they could respawn at the beginning of the campaign, knowing what they did about the consequences of their choices, they could perhaps learn an optimal strategy over repeated tries if the campaign’s parameters were held constant. But they can’t, since they are dead. And because they are dead and the specific conditions of the campaign are now dated, a different set of UED policymakers and soldiers will be tapped to undertake a different campaign should the UED choose to re-attempt the conquest.

And within this new campaign, the sample space of choices and outcomes will be wholly different. Brood War 2 will not be like Brood War 1.  So “skin in the game” can only be expected to reasonably optimize behavior if we treat the world like “Groundhog Day” — it is held constant as you try and try, and then it changes when you succeed. And when you fail, you respawn at time 0, at the beginning of the world sequence, with the world parameters reset to their starting point. Good for Bill Murray, not for poor Admiral DuGalle.
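
A toy sketch of the Groundhog Day point — the choices and payoff probabilities below are invented, not from the game — showing that trial-and-error only converges on a best option when the world’s parameters stay fixed between attempts:

    # Toy illustration: learning by repeated "respawns" only works if the campaign's
    # parameters are held constant across attempts (all payoffs are assumptions).
    import random

    random.seed(1)
    CHOICES = ["attack", "fortify", "retreat"]

    def best_choice_after(n, redraw_world_each_time):
        fixed_world = {"attack": 0.2, "fortify": 0.7, "retreat": 0.4}    # Groundhog Day: fixed odds
        wins = {c: 0.0 for c in CHOICES}
        tries = {c: 1e-9 for c in CHOICES}
        for _ in range(n):
            world = ({c: random.random() for c in CHOICES}               # Brood War 2: odds re-drawn
                     if redraw_world_each_time else fixed_world)
            choice = random.choice(CHOICES)                               # naive exploration
            tries[choice] += 1
            wins[choice] += random.random() < world[choice]
        return max(CHOICES, key=lambda c: wins[c] / tries[c])

    print(best_choice_after(5000, redraw_world_each_time=False))  # reliably "fortify"
    print(best_choice_after(5000, redraw_world_each_time=True))   # essentially arbitrary among the three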

In sum, we know two things for sure. First, a forcing algorithm merely guarantees that a venture can be launched and carried through. DuGalle, in the UED opening, asks if Stukov is prepared to go “all the way,” and the “failure is not an option” algorithm ensures that the answer to the question is affirmative. Second, regardless of whether Stukov’s death doomed the UED, it is hard to see how accidentally killing a senior leader who had good ideas and rapport with the commander of the expedition somehow optimized the UED’s prosecution of its campaign. Either way, it is also hard to see how the larger strategic failure might optimize UED behavior in the future. Perhaps the next expedition will have a better success rate, but we cannot plausibly claim anything more than a weak causal link between the failure of the first expedition and the possible success of the second.

The complexity of the “is failure an option” series of posts goes to show several unfortunate things about strategy. First, there is often too much confusion of the prescriptive with the descriptive in strategic discourse. In an ideal world, strategy would be executed in the way Shirky recommends. But our world is never ideal. Second, appealing and normatively based folk theories about responsibility and optimization of behavior can have catastrophic consequences when actually put into place. We have to be rigorous about the microfoundations behind them.

When studying strategy, we must keep in mind the constraints on strategic choice, as well as the realistic microfoundations that would inform the interactions and incentives underneath gauzy rhetoric. Otherwise we may be conquered by the metaphorical Kerrigans that always threaten to thwart our hopes, desires, and plans.

Is Failure An Option? (II)

By Adam Elkus

In Part I last month, I discussed the origins of the “failure is not an option” mode of strategic theory. Key to my conception was the idea of “failure is not an option” as a specific algorithm for ramming through a risky, controversial idea under highly complex, difficult, and uncertain circumstances.

If the possibility exists that anything less than a rigid formulation could lead to failure, the algorithm accepts the danger of a rigid design because its very rigidity ensures survival in a hostile or otherwise difficult environment. The logic of political survival famously holds that policy stability is strongest in a tighter coalition. The concept of “strategic essentialism” holds that subaltern groups should minimize individual differences at strategic points to present a united front to outsiders, despite the fact that internal debate might serve an optimizing function over the long run.

The idea of implementing a massive and complex venture rapidly and decisively (with little room for error) is essentially just a rephrasing of the familiar pre-World War I fear of losing a mobilization race. Under some circumstances, a nuclear balance could also degenerate into a “use them or lose them” dilemma in which a state risks the annihilation of its entire strategic forces and decision nodes in one murderous enemy salvo. There also seems to be — from Niccolo Machiavelli to Nathan Bedford Forrest — a general competitive heuristic that if you are to crush your enemies, you must strike as powerfully as you can and as quickly as you can. The heuristic is even repeated in the animal kingdom: queen bees famously kill their rivals upon emergence. But as the Germans discovered after the Schlieffen Plan, and as The Wire‘s Omar taunted, rapid execution and massive risk only pay off when they pay off. Fail and you run the risk of embroiling yourself in a quagmire that might have been avoided with more gradual and less rigidly planned execution.

The last aspect of the “failure is not an option” algorithm, “guarantee automatic consequences for failure,” is perhaps the most interesting and complex. Whereas “failure is not an option” is designed to optimize a wide variety of potential instances of the same general problem, the idea of automatic punishment is more ambiguous. Generally, the idea of automatic and unavoidable consequences for failure is intended to incentivize a “sink or swim” mentality due to the inevitably harsh punishment upon failure. But the distribution of the consequences for failure is not inherently specified by the “guarantee automatic consequences” instruction.

In a pure instance of the “guarantee automatic consequences for failure” instruction, defection from the plan is literally impossible. Cortes the conquistador scuttling his ships is the quintessential example. Either everyone succeeds together or they all die together. However, it is difficult to engineer such a circumstance, because one must close off any real possibility of escape. That said, a leader can also engineer this by forcing his subordinates to collectively cross a metaphorical Rubicon comprised of political, ethical, or sectarian norms of appropriateness. The classic heist-movie cliche of the bank robbers being forced to kill the security guard or innocent witness is a cinematic example of this. All of your hands are dirty, therefore the group must succeed together or fail together.

The problem, however, is that the actual distribution of consequence in a high-risk endeavor is extremely variable.  Consider a hypothetical (amalgamated) dictatorship at war that uses the threat of summary execution to optimize military performance. There are three possible implementations of the “failure is not an option” algorithm’s final component, each corresponding to a different distribution of lethal consequence.

The first implementation attaches a commissar unit to the back of each tactical formation. When it is time for the general offensive to commence, the tactical commander cries “death or glory, boys” and signals for the junior officers and NCOs to lead their men over the top. Anyone who falters is shot in the back by a special team of politically reliable riflemen and machine gun crews. The second implementation punishes only senior leaders. A general who fails to defend a critical city named after the Grand Sultan is visited by political officers that take him outside his improvised winter HQ to be shot in the head. A premier who oversees a losing war commits seppuku in his office with one hand while saluting his statue of V.I. Lenin with the other.

The third implementation is known as the “skin in the game” variant of the “failure is not an option” algorithm. Here, automatic punishment is equitably distributed. The war has been lost, and the dictatorship is forced to submit to what it considers to be humiliating peace terms. The political elites determine that no one party bears responsibility for the failure – a collective societal sickness has made the dictatorship weak and vulnerable. In order to better incentivize the decadent society to fight stronger when the dictatorship inevitably re-arms, it draws up a list of those to be executed that includes representative samples of every rank responsible. Corporals, junior officers, generals, cabinet ministers, and the Supreme Leader himself are all sent to the guillotine while cheering mobs chant “liberty, fraternity, equality!”

When considering American public policy, many analysts seem to believe that “skin in the game” is the best way to ensure optimal public policy outcomes. I will use the “skin in the game–conscription” variant to illustrate a sample argument:

“Skin in the game — conscription” relies on the following assumptions:

  1. Imbalance in the distribution of potential consequences for failure is a major societal problem.
  2. Politicians feel free to wage indecisive, quagmire-like wars of convenience with ill-defined objectives.
  3. The burden on a few soldiers instead of the many is morally unfair and threatens collective cohesion in the larger society.
  4. Distributing potential consequence will deter politicians from waging unnecessary wars, rectify a moral error, and restrict wars fought to those of necessity and those with well-formulated political objectives.

However, as I will explain in Part III, the problem with these assumptions is that they all seem to raise the larger societal stakes. And that paradoxically seems to lead back to the conditions under which “failure is not an option” becomes an ideal forcing mechanism — which seems to create the very lopsided disasters that “skin in the game” is at least partially designed to prevent…