The ASA’s Boycott Lacks Seriousness

By Kindred Winecoff

The Executive Committee of the Association of American Universities has issued a statement condemning the American Studies Association’s boycott of all Israeli academic institutions. The AAU’s decision makes sense, and I support it. Claiming that all academics at all Israeli institutions bear responsibility for all actions taken by the government of Israel — whatever you think of those actions — is absurd. Playing fast and loose with academic freedom is more than regrettable in an environment where such liberties are under increasing threat at the margin.

I find it bemusing that someone like Corey Robin would disagree, given his own institution’s recent employment of General Petraeus. Robin protested that decision vehemently, but given that his side was unable to prevent Petraeus from teaching at CUNY, I doubt he would appreciate being banned from conferences, publications, and other academic venues because his institution hired the leader of a war many believe to have been unjust and illegal. The American Association of University Professors (sensibly) opposes blanket boycotts as a matter of principle for just this kind of reason. In this case the Palestinian government agrees. Solidarity should not just be in the mind, and one can support Palestinian self-determination (and oppose the expansion of settlements in the West Bank) without playing games of guilt by association.

Tyler Cowen argues the positive case — would the world be better if the boycotters’ demands were met? — but I think that’s the wrong way of looking at it. This is pure mood affiliation via cheap talk. If it would actually have any real-world impact, I doubt most of these folks would support such a boycott, for precisely the reasons Cowen gives. And if they did, we would easily be able to identify their moral and scientific unseriousness.

 

UPDATE: I took a closer look at the text of the ASA’s website and one of the things I wrote above is misleading if not outright wrong. Specifically, individual Israeli academics are not being boycotted; only institutions. In practice this might be a distinction without a difference… but maybe not. In any case, here is the full statement from the ASA. The relevant part:

Our resolution understands boycott as limited to a refusal on the part of the Association in its official capacities to enter into formal collaborations with Israeli academic institutions, or with scholars who are expressly serving as representatives or ambassadors of those institutions, or on behalf of the Israeli government, until Israel ceases to violate human rights and international law.

The resolution does not apply to individual Israeli scholars engaged in ordinary forms of academic exchange, including conference presentations, public lectures at campuses, or collaboration on research and publication. The Council also recognizes that individual members will act according to their convictions on these complex matters.

Is Failure an Option? (III)

By Adam Elkus

In the previous installment, I wrote about the distinctions between “failure is not an option” and “skin in the game.” Now, I will conclude by talking about the link between the two. I began in the first installment by talking about Obamacare and Clay Shirky’s shock that anyone would want to design a major sociotechnical system on the premise that the “failure is not an option” algorithm is desirable.

I have tried to argue that “failure is not an option” is a “simple” algorithm that is designed to ensure that a risky and complex venture can be carried through to completion. It does not guarantee that the venture will be successful on its own merits. In fact, it does not even address this question in the slightest. What it does do, however, is ensure that the venture can be carried through. By limiting the ability of the design to evolve in time, it ensures that purity of vision is maintained. By implementing the design with maximum force and/or velocity, it ensures that all of the necessary resources are devoted to the task. And by guaranteeing automatic consequences for failure (though, as the previous post explained, the distribution of consequence is variable), it creates a “Rubicon” effect that should motivate the organization implementing it to give full effort and not look back.

Distribution of consequence, however, is a subject that people often consider independently of the main algorithm. It is perhaps understandable that many would explain military-strategic failure by arguing that societal elites do not suffer consequences for failure while the common soldier is punished for the most minute of mistakes. As a result, the following occurs:

  • Objectives are ill-defined and vacuous
  • Wars of choice are more common
  • Indecisive wars are more common

Hence when the elites are properly incentivized in the same way that the soldiers are, wars should be less common and more necessary, objectives should be clearer, and the wars themselves should be fought with more decisiveness and vigor. On the surface, there is little objectionable about this. It is a creed that the martial conservative, the centrist, and the dovish center-leftist can all get behind. But there is actually a problem lurking behind this application of the “skin in the game” concept.

American analysis of strategy suffers from several flaws. One is that it is difficult to see whether any American author on strategy has rigorously defined what a “well-defined” objective looks like. It is even less clear how a war of choice can be rigorously separated from a war of necessity without bringing in subjective judgment. The only term we can speak about with some confidence is decisiveness – when Americans think about decisive warfare, they usually identify it with Jomini’s explicit instruction to concentrate one’s forces at the decisive point, strike hard, and relentlessly scourge the enemy until he cries uncle. Decisiveness is about speed, fires on target, and destruction.

I will make “rigid” a latent variable for “well-defined.” A well-defined objective — good or bad — will be clear enough that everyone will understand it and not seek to alter it. “Carthage must be destroyed” is an example that I often use because it is as well-defined as possible. If Carthage has not been destroyed, the objective is not complete. If some Roman politician were to stand on a ship before a “mission accomplished” banner while Carthage still stood, its citizens had not been sold into Roman bondage, and its fields remained unsalted, he would be mocked in the same way Bush was after the Iraqi insurgency began. And I will also make “high-stakes” a latent variable for “necessary.” Surely if a war is deemed to be “necessary” by the body politic, it must have very high stakes for the foundational pillars of that state — either ideologically (a threat to the nation’s conception of itself) or quite literally (an invading army at the doorstep). So why would a war not be well-defined, necessary, or decisive?

First, Clausewitz tells us that “policy” is the coagulation of a political process. Political preferences on all levels differ and are aggregated in an imperfect fashion. Additionally, politicians often prefer flexibility in all matters and would often rather focus on domestic policy than on warfare. Wars are costly and risky, and when possible politicians will seek some way of splitting the difference — like Obama’s idea of sending aid to rebels but not bombing Syria. So when strategic objectives are well-defined, they tend to be hardened against political interference, which is to say rigid. Keep in mind as well that the rigidity of a strategic end does not imply that correct ways of fulfilling it can be found that harmonize with the means available. Take Prohibition, for example. The end was extremely clear — eradicate alcohol consumption as a significant American habit. And it was also so rigid that it resisted repeal long after most realized that enforcing it posed significant challenges.

Conceiving of an issue as high-stakes tends to produce rigid (aka “well-defined”) objectives. In Vietnam, American elites were convinced that supporting the tinpot dictator Diem’s South Vietnam was necessary to prevent the “dominoes” across the region from falling. The entire Paul Nitze-influenced vision of the Cold War was a mental Rube Goldberg contraption that took the fortunes of peripheral states in the Global South as input and produced strategic consequences for the homeland as output. Hence to many elites Vietnam was certainly a war of necessity, well worth committing American draftees to. And they would not yield from this course of action for fear not only of the Communists, but also of the domestic political consequences of backing down.

Decisiveness is trickier. Whether something is executed speedily and with sufficient force depends a great deal on the constraints in play. Fear of Chinese intervention constrained the obvious remedy to the Vietnam problem — destroying the military power of the North and calling it a day. So the speedy solution was out of the question. But America devoted substantial resources. I have relatives who visited Vietnam after the war and saw the gigantic craters left by the bombing. Only by ignoring the physical and human toll the US inflicted on Vietnam and its neighbors can we describe Vietnam as lacking the application of decisive force. In other areas, the US applied both speed and military decisiveness. The destruction of the Iraqi army in 2003 was both quick and rooted in the idea of precise yet overwhelming force (“shock and awe”).

There will always be some kind of inherent constraint on the use of force and the speed with which it is applied. Schlieffen’s plan was constrained in both speed and intensity by the logistics of the early 20th century, European politics, and the laws of physics. But one way that the politician can be marginally more certain that the design will be executed with martial vigor and urgency is if the requirements are rigid and the task is considered to be of high stakes. As noted before, however, speeding up the application of force and throwing more resources into play is often a very risky endeavor. If you come at the king and you fail, what do you do?

What I am trying to suggest is that when “skin in the game” conditions exist, the decisionmaker is incentivized to employ the very forcing algorithm that Shirky views as so perilous and obviously counterproductive. In Shirky’s ideal world — and by proxy, the ideal world of strategic theory — the strategist is flexible, creative, and experimental. Strategists do not treat the task in such a rigid, risky, and self-defeating manner, and they account for all of the entropic difficulties that come with the design and execution of strategy. They are experimental, reflexive, and willing to abide by Moltke’s maxim that no plan survives first contact with the enemy.

Anton Strezhnev, in an eloquent critique of “skin in the game,” explains why “skin in the game” itself is unlikely to produce optimal behavior:

Sandis and Taleb’s argument is uncompromising, which perhaps makes it more appealing as an ethical claim than as a practical one. By arguing that agents are only justified in acting on behalf of principles when they have “skin-in-the-game,” they have assumed away the entire principal-agent problem. If the agent has the exact same preferences as the principal (i.e. they are exposed to the same risks), then there is no problem. The agent will always behave in the manner that the principal prescribes. …

In the real world, agents rarely share the same preferences as their principals and principals are almost never in perfect control of their agents. Power is shared and relationships are tense. Yet delegation is a necessary aspect of nearly all human institutions. Moreover, there is rarely a single principal. Agents face conflicting pressures from a myriad of sources. Politicians do not respond to a unified “constituency” but to a diverse array of “constituents.” So when Sandis and Taleb argue that decision-makers need “skin-in-the-game,” they raise the question of “whose game are we talking about?” …

Principals get noisy signals of agent behavior. It is unclear whether an outcome is the result of poor decision-making or bad luck. This distinction may or may not matter, depending on the case. However, in many instances where it is difficult to observe the agent’s behavior, the optimal solution to the principal-agent problem still leaves the agent somewhat insulated from the costs of their actions.

This is the Achilles heel of “skin in the game” — particularly when, as I have noted above, the rigidity of a design does not determine how it should be implemented in a fluid situation. I will use the fictional example of Starcraft: Brood War‘s United Earth Directorate expedition to show how this can play out even when risk is shared to a degree unlikely in the “real” world outside of ideal circumstances. Admiral DuGalle and his subordinate, Vice Admiral Stukov, are in charge of a UED fleet that has traveled far from its logistical base into the war-torn Koprulu sector. DuGalle, the commander of the fleet, has a very clear and rigid objective: pacify the sector in which the game universe takes place. In order to do so, he and VADM Stukov must decide what to do about the Psi Emitter, a powerful device capable of controlling the hive-minded Zerg aliens.

Stukov discovers that the UED’s native Koprulu informant, Samir Duran, is actually the Ahmed Chalabi of the Starcraft universe. Although the UED depends on him for intelligence regarding the politics and strategy of the operational environment, he has his own agenda. Stukov decides that the Psi Emitter must be utilized. DuGalle, convinced by Duran, believes the Emitter should be destroyed. The stakes are extremely high, as the UED has a fixed number of forces and is far from home. Either they succeed or they are left, like the Athenians at Syracuse, stranded in a hostile land. So Stukov, fearing that Duran’s influence has blinded DuGalle to the potentially dire ramifications of destroying the Psi Emitter, decides to activate it on his own.

DuGalle receives a “noisy signal” of Stukov’s failure to implement his orders. In the game, DuGalle and Stukov are presented as lifelong friends and companions, and it is clear that DuGalle is puzzled by Stukov’s sudden insubordination. One would imagine that the optimal way for DuGalle to resolve the issue would be to first take all necessary measures to stop Stukov from doing what he wanted with the Psi Emitter, and then ascertain whether his dear friend may have been correct in seeking to utilize it. Punishment would be decided by the actual information gathered about why Stukov disobeyed orders. This “optimal solution,” as Strezhnev argues, would still leave Stukov somewhat insulated from the costs of his behavior, given that he committed a drastic act of insubordination that could potentially threaten the entire success of the expedition. That act of insubordination, in the abstract, would justify a drumhead court-martial and execution (or, at the very minimum, the harshest non-lethal punishment available, had Stukov done it out of malice).

Instead, DuGalle regards Stukov as a traitor and orders his execution. An attack force assaults Stukov’s men at the Psi Emitter facility and kills him. It is only after the grim task is completed that DuGalle realizes that he was wrong, and that Stukov in fact had been correct. At the end of the UED campaign in Brood War, as DuGalle prepares for his own suicide (to pre-empt being killed by the victorious Zerg), he bitterly writes that his biggest regret was that his “pride” killed Stukov. However, DuGalle is being far too harsh on himself. What actually killed Stukov was the rigid and automatic application of punishment. The goal was clear, the force available was great, and the speed of the UED maneuvers was as rapid as anyone could expect in a strategy video game.

But even a clear policy and a clear strategy will run into difficulties in implementation, because strategy is (as Moltke noted) a system of expedients. It evolves in time. And as Strezhnev noted, the “skin in the game” concept assumes the principal can assess whether the agent executed the task, and assumes away the noise and uncertainty inherent in all such relationships in the real world. We can only guess what kind of Professional Military Education (PME) is taught within UED war colleges. But had someone, perhaps, taught DuGalle about principal-agent theory, the good admiral might not have lost his best friend.
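To make Strezhnev’s point concrete, here is a minimal toy sketch in Python. It is my own illustration, not anything from the post or from Strezhnev, and every probability in it is invented: a principal punishes every observed failure automatically, but because the outcome is only a noisy signal of the underlying decision, a large share of the punishment lands on agents who chose well and simply got unlucky.

import random

random.seed(42)

def run_trials(n_trials=100_000, p_good_decision=0.7,
               p_success_given_good=0.7, p_success_given_bad=0.4):
    """Agents make a 'good' or 'bad' decision; outcomes are noisy.
    The principal punishes every failure automatically. Returns the share
    of punished agents who had in fact decided well. All numbers invented."""
    punished = 0
    punished_but_good = 0
    for _ in range(n_trials):
        decided_well = random.random() < p_good_decision
        p_success = p_success_given_good if decided_well else p_success_given_bad
        succeeded = random.random() < p_success
        if not succeeded:            # automatic consequence for failure
            punished += 1
            punished_but_good += decided_well
    return punished_but_good / punished

if __name__ == "__main__":
    share = run_trials()
    print(f"Punished agents who actually decided well: {share:.1%}")
    # With these invented numbers, roughly half of all punishment lands on
    # agents who chose correctly but got unlucky -- DuGalle's problem.

With those made-up parameters, about half of every punishment falls on an agent who decided correctly, which is exactly the trap DuGalle walks into.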

In the grand scheme of things, it’s hard to measure the impact of Stukov’s death on the ultimate strategic outcome. The UED failure was not deterministic. But certainly Clay Shirky would not have approved of a strategic plan involving the infusion of a non-renewable military force into a complex interplanetary system being contested by the Zerg, the Terran Dominion, the Protoss, and Raynor’s Raiders. Nor would Shirky have approved of the rigid design and its inability to be qualitatively altered without the drastic step of accidentally killing a high-ranking official with different ideas. Failure was not an option for DuGalle and his forces, and they were massacred to a man by the vengeful Zerg leader Infested Kerrigan (the Queen of Blades) while desperately seeking to flee the battlefield.

This latter outcome implies something else about “skin in the game” that is very disconcerting. Punishment for failure here is equal for every UED soldier. They are all killed. But since they are all dead and floating in space somewhere, they cannot learn from experience. If they could respawn at the beginning of the campaign, knowing what they now know about the consequences of their choices, they could perhaps learn an optimal strategy over repeated tries if the campaign’s parameters were held constant. But they can’t, since they are dead. And because they are dead and the specific conditions of the campaign are now dated, a different set of UED policymakers and soldiers will be tapped to undertake a different campaign should the UED choose to re-attempt the conquest.

And within this new campaign, the sample space of choices and outcomes will be wholly different. Brood War 2 will not be like Brood War 1. So “skin in the game” can only be expected to reasonably optimize behavior if we treat the world like “Groundhog Day”: it is held constant as you try and try, and then it changes when you succeed. And when you fail, you respawn at time 0, at the beginning of the world sequence, with the world parameters reset to their starting point. Good for Bill Murray, not for poor Admiral DuGalle.
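Here is a second toy sketch along the same lines, again my own and with invented parameters: a greedy trial-and-error learner playing repeated campaigns. When the world’s hidden parameters are held fixed between attempts (the Groundhog Day condition), experience accumulates and performance improves; when the parameters are redrawn on every attempt, as with Brood War 2 differing from Brood War 1, past failure teaches almost nothing.

import random

random.seed(0)

N_STRATEGIES = 5
ATTEMPTS = 50

def draw_world():
    """Hidden success probability for each available strategy."""
    return [random.random() for _ in range(N_STRATEGIES)]

def campaign(world_resets):
    """Greedy trial-and-error over repeated attempts.
    world_resets=False is the Groundhog Day condition: the same hidden
    probabilities persist, so experience accumulates. world_resets=True
    redraws the world every attempt (Brood War 2 != Brood War 1), so past
    failure teaches almost nothing."""
    probs = draw_world()
    wins = [1] * N_STRATEGIES    # optimistic priors
    tries = [2] * N_STRATEGIES
    successes = 0
    for _ in range(ATTEMPTS):
        if world_resets:
            probs = draw_world()
        choice = max(range(N_STRATEGIES), key=lambda s: wins[s] / tries[s])
        won = random.random() < probs[choice]
        successes += won
        wins[choice] += won
        tries[choice] += 1
    return successes / ATTEMPTS

if __name__ == "__main__":
    runs = 200
    fixed = sum(campaign(world_resets=False) for _ in range(runs)) / runs
    reset = sum(campaign(world_resets=True) for _ in range(runs)) / runs
    print(f"Average success rate, world held constant: {fixed:.2f}")
    print(f"Average success rate, world redrawn each attempt: {reset:.2f}")

In the held-constant case the learner does noticeably better than chance; in the redrawn case it hovers around the average of a random draw, which is the point about the UED’s non-repeatable campaign.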

In sum, we know two things for sure. First, a forcing algorithm merely guarantees that a venture can be launched and carried through. DuGalle, in the UED opening, asks if Stukov is prepared to go “all the way,” and the “failure is not an option” algorithm ensures that the answer to that question is affirmative. Second, regardless of whether Stukov’s death doomed the UED, it is hard to see how accidentally killing a senior leader who had good ideas and rapport with the commander of the expedition somehow optimized the UED’s prosecution of its campaign. Either way, it is also hard to see how the larger strategic failure might optimize UED behavior in the future. Perhaps the next expedition will have a better chance of success, but we cannot plausibly claim anything more than a weak causal link between the failure of the first expedition and the possible success of the second.

The complexity of the “is failure an option” series of posts goes to show several unfortunate things about strategy. First, there is often too much confusion of the prescriptive with the descriptive in strategic discourse. In an ideal world, strategy would be executed in the way Shirky recommends. But our world is never ideal. Second, appealing and normatively based folk theories about responsibility and optimization of behavior can have catastrophic consequences when actually put into place. We have to be rigorous about the microfoundations behind them.

When studying strategy, we must keep in mind the constraints on strategic choice, as well as the realistic microfoundations that would inform the interactions and incentives underneath gauzy rhetoric. Otherwise we may be conquered by the metaphorical Kerrigans that always threaten to thwart our hopes, desires, and plans.

Is Failure An Option? (II)

By Adam Elkus

In Part I last month, I discussed the origins of the “failure is not an option” mode of strategic theory. Key to my conception was the idea of “failure is not an option” as a specific algorithm for ramming through a risky, controversial idea under highly complex, difficult, and uncertain circumstances.

If the possibility exists that anything less than a rigid formulation could lead to failure, the algorithm accepts the danger of a rigid design because its very rigidity ensures survival in a hostile or otherwise difficult environment. The logic of political survival famously holds that policy stability is strongest in a tighter coalition. The concept of “strategic essentialism” holds that subaltern groups should minimize individual differences at strategic points to present a united front to outsiders, despite the fact that internal debate might serve an optimizing function over the long run.

The idea of implementing a massive and complex venture rapidly and decisively (with little room for error) is essentially just a rephrasing of the familiar pre-World War I fear of losing a mobilization race. Under some circumstances, a nuclear balance could also degenerate into a “use them or lose them” dilemma in which a state risks the annihilation of its entire strategic forces and decision nodes in one murderous enemy salvo. There also seems to be — from Niccolo Machiavelli to Nathan Bedford Forrest – a general competitive heuristic that if you are to crush your enemies, you must strike as powerfully as you can and as quickly as you can. The heuristic is even repeated in the animal kingdom: queen bees famously kill their rivals upon emergence. But as the Germans discovered after the Schlieffen Plan and The Wire‘s Omar taunted, rapid execution and massive risk only pay off when they pay off. Fail and you run the risk of embroiling yourself in a quagmire that might have been avoided with more gradual and less rigidly planned execution.

The last aspect of the “failure is not an option” algorithm, “guarantee automatic consequences for failure,” is perhaps the most interesting and complex. Whereas “failure is not an option” is designed to optimize a wide variety of potential instances of the same general problem, the idea of automatic punishment is more ambiguous. Generally, the idea of automatic and unavoidable consequences for failure is intended to incentivize a “sink or swim” mentality due to the inevitably harsh punishment upon failure. But the distribution of the consequences for failure is not inherently specified by the “guarantee automatic consequences” instruction.

In a pure instance of the “guarantee automatic consequences for failure” instruction, defection from the plan is literally impossible. Cortes the conquistador scuttling his ships is the quintessential example. Either everyone succeeds together or they all die together. However, it is difficult to engineer such a circumstance, because one must close off any real possibility of escape. That said, a leader can also engineer this by forcing his subordinates to collectively cross a metaphorical Rubicon composed of political, ethical, or sectarian norms of appropriateness. The classic heist-movie cliché of the bank robbers being forced to kill the security guard or innocent witness is a cinematic example of this. All of your hands are dirty, therefore the group must succeed together or fail together.

The problem, however, is that the actual distribution of consequence in a high-risk endeavor is extremely variable. Consider a hypothetical (amalgamated) dictatorship at war that uses the threat of summary execution to optimize military performance. There are three possible implementations of the “failure is not an option” algorithm’s final component, each corresponding to a different distribution of lethal consequence.

The first implementation attaches a commissar unit to the back of each tactical formation. When it is time for the general offensive to commence, the tactical commander cries “death or glory, boys” and signals for the junior officers and NCOs to lead their men over the top. Anyone who falters is shot in the back by a special team of politically reliable riflemen and machine gun crews. The second implementation punishes only senior leaders. A general who fails to defend a critical city named after the Grand Sultan is visited by political officers who take him outside his improvised winter HQ to be shot in the head. A premier who oversees a losing war commits seppuku in his office with one hand while saluting his statue of V.I. Lenin with the other.

The third implementation is known as the “skin in the game” variant of the “failure is not an option” algorithm. Here, automatic punishment is equitably distributed. The war has been lost, and the dictatorship is forced to submit to what it considers to be humiliating peace terms. The political elites determine that no one party bears responsibility for the failure – a collective societal sickness has made the dictatorship weak and vulnerable. In order to better incentivize the decadent society to fight stronger when the dictatorship inevitably re-arms, it draws up a list of those to be executed that includes representative samples of every rank responsible. Corporals, junior officers, generals, cabinet ministers, and the Supreme Leader himself are all sent to the guillotine while cheering mobs chant “liberty, fraternity, equality!”

When considering American public policy, many analysts seem to believe that “skin in the game” is the best way to ensure optimal public policy outcomes. I will use the “skin in the game–conscription” variant to illustrate a sample argument:

“Skin in the game–conscription” relies on the following assumptions:

  1. Imbalance in the distribution of potential consequences for failure is a major societal problem.
  2. Politicians feel free to wage indecisive, quagmire-like wars of convenience with ill-defined objectives.
  3. The burden on a few soldiers instead of the many is morally unfair and threatens collective cohesion in the larger society.
  4. Distributing potential consequence will deter politicians from waging unnecessary wars, rectify a moral error, and restrict wars fought to those of necessity and those with well-formulated political objectives.

However, as I will explain in Part III, the problem with these assumptions is that they all seem to raise the larger societal stakes. And that paradoxically seems to lead back to conditions under which “failure is not an option” becomes an ideal forcing mechanism — which seems to create the very lopsided disasters that “skin in the game” is at least partially designed to prevent…

The Fair Jilt Holiday Gift Guide

By Marc Allen

In case you missed it, the British Library just released a gazillion images into the public domain. They’ve posted them on their Flickr feed (pause for a moment of Louis C.K. childlike wonder at technology).

Because they’re in the public domain, you can do whatever your heart desires to these images. Forgot to get your aunt a gift? Photoshop her face into this scene, print it out, frame it, and voilà!

Image taken from page 61 of '[The Lure of Venus; or, A Harlot's Progress: a heroi-comical poem [descriptive of Hogarth's prints].]'

What about that fancy friend who wears a top hat and monocle and goes around town on his twenty-five-foot camel? Maybe he’ll appreciate this 19th century print:

Image taken from page 297 of 'Pariserliv i Firserne ... Med talrige Illustrationer'

Or how about this great 1722 portrait of John Locke:

Image taken from page 8 of '[The Works of John Locke, etc. (The Remains of John Locke ... Published from his original manuscripts.-An account of the life and writings of John Locke [by J. Le Clerc]. The third edition, etc.) [With a portrait.]]'

For the pet lover in your life, give them this 1885 image of what appears to be a crane French kissing a wolf:

Image taken from page 17 of 'Illustrated Poems and Songs for Young People. Edited by Mrs. Sale Barker'

And here’s an 1894 print that I know a few of my fellow Jilters will appreciate:

Image taken from page 57 of 'Sea Trips from London to Margate, Ramsgate, Boulogne, etc'

Anyway, this collection is a staggering reflection on the last four hundred years of human history. It should be enough to get you through your work week. Happy Holidays.

A Test Designed to Provoke an Emotional Response

By Kindred Winecoff

For several years now Black Mirror has been my favorite television show despite the fact that U.S. audiences could only view it using, erm, “less-legal” methods. Apparently the show is now airing on something called the Audience Network on DirecTV and I’d encourage folks to give it a try.

Slate has a Slate-y take on the series, but here is the gist of what you need to know: each episode has a completely different cast and crew. There is no recurring plot. There are no returning characters. The writers and directors are all different from show to show as well. The only consistency is the techno-dystopian theme of each episode, which has some resonance in the age of Snowden and Facebook face-recognition algorithms.

In some ways Black Mirror‘s closest analogue is The Twilight Zone, but with one key difference: there is little that is surreal or absurdist about the premises of the episodes. The show is futuristic but just barely: the worlds in the show look functionally the same as our own, except that technology is extrapolated two or three short steps beyond where it presently is. There are no phasers or teleportation devices, just slightly better artificial intelligence. In some episodes the entire narrative is possible given existing technology. The show’s name refers both to an unpowered LCD screen and to an Arcade Fire song… tangible things that presently exist.

Refreshingly, the show also refuses to be dystopian in any one particular way. The first episode involves a terrorist plot to humiliate a head of state. Another imagines one possible future of Google Glass: the ability to revisit video of every event in your life’s past… no more need for hazy memories to settle a he-said-she-said dispute. To bear the loss of a loved one, why not download a lifetime’s social network data into a replicant body? It’d be like they never left. In several cases the characters believe they have overcome part of the human condition via technology, only to realize that problems frequently require something other than a technical solution.

But that is not the fault of the technology. The show’s creator, Charlie Brooker, is an avid user of Twitter and a casual technology optimist. His chosen medium is television, not print. The takeaway from the show is not to turn off the smartphone, disconnect from Facebook, and re-learn your penmanship. The technology is never the real problem. The people are. It is a point that frequently gets lost in discussions over the relationship between technology and society. And that is why the show is such a needed interjection into the culture.

 

Beyoncé’s “Flawless” Feminism

By Amanda Grigg

Unless you live under a rock (not that kind, Bey) you know that Beyoncé just surprise-released a visual album featuring 14 new songs and 17 new videos. Obviously the internet came pretty near to exploding and lost the ability to even can. On track 11, “Flawless,” previously released as “Bow Down,” Beyoncé samples extensively from a TEDx talk, “We Should All Be Feminists,” given by Nigerian author and Orange Prize winner Chimamanda Ngozi Adichie. Here’s the speech/verse, taken from Rap Genius which, no surprise, had yet to annotate it as of this post (afternoon project, anyone?).

“We teach girls to shrink themselves, to make themselves smaller. We say to girls, you can have ambition, but not too much. You should aim to be successful, but not too successful. Otherwise, you would threaten the man. Because I am female, I am expected to aspire to marriage. I am expected to make my life choices always keeping in mind that marriage is the most important. Now marriage can be a source of joy and love and mutual support but why do we teach girls to aspire to marriage and we don’t teach boys the same? We raise girls to see each other as competitors not for jobs or accomplishments, which I think can be a good thing, but for the attention of men. We teach girls that they cannot be sexual beings in the way that boys are. Feminist: the person who believes in the social, political, and economic equality of the sexes”

Get. It. Chimamanda. Beyoncé’s inclusion of this sample on the track is particularly interesting because the original release of the song as “Bow Down/I Been On” saw some serious (largely white) feminist backlash, and ranting about her anti-feminist turn from (large and white) Rush Limbaugh. In both versions of the song Beyoncé repeats the line “bow down bitches” (haters to the ground), and the anti-sisterhoodness of it all was too much for some feminist critics. Defenders argued that the song was feminist, or at least not antifeminist. Here’s Sesalie Bowen of Feministing, highlighting the importance of context in interpreting the song:

And those self-affirming, self-glorifying lyrics? Those descend from a tradition of self-glorifying verses that the creators of hip hop took to in rap battles and cyphers. That is the culture of hip hop to say: I’m the shit. Respect it. Bow down to it. I can’t say it enough: Context is so important.

And here’s The Root editor Akoto Afori-Atta on Beyonce’s brand of feminism:

Also, here’s what is central to her brand of feminism: the option to play like the boys play…If men can boast about their accolades on a track, so can Bey and any other woman who chooses to. In that sense, isn’t “Bow Down” pro-women?

The criticism eventually led to interviewers asking Beyoncé to make her position on the F-word clear, and to Beyoncé hesitantly declaring herself a feminist in British Vogue. Beyoncé has clearly had some doubts about the feminist label, which makes sense considering the history of white feminism excluding black women, the public backlash to prominent feminists, and the feminist backlash to imperfectly feminist celebrities. Here’s Beyoncé, after telling Vogue she’s a “feminist, I guess”:

I do believe in equality and that we have a way to go and it’s something that’s pushed aside and something that we have been conditioned to accept. … But I’m happily married. I love my husband.

It seems like Beyoncé was working through what all feminists work (and re-work) through, which is finding a place for yourself in a feminist movement made up of a million different versions of feminism, and deciding whether you’re comfortable labeling yourself a feminist when you disagree (potentially fundamentally) with others who share that label. With the release of this song, and the sample of Adichie, it sounds like she’s found her place.

A Michigander on Michigan’s “Rape Insurance” Bill

By Amanda Grigg

The Michigan legislature just passed a bill banning all insurance plans in the state from covering abortions unless the woman’s life is in danger. The bill was passed as the result of a citizen-initiated legislative petition, a process that makes me question this whole democracy thing. Because the bill was initiated by a citizen petition, it isn’t subject to approval or veto from the Governor – this is key for proponents, because a similar bill was vetoed just last year. Thanks to the magic of citizen-initiated legislation, 315,477 people, or about 4% of Michigan’s voters, were able to circumvent the Governor’s objections by putting veto-proof legislation before lawmakers.

According to the nonprofit Guttmacher Institute, Michigan is joining 23 other states that have restricted abortion coverage in plans offered through Obamacare insurance exchanges and an elite, hair-pullingly infuriating group of 8 states restricting coverage of abortion in all insurance plans offered in the state.


As with most of the recent attacks on reproductive freedom, this bill is predicted to disproportionately impact low-income women, who are much less likely to be able to afford the $500 or $600 required to pay for the procedure out of pocket. Of course, under the 1977 Hyde Amendment banning Medicaid coverage of abortion, many low-income women are already required to pay out of pocket. And even the terrible, no good, very bad Hyde Amendment includes exceptions in cases of rape or incest.

Proponents framed the bill as an effort to ensure that those opposed to abortion would not be “paying for it.” As Right to Life of Michigan President Barbara Listing explained:

“Michigan citizens do not want to pay for someone else’s abortion with their tax dollars or health insurance premiums.”

So, barring an injunction and referendum vote (which will happen, lady haters to the left), women in Michigan will have to buy a separate abortion insurance rider before they become pregnant in order to have the procedure covered. It’s like flood insurance, but for your uterus/reproductive rights/basic human agency.

If this leaves you like…

How?

You aren’t alone. In fact, you’re in the same boat as self-declared pro-lifer and avowed Republican Governor Rick Snyder, who vetoed the bill last year, saying that he does not “believe it is appropriate to tell a woman who becomes pregnant due to a rape that she needed to select elective insurance coverage.” Opponents of the bill have taken a similar line in arguing against it, emphasizing the fact that abortions in cases of rape are not covered, so women will have to purchase the rider to insure themselves in case they’re raped.

Calling the bill “rape insurance” is smart because it draws attention to the fact that any woman capable of conceiving is harmed by this bill, regardless of whether she might otherwise imagine herself needing an abortion. It gets around the “I’m not one of thoooose women” problem. It also focuses on what is seemingly one of the most sympathetic cases for abortion, and one for which exceptions have been carved out in many laws restricting access. Problematically, this strategic focus on rape (which is pretty common in reproductive rights battles) can reinforce the idea that there is a clear division between “good” and “bad” abortions and that the line encompassing good/acceptable/sympathetic abortions falls somewhere just beyond rape.

So, that’s what’s going on. Now, a few things the current debate and the emphasis on “rape insurance” don’t address but should.

First, pro-lifers think they have, and are successfully asserting, the right to control their money long after it’s no longer theirs. The justification offered by proponents of the legislation is that they don’t want their money to pay for abortions. Here’s Amanda Marcotte for Slate:

“It’s an argument that assumes if money has ever passed through the pocket of an anti-choicer, they get to retain control over it no matter who else legally has control of it now. This is not how transfer of control of funds works with anything else, as I explain in this week’s podcast. If I give you anything, it belongs to you and not me. But anti-choicers are arguing that when it comes to reproductive health coverage, that rule should be suspended. If they pay an insurance company, then they should retain control over the money that belongs to the insurance company, even when the insurance company is offering a service to another person, who also purchased coverage.”

Second, this kind of group veto of specific uses of public and private funds isn’t tenable in a multicultural democracy. Again, proponents justify the ban by saying that because they oppose abortion, they shouldn’t have to pay for it even in the most indirect ways. Sorry guys, that’s not how living in the world works. Get out of your Veruca Salt bubble and join the rest of us in the land of compromise. We simply can’t let every group with (even a deeply-founded) opposition to something refuse to put their (or any) funds towards it, especially when it’s a vital part of ensuring the health and well-being of others. This is the argument people are making when they say things like, “what if I don’t support heart transplants or blood transfusions?” It’s also part of the immortal Toby Ziegler’s argument for the NEA:

“I don’t know from where you get the idea that taxpayers shouldn’t have to pay for anything of which they disapprove. Lots of ’em don’t like tanks… even more don’t like Congress.”

This is a complicated issue which I won’t delve into here (later, I promise!); for now I’ll just say that it’s at the core of these debates and it deserves more attention.

Of course this would all be a lot less complicated if we weren’t talking about abortion, a health issue for which medical authority carries little weight. Exhibit A: despite the fact that the vast majority of medical professionals would argue otherwise, Right to Life of Michigan President Barbara Listing feels comfortable asserting publicly that “Abortion is not health care.”

Michigan resident Tamesha Means would probably disagree. And I would argue that the bill does acknowledge that abortion can be an important part of health care by including an exemption for the life of the mother (but please don’t tell Barbara that).

Barbara is also kind of right. Even though abortion is a medical procedure and a part of basic health care, it isn’t just about physical health, as feminists have long argued and as opponents’ language of “rape insurance” implies. It’s also about dignity and autonomy and sex equality. That makes it much more complicated to discuss abortion in the context of health insurance than it would be to discuss something seen as purely an issue of health.

On the upside, only about 3% of abortions in Michigan in 2012 were covered by insurance so…nope. Forget that, no upside. Bad to worse.

You Never Give Me Your Money/ You Only Give Me Your Funny Paper

By Kindred Winecoff

After the Pope’s recent screed I wrote on Facebook that I didn’t understand why he was getting so much attention, particularly from non-devout Catholics. After all, hasn’t a very long history demonstrated that anything the Pope writes should be taken as utter horseshit until conclusively proven otherwise? The most common pull-quote in the 200-plus-page letter is this:

[S]ome people continue to defend trickle-down theories which assume that economic growth, encouraged by a free market, will inevitably succeed in bringing about greater justice and inclusiveness in the world. This opinion, which has never been confirmed by the facts, expresses a crude and naïve trust in the goodness of those wielding economic power and in the sacralized workings of the prevailing economic system. Meanwhile, the excluded are still waiting.

Others have noted that his empirical claim is dubious at best. I’d also note that his theoretical claim (“inevitable”) is a straw man. People have always been concerned with the “goodness of those wielding economic power”. If we’re being consistent we’d also be concerned with the “goodness” of what is one of the wealthiest institutions on earth, and one of the least transparent. This is why we in the decadent West have regulatory institutions, progressive taxation, and a welfare state deployed by elected representatives of the people. No similar checks and balances in Vatican City.

Meanwhile, as Hitchens noted in his polemic against the “ghoul of Calcutta,” the Pope’s own organization has been less a friend of the poor than of poverty. The church opposes the liberation of women and the kind of demographic planning that is a precondition for escaping Malthusian social dynamics. Historically the church has actively worked to promote ignorance, oppose scientific inquiry, and limit the erosion of its own prestige by the rising bourgeois and working classes — the very things that have enhanced human dignity. At present it refuses to divest any of its substantial assets to improve the material lives of the suffering. If a capitalist can be defined by a logic of accumulation, then there has been no greater capitalist in world history than the church in Rome. These are not actions that demonstrate concern for the least among us (and we will know them by their actions).

Until these policies and doctrines are not only abolished but thoroughly repudiated I won’t take seriously lectures from Jorge Mario Bergoglio on questions of political economy. This should be obvious to practically everyone, and I would encourage well-meaning people of the left to not accept poisoned friendships so easily.

But I hadn’t actually considered another aspect of this. Among the world’s poorest the situation is the exact opposite of what Bergoglio describes.

“All the data show households with humbler jobs and lower incomes enjoying faster income growth than those with fancier jobs and higher incomes,” observe Batson and Gatley. “China’s income inequality has been quietly getting better.”

Via Scott Sumner, who adds that the Pope should really be less Euro-centric. Indeed he should.

UPDATE: If I’d noticed this FT exposé on the financial malpractice of the Vatican, published a few days ago, I would’ve worked it into this post. I didn’t, until now, so I’ll just link to it. It’s pretty bad.

Stop Praising Mandela! Keep Burying Apartheid!

Enough with the white-haired sage: I want more of this guy.

Watching the Mandela coverage – obits, eulogies, reflections, quotes – pour over my Facebook feed yesterday, I had one single reflexive thought: “Fuck apartheid!”

To listen to the majority of the obits, you’d think Mandela’s greatest accomplishments were becoming president and meeting Bono (they conveniently leave out his and his successor’s role in crafting a devastating AIDS policy). Yes, the fact of his presidency was historic. Yes, the fact that South Africa did not become, say, Zimbabwe is near-miraculous, and Mandela’s work for peace and reconciliation before and during his presidency was part of that (I give most of the credit to the millions of South Africans he inspired).

Zimbabwe. While Mandela was in prison, Zimbabwe, then a brutal colonial state called Rhodesia, was torn apart in a bloody civil war for independence that resulted in three decades of violent, inept African dictatorship. This happened for one reason: apartheid. Apartheid violently oppresses until it inevitably foments violent resistance: it is good for nothing else. And apartheid is not receiving sufficient attention in the wake of Mandela’s death. Which is to say: apartheid is not the central focus of the coverage.

Apartheid is why Mandela is Mandela. Apartheid is why he spent nearly three decades in prison. (He was president for only five years, but that’s practically all we’re hearing about.) Apartheid is why a man who is now being compared to Washington and Lincoln was, for the majority of his life, compared to Castro (and why he, in turn, admired Castro).

Comparisons to Lincoln are particularly stomach-churning because they simultaneously overstate and understate Mandela’s achievements. They conflate or too easily link slavery and apartheid. Lincoln waged a horrific war to dismantle a slave economy, which was then replaced by an apartheid-style society[1]. Lincoln’s program was not wholly anti-racist. Mandela briefly fought, was jailed for, and came to symbolize the international fight against apartheid, which is in so many ways more difficult to exorcise than slavery (we’re still learning that in the United States). Mandela’s program was almost completely anti-racist.

The praise is doubly sickening because it pulls a curtain over the very recent past. Throughout the 1980s, most of the former colonial powers and the United States not only approved of the apartheid government, they supported it. In the U.S., one political party deserves the brunt of the blame. Sam Kleiner (linked above) writes:

Officially, the goal of the Reagan administration was to end apartheid. But its behind-the-scenes work revealed a startling degree of comfort with the South African regime – or at least ignorance of how apartheid worked. For a July 1986 speech to the World Affairs Council in Washington D.C., Reagan rejected a moderate State Department draft and instead instructed his speechwriter, Pat Buchanan, to draft a version arguing that Mandela’s African National Congress (ANC) employed “terrorist tactics” and “proclaims a goal of creating a communist state.” (Buchanan later dismissed Mandela as a “train-bomber” and defended the hardline position.) Reagan himself never seemed to really understand the moral repugnance of apartheid. He described the system in a 1988 interview with ABC’s Sam Donaldson as “a tribal policy more than … a racial policy.”

[…]

Some of today’s most recognizable political operatives also played a role in pushing the apartheid government’s agenda. In 1985, following his term as national chair of College Republicans, Grover Norquist was brought to South Africa for a conservative conference, where he advised a pro-apartheid student group on how to more effectively make its case to the American public. While there, he criticized anti-apartheid activists on American college campuses: Apartheid “is the one foreign policy debate that the Left can get involved in and feel that they have the moral high ground,” he said, adding that South Africa was a “complicated situation.”

The praise for Mandela has also pulled the curtain over apartheid. Its particularities, its history, and its horrors – a truly totalitarian system that imprisoned black South Africans within their own communities – have not been mentioned in the past 36 hours nearly as much as Truth and Reconciliation or Mandela’s pleas against anti-white racism. In American terms: we’re hearing too much Dr. King, not enough Malcolm X.

Further, the whole concept of apartheid is being ignored. The word, like fascism, has an historically and politically local origin, but just as governments beyond Mussolini’s Italy can reasonably be called fascist, so governments beyond pre-1990 South Africa can reasonably be described as apartheid. The United States was an apartheid nation from 1863 until…well, take your pick. England ran a global apartheid empire throughout the 19th century. There are apartheid governments in the world today.

Mandela was not Gandhi. Mandela did not shy away from the option of armed revolt. Mandela believed in African autonomy. Mandela fought against apartheid, colonialism, and white supremacy, and he was jailed because the South African government feared (rightly) that he’d make good on his word. Almost the entire Western world was against him; he was against it. But to read the obituaries and eulogies, you’d think he was an activist without an enemy in the world. Enough! Mandela had enemies in 1955. He had enemies in 1962. He had enemies in 1985, 1990, and 1994. And he still has enemies today. Let’s name names.

***

[1] History has weighed against the word “segregation,” but I don’t take a totally dim view of all segregated systems, at least not in theory. Throughout the 20th century, many black activists advocated segregated institutions to help form a separate, thriving black society, a “nation within a nation.” Jewish society was frequently cited as a model. But Southern governments would not allow such institutions to thrive, and integration became the best option in the minds of most black leaders. Even today, many scholars and students of black history believe the separatists may have been right.

What happens when the Cold War never happened?

Rep. Duncan Hunter – bombs (sort of) away!

Responses to Salon’s left-wing alarmism aren’t really Jilt-worthy, but this piece on U.S. Representative Duncan D. Hunter (R-California) caught my attention. Overall, the short summary of Hunter’s recent C-SPAN interview highlights his ambivalence on foreign policy: he wants to nuke Iran, but not intervene. He will support Israel militarily, but wants to stay out of the Middle East.

This underscores the G.O.P.’s mixture of militarism and isolationism that vomitous political reporters find so enthralling (it’s a paradox! a civil war! Bush’s profligate preemption doctrine vs. the Tea Party’s “fuck the world” frugality!). But this alleged rift has, in fact, been part of the Republican party’s ideological makeup since at least the 1940s, back when the archconservative Senator Robert Taft (R-Ohio) stood beside the socialist President Franklin D. Roosevelt (D-America) and vocally supported the U.S. entry into World War II. True isolationist conservatism went extinct. The final death blow came in the late 1950s, when most Republicans were left-of-center and the conservative movement was so marginalized that its agenda could be set by a group of men small enough to meet comfortably in a coat check. Around 11:36 p.m. on the evening of August 23, 1955, William F. Buckley – sitting alone on his yawl anchored off the coast of Fishers Island, Connecticut – determined that laissez-faire economics, religious nationalism, and Cold War militarism were the perfect cocktail for a cultural hegemon. It took a few years, but eventually Buckley got all conservatives and 39% of everyone else thinking his way. Sure, every now and then a Pat Buchanan waxed rhapsodic about Charlie Lindbergh or a Republican Congress opposed intervention in Kosovo. But these were fringe figures and political maneuvers. On the whole, mainstream conservatives are not isolationists. They’re militant anti-internationalists: pro-war, anti-U.N. They either lead coalitions into war or they go it alone. Kings of the world. The Tea Party will be no exception.

So the Salon article is basically pointless. Except it got me thinking about the Cold War.

Rep. Hunter’s “tactical nuclear devices” reminded me of Henry Kissinger’s early work on limited nuclear exchanges. Salon likely finds Kissinger as distasteful as Hunter, but they’re wrong if they think Hunter represents a newfangled brand of crazy: it has always been there. Prior to Reagan’s second term, the possibility that a nuclear war would occur was constant. And as long as the possibility hung in the air, you had people defending the proliferation (and use!) of nuclear weapons. During the Truman years, a nuclear exchange seemed inevitable – a consequence of the Korean War, or maybe Berlin. Terrifying, right? Not at the time. A majority of Americans wanted General MacArthur to drop atom bombs along the Chinese coast. They were looking to pick an atomic fight.

Within a decade, things had changed. Nuclear war was synonymous with apocalypse. But nuclear weapons were also a horrific fact of life. With the anti-proliferation movement in its infancy, the majority of Americans fretted over Soviet superiority and missile gaps; they were no longer looking for a fight but, by the late ’50s, they felt desperate for protection. And so proliferation marched on…

Around this time, the idea that a nuclear exchange could be very small, limited, and practical seemed totally obvious to academics and strategists like Kissinger. Sure, the idea would have sounded bizarre and horrifying to the general public (except, maybe, to the nearly 40% of voters who supported Barry Goldwater in 1964). But in theory, you could have a little nuclear war between China and Korea, maybe a medium-sized nuclear exchange between China and Russia, without severe escalation. In 2010, Robert Kaplan practically licked his lips as he reread Kissinger’s early work, imagining little nuclear exchanges between India and Pakistan, Israel and Iran, North Korea and whoever. No biggie!

But throughout the Cold War – in the midst of unprecedented armament, extremely aggressive foreign policy, and often lax attitudes toward the prospect of thermonuclear war – eight U.S. presidents and six of their Soviet counterparts worked earnestly to decelerate and ultimately halt the production of nuclear weapons. With the possible exception of Nixon at his craziest, no Cold War president seriously considered launching their nuclear weapons or provoking the U.S.S.R. to launch theirs. The possible value of small, tactical strikes was entertained (esp. during the Korean War, the Cuban Missile Crisis, and Vietnam) but invariably rejected as politically and morally untenable.

Now we have a rising generation of political leaders who didn’t experience the Cold War. And unlike, say, the Civil War or the Great Depression, the Cold War has not left an indelible impression on American culture. (Don’t confuse the Cold War with total U.S. economic and military supremacy 1946 – 1973: that we remember. That we want back.) George W. Bush’s “War on Terror” should have looked and sounded a bit like the Cold War, but it didn’t. Sure, he threw a few anti-communist tropes around, but he was heavier on the World War II language. “Axis of evil” is the obvious example. He frequently compared our prolonged presence in Iraq not to Korea, a not-terrible analogy, but to Japan and Germany. This was partially to improve the forecast: Japan and Germany are more stable than the Korean peninsula. But he also understood that references to America’s WWII foes were more palatable. More awesome. Then the Great Recession hit, and WWII analogies were swapped for Great Depression analogies. Semi-thoughtful allusions to the recessions of the 1970s and ’80s popped up, but they never mentioned Brezhnev.

Despite the fact that the United States ruled the world throughout and because of the Cold War, its Cold War narrative is startlingly vague: something to do with communism, cowardice, spies, submarines, missiles, and a wall. Things were tense for a while, but eventually we won because we got the idea to take down the wall.

Okay, I exaggerate a little. But here’s what bothers me: we have a rising generation of political leaders like Rep. Hunter who don’t really remember the Cold War, who don’t remember arms races, and who aren’t interested in history except as a source of propaganda and rhetoric. They possess no clear concept of thermonuclear war, have never imagined what it might look like, have never felt helpless against its apparent inevitability, and yet they speak casually about a nuclear exchange: in part because they’re idiots but also because they missed the War. We can mock them for not doing their homework, but at the end of the day, they’re much closer to the button than we’ll ever be. And I guess that’s my point: there’s still a button!

Today, Israel (a nuclear power) is terrified by a potential nuclear enemy. I don’t blame them. But the United States existed for forty years with an armed, strike-ready nuclear enemy just across the pole. During most of that time, American life was uneventful; domestic issues dominated U.S. politics. But when the Cold War turned even lukewarm, the fate of modernity seemed to dangle over an abyss. It was terrifying. And very soon, the men and women with fingers on the button won’t remember a time when catastrophic nuclear warfare was a serious geopolitical concern and a part-time cultural obsession.

Of course, this may not matter in the long run. Political leaders born after 1991 may hesitate to press the button as much as Jimmy Carter would have.

But something unprecedented is happening, and at the very least it’s worth noting: within a decade or so, thermonuclear weapons will continue to exist but virtually no one will remember the conditions that created them.

I don’t.