Relationship of Command

By Adam Elkus

Over at War on the Rocks, I have a column that mixes complexity theory with recent counterterrorism analysis of the problems in analyzing al-Qaeda’s political-military command structure. I was struck while writing it, though, by how much the repeated application of “decentralized” to al-Qaeda is the result of a systematic selection bias in strategic studies regarding command and organization.

When we think about what the “right” style of command is, we assume a pyramid-like structure with a commander and successive tiers of subordinate structures that execute orders with varying degrees of automaticity. What degree of autonomy subordinates may expect is often determined by context. Because of the historical difficulties of naval communication, and because naval technology and organization inherently lead to federation, a naval command system differs fundamentally from a ground-pounder command system. But the coherence of the system itself is usually taken for granted, embedded in the idea that “unity of command” is a principle of war.

However, unity of command assumes that the subordinate is at least theoretically obligated to accept and obey orders. If the subordinate, for example, is a noble with control over his own finances and his own pool of manpower, he must be incentivized in some shape or form to obey. Defection of entire fighting units is a realistic risk. In the Chinese novel Romance of the Three Kingdoms, a commander who felt emotionally slighted could defect with his entire formation. Nor is this merely a historical problem: the Afghan Taliban rapidly collapsed in 2001 when it became clear that the US and the Northern Alliance possessed a preponderance of power. Unity of command is a collective action problem, and the coherence of any political-military organization is in theory stochastic.
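
A toy sketch can make the “stochastic” point concrete. This is my own illustration, not anything from the column, and every commander and number in it is hypothetical: each subordinate defects once the share of peers who have already defected exceeds his private loyalty threshold, so coalitions that are identical on paper hold together in some runs and disintegrate in others.

```python
import random

def simulate_cohesion(n_commanders=20, loyalty_mean=0.5, shock=0.15, trials=1000):
    """Toy threshold model of unity of command as a collective action problem.

    Each commander draws a random loyalty threshold and defects once the
    fraction of peers who have already defected exceeds it. An exogenous
    shock (say, news of a preponderant enemy) flips the least loyal first,
    and the cascade either fizzles or collapses the whole coalition.
    All parameters are illustrative, not estimates of anything.
    """
    collapses = 0
    for _ in range(trials):
        thresholds = [random.gauss(loyalty_mean, 0.2) for _ in range(n_commanders)]
        defected = [t < shock for t in thresholds]  # the shock flips the least loyal
        changed = True
        while changed:
            changed = False
            frac = sum(defected) / n_commanders
            for i, t in enumerate(thresholds):
                if not defected[i] and frac > t:
                    defected[i] = True
                    changed = True
        if all(defected):
            collapses += 1
    return collapses / trials

if __name__ == "__main__":
    # Identical orders of battle, wildly different survival odds run to run.
    print(f"P(total collapse) ~= {simulate_cohesion():.2f}")
```

Nothing about the model is realistic; the point is only that cohesion lives in interacting expectations rather than in the org chart.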

At the most individual level, it is not rational to fight as a soldier to begin with: you risk your life, and often receive no special payoff. In military history, the coherence of military organizations at the most atomic level is often a function of altruistic punishment or of social and material rewards. One of the most famous examples of the former is the Victorian novel The Four Feathers, in which a British imperial soldier who resigns his commission rather than deploy to the Sudan is shamed by his bride-to-be and his compatriots.

In the quantitative civil war field, these considerations are not unusual. But they ought to be applied more widely to the study of strategic theory and history.

Democracy Is Not Magic

By Kindred Winecoff

Besir Ceka, a former grad student colleague of two Jilters, has a post at the European Politics and Policy blog of the London School of Economics and Political Science describing some of his research. The key thing is this:

However what’s most fascinating is that, when compared to national governments, the EU is the more trusted level of government by Europeans, and by a long stretch (see Figure 3). This has been the case for a while and it is a fact which has largely been ignored by Eurosceptics. Despite much debate about the democratic deficit in the EU, the legitimacy deficit of national governments, as measured by the level of trust that citizens put in them, is far more acute than that of the EU. Over the last decade, the “trustworthiness gap” between the EU and national governments has been as high as 25 per cent.

Bold added.

For a long time the bulk of political scientists, theorists, and commentators have associated “democracy” with “good governance”. There have always been major problems with the causal mechanism, but the European Union is a salient example: its least democratic governance structures also enjoy the most public trust. Meanwhile, in the US, public trust in the government is near its all-time low of 17%. What does that mean? If democracy aggregates the sum of private wills, and the sum of private wills doesn’t care much for democratic outcomes, then… well, you tell me.

For a somewhat different (and even more skeptical) take on this dynamic see Henry Farrell in Aeon a few months back.

Obromacare or MotherbroXXX

By Amanda Grigg

Last week everyone from The Atlantic to Buzzfeed covered a new Colorado Obamacare campaign targeting “bros.” The campaign can be found at “gotinsurancecolorado.org” and is part of the Thanks Obamacare campaign run by ProgressNow Colorado and the Colorado Consumer Health Initiative. You can also reach the site via “doyougotinsurance.com”, which is, clearly, a more bro-friendly URL.

got insurance, bro?

Of course the Colorado campaign’s real aim is to get the attention of healthy, uninsured young people, a group that pretty much everyone agrees is essential to the success of the Affordable Care Act. Because they rarely use medical services, these “young invincibles” are cheap to insure, and thus their enrollment is necessary to offset the costs of older, less healthy patients. It just so happens that most of these healthy, uninsured young people (57%) are male. This explains both Obromacare and the Koch brothers’ recent attempts to get bros to “opt-out” via events at campus bars offering free beer and iPad drawings. Unfortunately for proponents of the ACA, healthy young men without pre-existing conditions are generally thought to benefit the least from Obamacare, which makes them both vital and possibly resistant to health care reform. As a result, we get to watch as everyone and their mother (literally) bro-down.
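
The actuarial logic here is plain pooling arithmetic. A minimal sketch, with made-up round numbers rather than actual ACA figures: under community rating the premium tracks the pool’s average expected cost, so enrolling cheap-to-insure young people pulls the average down for everyone.

```python
def pooled_premium(groups):
    """Stylized community-rated premium: the average expected cost of the pool.

    `groups` is a list of (enrollee_count, expected_annual_cost) pairs.
    All numbers below are hypothetical, chosen only to show the mechanism.
    """
    people = sum(n for n, _ in groups)
    cost = sum(n * c for n, c in groups)
    return cost / people

older_sicker = (1_000, 8_000)       # hypothetical: $8,000/yr expected cost each
young_invincibles = (1_000, 1_500)  # hypothetical: cheap "young invincible" enrollees

print(pooled_premium([older_sicker]))                     # 8000.0
print(pooled_premium([older_sicker, young_invincibles]))  # 4750.0
```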

Efforts to promote the health care law among young invincibles have also targeted mothers. Ads on Facebook and recipe websites admonish, “Mom knows best, get insurance!” and cheeky AARP e-cards read “Get health insurance so I can stop pestering you to sign up and start pestering you to get married.” As Democratic pollster Celinda Lake explained to the Washington Post, “it will be the moms of America who are going to decide if their families get coverage…They will decide and then insist their children and husbands sign up.” Polling backs Lake up: many young uninsured people, and particularly young men, cite their mothers as their most trusted source of information about the health care law.

So, a big part of the explanation for these ads can be found in policy and the fact that high enrollment among healthy, uninsured young men is necessary to make Obamacare work. The campaigns are highly gendered because they’re targeting very specific, gendered audiences (mothers and young men). And they’re a little cheesy because bureaucrats and the Koch brothers trying to identify with the cool kids is a little like your parents trying to talk to you about Miley Cyrus. Or your great aunt publicly chastising you for posting bridesthrowingcats.com on Facebook because “who would do that to a cat?”

More troubling than the seemingly inevitable pandering to bros is the misleading use of healthy young men as exemplars of the harms of health care reform. Because of their unique relationship to the ACA, healthy 25-year-old men have become the darlings of health care reform critics, who conveniently hold them up as (purportedly) randomly chosen examples that illustrate how the health care law works and why it will raise rates (Jonathan Chait does a good job of addressing the problems with this tactic). Put simply, the healthy 25-year-old male is a poor example on which to base arguments about the ACA generally, because his is one of very few cases in which individuals may see rates go up, and because the group makes up a small portion of the total population. And as Sarah Kliff explains, the structure of the ACA makes it difficult to generalize even about this relatively small, homogeneous group. Most notably, the example is misleading because it’s almost inevitable that this bro will benefit from Obamacare, both directly and indirectly, in his lifetime: if he gets sick, becomes poor, lives past 25, or cares about anyone who is or becomes sick. And let’s not even get started on how straight men benefit from the birth control mandate.

The Persistence of American Financial Power

By Kindred Winecoff

The Financial Times asks whether American financial prominence will endure. These articles recur with far too much frequency, in particular every time there’s a policy impasse. I have an essay (with the excellent Sarah Bauerle Danzman) on that topic in the most recent issue of Symposium explaining why the answer is “yes”. One key bit:

These [conventional] assessments see power as a result of the internal attributes of national economies: large economies with attractive financial sectors have power, while weaker ones do not. Accordingly, the U.S. decline in the share of global trade and income, and its domestic financial instability, should diminish its influence. But this focus fails to consider the ways in which the global financial network is, in fact, a complex and adaptive system. Power within this system does not depend solely on domestic attributes, but on the distribution of financial relationships that exists globally. In other words, the most well-connected economies, not just the biggest, are the most powerful. By extension, change within this structure does not follow a linear process, and economies that are initially more advantaged will continue to grow as the system develops.

The difference between these two approaches is significant. When we conceptualize the international financial system as a network, we see that the U.S. has become more central since 2007, not less. Rather than shift from West-to-East, global financial actors have responded to crisis by reorganizing around American capital to a remarkable extent. This is partially due to proactive responses to the crisis by policymakers such as the Federal Reserve, but it is also the result of factors outside the U.S. Above all, American capital markets remain attractive because complex networks contain strong path dependencies, which reinforce the core position of prominent countries while keeping potential challengers in the periphery. That is to say, policymakers and market players were limited in the decisions they could take because of factors that had already been locked in. As a result, the structure of the global financial system keeps the U.S. at the core and will continue to do so unless the entire network is fragmented, as it was during the 1930s when Great Britain lost its dominance.

Read the whole thing.
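
For intuition on the “well-connected, not just biggest” point, here is a toy sketch of eigenvector centrality, the standard network-science way of formalizing it. The labels and weights below are invented and this is not the model from our essay; the point is only that a mid-sized hub linked to other well-linked economies can outrank a bigger but peripheral one.

```python
# Toy eigenvector centrality via power iteration. Pure illustration:
# the economies and exposure weights are invented, not data.
edges = {
    ("Hub", "A"): 5, ("Hub", "B"): 5, ("Hub", "C"): 5,  # mid-sized but densely linked
    ("Big", "A"): 2,                                     # large economy, one thin link
    ("A", "B"): 1, ("B", "C"): 1,
}

nodes = sorted({n for pair in edges for n in pair})
idx = {n: i for i, n in enumerate(nodes)}
adj = [[0.0] * len(nodes) for _ in nodes]
for (u, v), w in edges.items():
    adj[idx[u]][idx[v]] = adj[idx[v]][idx[u]] = float(w)

score = [1.0] * len(nodes)
for _ in range(100):  # power iteration converges to the leading eigenvector
    score = [sum(adj[i][j] * score[j] for j in range(len(nodes)))
             for i in range(len(nodes))]
    top = max(score)
    score = [s / top for s in score]

for n in sorted(nodes, key=lambda n: -score[idx[n]]):
    print(f"{n}: {score[idx[n]]:.2f}")  # the well-linked "Hub" outranks the thinly linked "Big"
```

Centrality of this kind compounds, since being linked to well-linked nodes raises your own score, and that feedback is where the path dependence in the essay comes from.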

For those not familiar, Symposium is an exciting new magazine. It has aspirations to fill part of the void left by the dearly missed Lingua Franca, and it’s an excellent venue for presenting research in a form palatable to the public. Working with them was a great experience, and I’d strongly recommend that folks subscribe, as well as consider them as a future outlet for their work.

This Is Your Brain on Books

By Seth Studer

Earlier this week, OnFiction published the results of a recent study on the biological effects of reading fiction. Researchers at Emory University used MRI scanners to track the brain activity of nineteen participants, all reading the same novel (Robert Harris’s historical thriller Pompeii). The researchers focused on “functional connectivity,” the degree to which activity in one region of the brain prompts or correlates with activity in another region. Basically, your brain’s ability to talk to itself. Participants’ brains were scanned while reading, immediately after reading, and five days after completing the novel. OnFiction described the results:

[The researchers] identified a number of brain networks that got stronger (i.e., the different regions became more closely associated) during the reading period. They also uncovered a network of brain regions that became more robust over the reading period but also appeared to persist after the participants had finished reading. Although this network got weaker over time after the reading period, the correlations between the brain regions in this network were still stronger than those observed prior to the reading period.

Conclusion? Surprise, reading makes you smarter! Or, reading helps your brain make neurological connections more briskly. Those non-adjacent neurons that light up while you’re reading Starship Troopers are potentially responsible for language and comprehension skills (kinda seems obvious, right?), but the researchers aren’t sure yet: the brain remains too dense and mysterious to definitively map. So some of those neurons might be responsible for something totally unrelated to language but related to fiction-processing. Which, for literary scholars, would be awesome to learn about.

Either way: when you read, your brain lights up.
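
For the curious, “functional connectivity” in studies like this is usually operationalized as nothing more exotic than the correlation between regions’ activity over time. A minimal sketch with fake data, assuming the generic definition rather than the Emory team’s exact pipeline:

```python
import numpy as np

def functional_connectivity(timeseries):
    """Correlation matrix across brain regions.

    `timeseries` has shape (n_regions, n_timepoints); entry (i, j) is the
    Pearson correlation between regions i and j. Generic definition only,
    not the Emory study's actual processing pipeline.
    """
    return np.corrcoef(timeseries)

rng = np.random.default_rng(0)
n_timepoints = 200

# Fake data: region 1 is a noisy echo of region 0; region 2 is independent.
region0 = rng.standard_normal(n_timepoints)
region1 = region0 + 0.5 * rng.standard_normal(n_timepoints)
region2 = rng.standard_normal(n_timepoints)

fc = functional_connectivity(np.vstack([region0, region1, region2]))
print(np.round(fc, 2))  # strong 0-1 correlation; both weakly correlated with 2
```

A “network getting stronger,” in these terms, just means entries of this matrix drifting upward between scans.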

The Emory study focuses on neurological responses to a single novel. But earlier this month, OnFiction reported another study that seemed to demonstrate a measurable difference between “literary fiction” and pulp: a difference many literary scholars spent thirty or more years dismissing. Two psychologists at the New School for Social Research gave readers randomly assigned texts – some “highbrow,” others “lowbrow,” others nonfiction – and afterward measured the readers’ ability to empathize with others (aka “Theory of Mind”). Participants who read a highbrow text were consistently more empathetic than participants who read a lowbrow text.

In other words, if you need a ruthless hitman, don’t hire the one reading Anna Karenina.

The results of this study were published in Science and discussed on NPR’s All Things Considered. You can hear the audio clip or read the transcript here (I recommend listening to the audio, to experience the full effect of the Danielle Steel/Louise Erdrich pairing).

Gregory Berns, team leader of the first study, is a neuroscientist who has applied neurological approaches to economics and sociology. Now he has his eyes on literary analysis. But lit scholars are traditionally wary of theories and methods that appear too positivist, empirical, or quantitative. (Celebrity scientists who condescend and prescribe cures for the humanities without really understanding what humanists actually do aren’t helping.) Much of this wariness comes from decades of disciplinary isolation: C.P. Snow’s “two cultures.” Some of it comes from the academic turf wars and ideological disputes of the 1980s. In the late ’90s, something like Franco Moretti’s amazing Literary Lab would’ve had to be developed slowly and with care, so as not to cause too much of a ruckus. Add a dash of quantitative reasoning in one article, use a database in another, publish a groundbreaking polemic, ensure that you already have tenure and academic fame, and now you’re ready to be semi-empirical without overwhelming backlash!

Of course, so much has changed since the early 2000s. The so-called “Digital Humanities” (a term that seems to mean everything and nothing) has made statistics ‘n’ stuff more palatable to humanists, and the pioneering work of scholars like Nicholas Dames has made science less scary. Today, you can’t go to a literature conference without a panel on cognitive science and another on economic theory. The “two cultures” are intermingling, beginning with the social sciences, which overlap with humanist concerns more explicitly than, say, physics does. But the studies featured on OnFiction this week should not be dismissed. They aren’t perfect, but their methodologies offer rigorous and robust approaches to literary experience.

Peering through the looking glass of the criminal justice system

By Patricia Padurean

Walking into Department 5 of the Vista courthouse in California, it is hard to resist the urge to cross yourself. Visitors sit in pew-like rows of seats, looking up at a stained glass representation of the California state seal. When the judge walks in, his robe billowing behind him, everybody stands until His Honor grants us permission to be seated. Some of the penitents in the pews are new, some are visiting, most are regulars. They bow their heads. We are in the inner sanctum of the criminal justice system, but the vernacular is overwhelmingly ecclesiastical.

Generally in a church you do not have the looming presence of armed bailiffs so rotund that they have to throw their weight around just to be able to move. In court, these men and women are unavoidable. But it is precisely the ever-present menace of Bailiffs Tweedledee and Tweedledum that highlights the absurdity of the criminal justice system.

Participating in the criminal justice system, whether by choice or in handcuffs, involves stepping into a plane of existence that operates in parallel with the real world. Legal language and logic do not quite map onto normal human language and logic. At every step of the process you have to absorb a new obstacle that challenges and distorts everything you thought you knew about language and reality.

So let’s say one of your friends is a little quirky and a bit of a night owl and instead of doing the normal thing and watching Netflix until 3am, he does chores instead. One evening he decides to mow the lawn. It’s midnight, it’s dark. The neighbor has a dog whose lot in life is not easy. Rover is deaf and as he scampers across the neighborhood backyards, he does not hear the mower coming for him and he is accidentally run over. Your friend, Mr. Insomnia, is charged under your state’s animal cruelty law with killing a domestic animal.

We all know that a criminal defendant is by law considered innocent until proven guilty. We hear this mantra a lot. But if you have ever watched Nancy Grace in her full splendor, you know that the mantra is often disregarded. With stunning regularity, potential jurors admit to thinking that the defendant must have done something wrong or she would not have been arrested and the gears of bureaucracy would not have ground far enough for her to see the inside of a courtroom. The defendant is then generally guilty until proven specifically guilty of something.

If the criminal offense in question has an element of intent, as virtually all do, the jury is charged with deciding whether or not the defendant intended to commit the crime. The law breaks this down into a two-part test with an objective half and a subjective half.

The subjective test requires the factfinder to determine the defendant’s mental state at the time of the crime. That’s all fine and good; it rings true that any attempt to enter a person’s mind should be called subjective.

The objective test also tries to determine the defendant’s mental state but it does so by imagining a generic “reasonable man” in the same situation as the defendant. If a reasonable person would have foreseen that mowing the lawn at midnight would result in the violent death of the neighbor’s deaf dog, then obviously the defendant, for all his protesting to the contrary, was clearly out to make a dog smoothie.

The objective test, then, supposedly improves upon the subjective test by determining what was going on in someone’s mind by comparing it with what might hypothetically have been going on in someone else’s mind at the time. There is nothing objective about this; in fact the objective test is twice as subjective as the subjective test! And of course, it is possible to imagine many varieties of a reasonable person, all of whom might have foreseen different consequences of deciding to mow the lawn at midnight.

So let’s assume our defendant has been found guilty of intentionally killing his neighbor’s dog. Criminal convictions not only have the force of law, they also have the force of fact. Once you are found guilty of an offense, in future that offense will be referred to as having objectively happened. But often this is a legal fiction. In our case, Mr. Insomnia accidentally shredded a dog. He knows he didn’t intend for it to happen but from this point forward, as far as the criminal justice system is concerned, he is a willful dog killer.

This type of scenario is admittedly quite rare; however, a large proportion of criminal convictions are plea bargains in which the prosecution offers the defendant the opportunity to plead guilty to a lesser charge than the original one. Domestic violence becomes false imprisonment; soliciting a prostitute becomes disturbing the peace. In these cases, any lawyer, judge, jury, or employer who looks at these criminal records will say “Mr. Smith disturbed the peace” when in fact he tried to pick up a hooker, or “Mr. Doe imprisoned his family in their home for a week” when in fact he was raping his wife, or perhaps his wife was hitting their children but reported her husband to the police to cover her tracks. These are all very serious issues and behaviors, yet when the justice system treats reality itself as fungible, it is difficult not to see criminal justice as a game that you are forced to play but can never hope to win.

Like most institutions, the criminal justice system works well some of the time, and it spends the rest of its time simply existing. To make your living in this system you have to either live in a perpetual state of denial or suspended disbelief. If you squint and tilt your head just so to try to make the two parallel worlds meet, you’ll just wind up cross-eyed and deranged. Nothing is real; Godot will never come.

Nazism, Bigamy, and the Problem of Paul de Man

By Seth Studer

It’s time to beat up on Paul de Man again.

And yes, he probably deserves it.

In Monday’s Chronicle of Higher Education, Tom Bartlett revealed the juicy details of Evelyn Barish’s new biography The Double Life of Paul de Man (due out in March 2014). Barish suggests that de Man emigrated from Belgium in 1947 to escape embezzlement charges. He was eventually convicted in absentia of stealing one million Belgian francs (roughly US$300k today) from his own publishing house. Barish also discovered that de Man never held an undergraduate degree, and that in his interactions with friends, family, and colleagues, he was sometimes a total dick.

This in addition to what we already knew: de Man was a deadbeat dad, a temporary bigamist, and the author of several blatantly anti-Semitic articles for a pro-Nazi newspaper during the German occupation of Belgium. The articles were disclosed in 1987, three years after de Man’s death. English professors across the nation responded with horror (or schadenfreude) because, throughout the 1960s and ’70s, Paul de Man led the vanguard that introduced deconstructionist theory into American universities[1]. He was a big deal.

Except he wasn’t.

Unlike his friend and fellow deconstructionist, the Franco-Jewish philosopher Jacques Derrida, de Man focused narrowly on literary language. He argued that literary texts, through their own internal tensions and oppositions, effectively read themselves. (Your copy of Moby Dick is reading itself, even as it sits dusty on your shelf!) Derrida, meanwhile, wrote about everything from semiotics and political philosophy to his pet cat. Derrida’s writing was difficult, but often in a fun way – weird, cheeky, playful.

Also: if you’re a layperson, you’ve probably heard of Derrida. His obit appeared in the New York Times[2]. He was one of several influential French thinkers who emerged alongside 1960s anti-de Gaullist radicalism. You know them by their surnames: Barthes, Foucault, Lacan, Derrida. De Man was the Belgium to their France; he is virtually unknown outside literature departments. But he, more than any other figure, set the hermeneutic agenda for U.S. literature departments in the ’70s and early ’80s.

The news that de Man had authored Nazi propaganda could not have emerged at a worse time for his students (by then major scholars in their own right) or for deconstruction in general[3]. By 1987, cultural studies and politico-ethical concerns were pushing deconstruction out of the humanities. Deconstruction was too apolitical, too textocentric. This was a sideshow in the Culture Wars: as many professors adopted radical politics, ardent deconstructionists appeared reactionary and insular. Meanwhile, deconstruction’s apparent nihilism was being attacked by positivists, scientists, traditionalist lit scholars, and even social conservatives outside the academy. The de Man-Nazi revelation offered proof of what many already suspected: that deconstruction was nefariously closed-off, vapid, repressive, even quasi-totalitarian. By the ’90s, deconstruction had lost its cachet.

The problem is that Paul de Man was so good.

Derrida was unfairly dismissed as an emperor without clothes, but he also reveled in appearing to waltz through the kingdom naked. For a certain type of student (e.g., me), de Man was much more satisfying. De Man explained heady concepts without Derridean playfulness. He wrote heavy, dense, substantive prose. He reads like a serious scholar applying a theory rather than performing or practicing it. My favorite of his essays is “The Rhetoric of Temporality,” an account of how representations of time are the basis of literary language. He describes how well-known devices – allegory, symbolism, irony – interact with time. He slowly develops an argument that slippage occurs between allegory and symbolism in Romantic poetry, despite the Romantics’ best effort to keep them separate. On this premise, he introduces two “modes” of representing time in literature: “allegory” (which partially includes symbolism) and “irony.”

Toward the end of the essay, de Man writes:

The dialectical play between the two modes [allegory and irony]…make up what is called literary history.

Deconstructionist jargon like “play” aside, de Man’s declaration is downright old-fashioned. Here is an account of literary history premised on literary analysis. When I read this in graduate school, it felt ballsy and refreshing. No hedging, no contextualizing, no whining, no kidding around, just straight-up confidence in his own system: “this is literary history.” I was floored.

So as deconstructionists went, de Man was a straight shooter, on the page if not in his life (perhaps he viewed his two wives as “two modes of dialectical play”). Unlike Derrida or even Barthes, de Man wasn’t messing with me, wasn’t trying to fool or trick me. Even if he believed (along with his intellectual kin) that “everything was a text,” he generally confined himself to literary or rhetorical analyses. I continue to find him useful, which I can’t say about most of his contemporaries. De Man’s work represented deconstruction at its best.

But try as I may, I can’t help but detect a bit of the Nazi in it all: the exegetical totality, the confusion (or manipulation) of text and meaning, the all-encompassing instability. And yeah, the biography.

It matters little whether a good physicist was a Nazi, because Nazism probably didn’t contaminate his work. You can kill the Nazi physicist or hire the Nazi physicist, but the physics itself will contain no traces of Nazism. This is slightly less true of a Nazi biologist, who may have covertly adopted Nazi theories of race. For a philosopher, however, the possibility of cross-contamination is so great as to warrant quarantine. Indignant defenders of de Man who separate his scholarship from his anti-Semitic writings are denying this obvious reality. (Derrida’s defense of de Man was better than most because it allowed for cross-contamination[4].)

De Man was a crook and a cheat and a Nazi collaborator. For most literary scholars today, de Man is interesting but irrelevant: deconstruction happened thirty years ago. It had a good run and probably outlasted its expiration date. Meanwhile, those who, like me, find de Man’s insights useful can argue that his political beliefs are functionally irrelevant to his scholarly work. A Chinese wall exists between the Nazism and “The Rhetoric of Temporality.” To reject or to deny? Neither option is good, and Paul de Man isn’t going anywhere, as Barish’s biography proves.

Literary scholars don’t sever Barthes or Foucault from their social, historical, and ideological roots. De Man should be no exception. It’s naive to believe that, before de Man, the humanities weren’t already poisoned by the ugliest ideologies, but it’s impossible to ignore his collaboration with Nazism. So what would it mean to accept both the scholarship and the potential evil attached to it? To not only refuse to let ourselves off the hook, but to actively get on the hook? De Man offered a compelling and useful explanation of literary language, and he also used the written word to collaborate with Nazis. Does deconstruction have Nazi roots? I don’t trust anyone who says “no” reflexively.

C’mon, let’s not be dismissive or defensive or squeamish! Let’s not be afraid of a little blood on our hands!

———

[1] You might think you know what “deconstruction” is, and you’re probably wrong. But you’re also probably correct, more or less. From a literary standpoint, deconstruction holds that a poem (or whatever) consists of oppositions that differ and defer to each other in a process Derrida called “play.” This play both creates and subverts the meaning of the poem (or whatever). For de Man, this meant that a poem (or whatever) is self-interpreting.

[2] Derrida’s obituary was a minor literary event in humanities programs. I’ve seen it assigned on English syllabi, as an instance of productive misreading or something.

[3] My favorite student of de Man is the late Barbara Johnson, who applied his theories of literary language with intelligence and clarity to topics ranging from Melville’s Billy Budd to the rhetoric of abortion. Her 1994 book The Wake of Deconstruction describes the de Man scandal.

[4] Derrida, who as a Jewish child was persecuted by the Vichy French government, defended his friend in typical Derridean fashion: he tweaked the anti-Semitic language and found differing oppositions. The full defense is not available online as far as I can tell, but its substance can be gleaned from Jon Wiener’s intelligent, and disapproving, analysis.

Google Autocomplete and Global Sexism

By Amanda Grigg

UN Women has a new campaign that uses Google autocomplete to demonstrate the scope of sexism worldwide. Ads in the series place autocomplete search results for queries like “women cannot” and “women should not” over close-ups of a diverse group of women. According to creator Christopher Hunt, “The adverts show the results of genuine searches, highlighting popular opinions across the world wide web” (more on whether Hunt is right about this below).

Autocomplete results are known to vary by location, which inspired me to do some quick Google searching of my own (I also thought it was time to get the men involved), and I found a little something for the optimists/male breastfeeding proponents:

womencan

Of course I also found this (thanks patriarchy/E.L. James):

whatwomenwant

and found out that the male version of this:

women should not

is this:

men should not

Fellow Jilter (Jilted?) Graham noted that the search results of autocomplete suggestions don’t always perfectly match the sentiment of the autocomplete. So, an autocomplete of “women shouldn’t vote” for a “women shouldn’t” search conducted in New York might turn up a couple of articles about the women’s suffrage movement (my search turned up this) in addition to more recent coverage critiquing someone who opposes women’s right to vote (here and here), and not turn up much in the way of meaningful opposition to women voting. I don’t think this makes the campaign any less powerful or accurate as a reflection of sexism, for two reasons. First, the campaign is global, and we can imagine that there are places where correspondingly sexist results would turn up. Second, as far as I can tell, autocomplete is based on popular searches, not popular content, so regardless of what the search turns up, the suggestions reflect a large group of people searching for those (sexist or anti-shortsist) terms. Of course we can’t be sure of what people wanted out of their search – they could have been declaring a personal opinion or searching for arguments against women serving in combat for a term paper. So I’ll give some ground on whether all autocomplete results are direct evidence of sexism and maintain that there is ample evidence elsewhere that sexism remains a global issue.
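
That second point, suggestions ranked by the popularity of past searches rather than by the content those searches return, is easy to sketch. A toy version with an invented query log (real autocomplete is vastly more complicated; this shows only the frequency-over-content idea):

```python
from collections import Counter

# Hypothetical query log. Autocomplete of this kind ranks by how often
# people searched a phrase, not by what the result pages actually say.
query_log = [
    "women can vote", "women can vote", "women can fly",
    "women cannot drive", "women cannot drive", "women cannot drive",
]
counts = Counter(query_log)

def autocomplete(prefix, k=3):
    """Return the k most frequent logged queries starting with `prefix`."""
    matches = Counter({q: n for q, n in counts.items() if q.startswith(prefix)})
    return [q for q, _ in matches.most_common(k)]

print(autocomplete("women can"))
# ['women cannot drive', 'women can vote', 'women can fly']
```

Note that the suggestions say nothing about what any of those searches would have returned, which is the campaign’s point: the autocomplete box is a mirror of searchers, not of search results.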


The Politics of No Future

By Kindred Winecoff

When Fred Armisen paid a sort of tribute to Margaret Thatcher this spring, most took it as straight satire. Perhaps it was. But there is an argument to be made that part of punk rock’s driving force was rooted in the same dissatisfaction with postwar political economies that led to the Reagan-Thatcher neoliberal moment. The 1970s were thus a decade of economic as well as cultural transition, and as the decade wound down things were not going well. England was on an IMF bailout program and suffered through the Winter of Discontent. As part of this, trash collectors went on strike, so London was literally covered in garbage. It must have made a sort of sense to wear garbage bags as clothing in response.

Trash

In New York (which had effectively gone bankrupt in 1975) crime was near its peak, entire neighborhoods — and practically a whole borough — had burned to the ground, and the subway looked like something out of, well, the bleak movies a coked-up Martin Scorsese was making at the time. The Keynesian consensus had not anticipated stagflation; the postwar compromise between capital and labor had calcified.

The politics of the ’77 punk movement is assumed to be of the left. This is questionable. In the 1980s the punk and hardcore communities were solidly leftist in their opposition to the Moral Majority and other neoconservative forces, but most of the 1970s groups had no politics, or their politics were obscure. In New York, punk was an extension of the avant-garde art and fashion communities, and London (at first) simply mimicked New York. The first signs of politics were anarchist when not nihilist. ’60s collectivism was as worthy of mockery as ’50s individualism. The mantra of “God Save the Queen” is not republican but simple defeatism: “no future”. This was the Blank Generation. They had no ties to the labor movement. They issued nothing like a Port Huron Statement.

The Clash summed up the situation in “Career Opportunities”: there were none worth taking. The British economy was being crucified by entrenched corporations, corrupted trade organizations, and a hapless Labour Party. “Career Opportunities” was written in 1977, two years before Thatcher came to power but describing a social condition without which she never could have gained authority. The Clash saw one way out: a “White Riot” to match the civil unrest exemplified by black activists. (The Sex Pistols may have wanted anarchy in the UK, but the Clash proposed a mechanism. To the extent that punk had clear politics it was anti-racist. To the extent it had potential as a movement it was anti-statist.)

Some kind of structural transformation was needed, and had begun earlier in the decade with the final abandonment of the Bretton Woods system of pegged exchange rates. The entire edifice of postwar capitalism was evolving, and the frictions were clear. Even on the left there was some suspicion, as Christopher Hitchens put it, that on some matters Thatcher might just have had a point. When the face of American labor is Jimmy Hoffa there is not much hope in that direction.

Johnny Ramone was famously conservative, and even claimed that punk itself was fundamentally conservative. This is wrong. To the extent that punk politics have ever been articulated they are oppositionist[1]. In the late-1970s oppositionism meant pushing back against the stagnant institutions and slogans of postwar social democracy. These were controlled, at the time, by the parties of labor. From that fixed point only two roads extended: anti-democratic communism or neoliberalism, and by then even the communists were beginning to liberalize. Few punks were Thatcherites, but Thatcher was the only one who would provide the sort of destruction — the massive societal reorganization — that they were asking for.

That is, punk as a cultural force and neoliberalism as a political force are fixed in history. They were responses to the times, and each articulated a key insight: the contradictions of democratic Keynesianism had come to a breaking point. That punk emerged in the several years before Reagan and Thatcher came to power is not an historical accident; it was a warning. That neither punk nor neoliberalism exist as identifiable movements — rather than nods to fashion — in the U.S. and U.K. today demonstrates their contingency[2].

The fact that punk rapidly developed both left and right political factions — little-s soviets and Nazi punks, to give two examples — that were well beyond one standard deviation from the mean in response to Reagan and Thatcher is indicative of this. Some first-wave punk was assimilated into neoliberal political economies — we usually call it “new wave” for no clear reason — while others remained deeply oppositionist. During the 1980s the latter group included much of the emergent hardcore, the working class (often-nationalist) oi segment, and the avant-literate (Dead Kennedys, Minutemen); the former was on MTV.

Punk remains oppositionist but the political ideologies have shifted. Labor parties are the conservatives once again. Once again, the revolutionaries are on the right. The contradictions of the previous economic development model — the Reagan-Thatcher model — were made quite clear, but our political economy has so far not adapted. The rudderless nature of Occupy is illustrative of this, as is the return of vague Utopianism. As is the simple fact that in the U.S. and U.K. there is no cultural force equivalent to first-wave punk.

This could be temporary, but there are signs that it may not be. Mature political economies have gotten quite good at muddling through their contradictions; we’re six years on from the start of the crisis and we’re still muddling. Political demands are of the form “fix it!” rather than “destroy it!” Marginal tweaks appear more likely than systemic overhauls: revolution is incomprehensible, and anarchism is similarly unattractive. It’s not very fashionable but that’s the correct conclusion. The world is much better in 2013 than it was in 1977: we don’t have to wear garbage bags as clothes. And we have no need for another Thatcher.

[1] Conservatism has periodically worn this “stand[ing] athwart history, yelling ‘Stop!'” mask, but Corey Robin’s definition of conservatism as fundamentally counterrevolutionary is, I think, a much more useful characterization of the core impulse than oppositionism.

[2] “Neoliberalism” is a descriptor that lost its usefulness in the U.S. and U.K. well before Third Way liberalism adopted the Reagan-Thatcher platform more or less entirely. Any contemporary use in reference to Anglo-American political economy is usually an attempt at bullying.