Relationship of Command

By Adam Elkus

Over at War on the Rocks, I have a column that mixes complexity theory and recent counterterrorism analysis on problems analyzing al-Qaeda’s political-military command structure. I was struck when writing it, though, by how much the repeated application of “decentralized” to al-Qaeda results from a systematic selection bias in strategic studies regarding command and organization.

When we think about what the “right” style of command is, we assume a pyramid-like structure with a commander and successive building blocks of subordinate structures that execute orders with varying degrees of automaticity. What degree of autonomy subordinates may expect is often determined by context. Because of the historical difficulties of naval communication, and because naval technology and organization inherently lead to federation, a naval command system differs fundamentally from a ground-pounder command system. But the assumption that the system is coherent is often smuggled in through the idea that “unity of command” is a principle of war.

However, unity of command often assumes that the subordinate is at least theoretically obligated to accept and obey orders. If the subordinate, for example, is a noble with control over his own finances and his own pool of manpower, he must be incentivized in some shape or form to obey. Defection of entire fighting units is a realistic risk. In the Chinese novel Romance of the Three Kingdoms, a commander who felt emotionally slighted could defect with his entire formation. Nor is this merely a historical problem: the Afghan Taliban rapidly collapsed in 2001 when it became clear that the US and the Northern Alliance possessed a preponderance of power. Unity of command is a collective action problem, and the coherence of any political-military organization is in theory stochastic.

At the most individual level, it is also not rational to fight as a soldier to begin with. You risk your life, and often receive no special payoff. In military history, the coherence of military organizations at the most atomic level is often a function of altruistic punishment or of social and material rewards. One of the most famous examples of the former is the Edwardian novel The Four Feathers, in which a British imperial officer who resigns his commission rather than deploy to the Sudan is shamed by his bride-to-be and compatriots.

In the quantitative civil war field, these considerations are not unusual. But they ought to be applied more widely to the study of strategic theory and history.

Democracy Is Not Magic

By Kindred Winecoff

Besir Ceka, a former grad student colleague of two Jilters, has a post at the European Politics and Policy blog of the London School of Economics and Political Science describing some of his research. The key thing is this:

However what’s most fascinating is that, when compared to national governments, the EU is the more trusted level of government by Europeans, and by a long stretch (see Figure 3). This has been the case for a while and it is a fact which has largely been ignored by Eurosceptics. Despite much debate about the democratic deficit in the EU, the legitimacy deficit of national governments, as measured by the level of trust that citizens put in them, is far more acute than that of the EU. Over the last decade, the “trustworthiness gap” between the EU and national governments has been as high as 25 per cent.

Bold added.

For a long time the bulk of political scientists, theorists, and many in the commentariat have associated “democracy” with “good governance”. There have always been major problems with the causal mechanism, but the European Union is a salient example: its least-democratic governance structures also enjoy the most public trust. Meanwhile, in the US, public trust in the government is near its all-time low of 17%. What does that mean? If democracy aggregates the sum of private wills and the sum of private wills doesn’t care much for democratic outcomes, then… well, you tell me.

For a somewhat different (and even more skeptical) take on this dynamic see Henry Farrell in Aeon a few months back.

Obromacare or MotherbroXXX

By Amanda Grigg

Last week everyone from The Atlantic to Buzzfeed covered a new Colorado Obamacare campaign targeting “bros.” The campaign can be found at gotinsurancecolorado.org and is part of the Thanks Obamacare campaign run by ProgressNow Colorado and the Colorado Consumer Health Initiative. You can also reach the site via doyougotinsurance.com, which is, clearly, a more bro-friendly URL.

got insurance, bro?

Of course the Colorado campaign’s real aim is to get the attention of healthy, uninsured young people, a group that pretty much everyone agrees is essential to the success of the Affordable Care Act. Because they rarely use medical services, these “young invincibles” are cheap to insure, and thus their enrollment is necessary to offset the costs of older, less healthy patients. It just so happens that most of the healthy, uninsured young people (57%) are male. This explains both Obromacare and the Koch brothers’ recent attempts to get bros to “opt-out” via events at campus bars offering free beer and iPad drawings. Unfortunately for proponents of the ACA, healthy young men without pre-existing conditions are generally thought to benefit the least from Obamacare, which makes them both vital and possibly resistant to health care reform. As a result, we get to watch as everyone and their mother (literally) bro-down.

Efforts to promote the health care law among young invincibles have also targeted mothers. Ads on Facebook and recipe websites admonish, “Mom knows best, get insurance!” and cheeky AARP e-cards read “Get health insurance so I can stop pestering you to sign up and start pestering you to get married.” As Democratic pollster Celinda Lake explained to the Washington Post, “it will be the moms of America who are going to decide if their families get coverage…They will decide and then insist their children and husbands sign up.” Polling backs Lake up – many young uninsured people, and particularly young men, cite their mothers as their most trusted source of information about the health care law.

So, a big part of the explanation for these ads can be found in policy and the fact that high enrollment among healthy, uninsured young men is necessary to make Obamacare work. The campaigns are highly gendered because they’re targeting very specific, gendered audiences (mothers and young men). And they’re a little cheesy because bureaucrats and the Koch brothers trying to identify with the cool kids is a little like your parents trying to talk to you about Miley Cyrus. Or your great aunt publicly chastising you for posting bridesthrowingcats.com on Facebook because “who would do that to a cat?”

More troubling than the seemingly inevitable pandering to bros is the misleading use of healthy young men as an exemplar of the harms of health care reform. Because of their unique relationship to the ACA, healthy 25-year-old men have become the darlings of health care reform critics, who have conveniently held them up as a (purportedly) randomly chosen example that illustrates how the health care law works and why it will raise rates (Jonathan Chait does a good job of addressing the problems with this tactic). Put simply, the healthy 25-year-old male example is a poor one on which to base arguments about the ACA generally, because it’s one of very few cases in which individuals may see rates go up, and because the group makes up a small portion of the total population. And as Sarah Kliff explains, the structure of the ACA makes it difficult to generalize even about this relatively small, homogenous group. Most notably, the example is misleading because it’s almost inevitable that in his lifetime this bro will benefit from Obamacare both directly and indirectly: if he gets sick, becomes poor, lives past 25, or cares about anyone who is or becomes sick. And let’s not even get started on how straight men benefit from the birth control mandate.

To Freelance or Not

By Graham Peterson

Go look up “freelance” in the OED. The etymology is (surprise!) militaristic. Literally, he who freelanced was a lance-for-hire, a mercenary. What does this connote in a modern context? An employee of a modern company (note the militaristic origin of that phrase: company) demonstrates her loyalty to The Group — whereas a freelancer is merely a hired gun.

Interestingly, where do we see the phrase freelancer most often? The arts. Freelance graphic designer. Freelance journalist. The arts were once almost solely sponsored by, and disseminated for the purposes of, the religious and political aristocracy. So, in markets the Clerisy has traditionally held onto, aristocratic language is common (art gallery owners apparently refer to their collection of artists as their “stable” — blessed is ye who keeps thine agrarian estate in order).

Modern evolution of this rhetoric brings us to the case of the new Independent Industries. Independent films. Independent music. They’re not “independent” — they’re just small businesses. The only way these markets differ materially from other businesses, and from traditional film and music firms, is that they leverage cheapened technology to self-publish and cut out intermediaries that were able to extract rents in the early and mid 20th century. But the rhetoric of rebellion is hugely important when it comes to promoting deviance and entrepreneurship in a society that hopes to benefit from market innovation and economic growth.

Throughout this history, we see how important it is for agents to redefine the rhetoric of arts-industries in order to redefine the appropriateness and value of making art not for the consumption of elites but by the people, for the people. This is a good thing.

But the amount of leftover militaristic language that agents still use to convey their ethical frames to one another in the marketplace is not. Consumers and business people alike still talk about “getting a steal, making a killing, cornering the market.” Without a new vocabulary reflecting that Trade is not War (whatever people’s ye olde zero-sum intuitions tell them), tastes for markets will remain precarious, and with them economic growth and increasing welfare.

The Persistence of American Financial Power

By Kindred Winecoff

The Financial Times asks whether American financial prominence will endure. These articles recur with far too much frequency, in particular every time there’s a policy impasse. I have an essay (with the excellent Sarah Bauerle Danzman) on that topic in the most recent issue of Symposium explaining why the answer is “yes”. One key bit:

These [conventional] assessments see power as a result of the internal attributes of national economies: large economies with attractive financial sectors have power, while weaker ones do not. Accordingly, the U.S. decline in the share of global trade and income, and its domestic financial instability, should diminish its influence. But this focus fails to consider the ways in which the global financial network is, in fact, a complex and adaptive system. Power within this system does not depend solely on domestic attributes, but on the distribution of financial relationships that exists globally. In other words, the most well-connected economies, not just the biggest, are the most powerful. By extension, change within this structure does not follow a linear process, and economies that are initially more advantaged will continue to grow as the system develops.

The difference between these two approaches is significant. When we conceptualize the international financial system as a network, we see that the U.S. has become more central since 2007, not less. Rather than shift from West-to-East, global financial actors have responded to crisis by reorganizing around American capital to a remarkable extent. This is partially due to proactive responses to the crisis by policymakers such as the Federal Reserve, but it is also the result of factors outside the U.S. Above all, American capital markets remain attractive because complex networks contain strong path dependencies, which reinforce the core position of prominent countries while keeping potential challengers in the periphery. That is to say, policymakers and market players were limited in the decisions they could take because of factors that had already been locked in. As a result, the structure of the global financial system keeps the U.S. at the core and will continue to do so unless the entire network is fragmented, as it was during the 1930s when Great Britain lost its dominance.

Read the whole thing.
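The excerpt’s core claim, that the best-connected economies rather than simply the biggest ones hold power in a network, can be illustrated with a toy sketch. Everything here is invented for illustration (the graph, the node names); it is not the authors’ data or model, just standard eigenvector centrality computed by power iteration:

```python
# Toy network, invented for illustration. Node "E" has the most direct
# links (five), but node "A" sits inside a densely interconnected
# cluster, so its neighbors are themselves well-connected.
graph = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C", "E"],
    "E": ["D", "F", "G", "H", "I"],
    "F": ["E"], "G": ["E"], "H": ["E"], "I": ["E"],
}

# Power iteration for eigenvector centrality: a node's score is the
# normalized sum of its neighbors' scores, iterated to convergence.
score = {n: 1.0 for n in graph}
for _ in range(200):
    score = {n: sum(score[m] for m in graph[n]) for n in graph}
    total = sum(score.values())
    score = {n: v / total for n, v in score.items()}

# "E" is bigger by raw degree, but "A" ends up more central.
assert len(graph["E"]) > len(graph["A"])
assert score["A"] > score["E"]
```

The point of the sketch is the non-linearity the essay describes: centrality compounds through your neighbors’ centrality, which is one way path dependence keeps an already well-connected core on top.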

For those not familiar, Symposium is an exciting new magazine. It has aspirations to fill part of the void left by the dearly missed Lingua Franca. It’s an excellent way to publicize research in a form palatable to the public. Working with them was an excellent experience, and I’d strongly recommend folks subscribe, as well as consider Symposium as a future outlet for their work.

This Is Your Brain on Books

By Seth Studer

Earlier this week, OnFiction published the results of a recent study on the biological effects of reading fiction. Researchers at Emory University used MRI scanners to track the brain activity of nineteen participants, all reading the same novel (Robert Harris’s historical thriller Pompeii). The researchers focused on “functional connectivity,” the degree to which activity in one region of the brain prompts or correlates with activity in another region. Basically, your brain’s ability to talk to itself. Participants’ brains were scanned while reading, immediately after reading, and five days after completing the novel. OnFiction described the results:

[The researchers] identified a number of brain networks that got stronger (i.e., the different regions became more closely associated) during the reading period. They also uncovered a network of brain regions that became more robust over the reading period but also appeared to persist after the participants had finished reading. Although this network got weaker over time after the reading period, the correlations between the brain regions in this network were still stronger than those observed prior to the reading period.

Conclusion? Surprise, reading makes you smarter! Or, reading helps your brain make neurological connections more briskly. Those non-adjacent neurons that light up while you’re reading Starship Troopers are potentially responsible for language and comprehension skills (kinda seems obvious, right?), but the researchers aren’t sure yet: the brain remains too dense and mysterious to definitively map. So some of those neurons might be responsible for something totally unrelated to language but related to fiction-processing. Which, for literary scholars, would be awesome to learn about.

Either way: when you read, your brain lights up.
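In rough terms, “functional connectivity” is just the correlation between two regions’ activity over time. A purely illustrative sketch with simulated signals (the region names and noise levels are invented, not the Emory data):

```python
# Illustrative only: functional connectivity as Pearson correlation
# between simulated activity time series for pairs of "regions".
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

region1 = [random.gauss(0, 1) for _ in range(500)]
# "Coupled" region: tracks region1 plus noise -> strong connectivity.
region2 = [a + random.gauss(0, 0.5) for a in region1]
# Independent region -> weak connectivity.
region3 = [random.gauss(0, 1) for _ in range(500)]

print(pearson(region1, region2))  # high: the regions move together
print(pearson(region1, region3))  # near zero: no coupling
```

The study’s “networks that got stronger” are, loosely, sets of regions whose pairwise correlations of this kind increased during and after the reading period.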

The Emory study focuses on neurological responses to a single novel. But earlier this month, OnFiction reported another study that seemed to demonstrate a measurable difference between “literary fiction” and pulp: a difference many literary scholars spent thirty or more years dismissing. Two psychologists at the New School for Social Research gave readers randomly assigned texts – some “highbrow,” others “lowbrow,” others nonfiction – and afterward measured the readers’ ability to empathize with others (aka “Theory of Mind”). Participants who read a highbrow text were consistently more empathetic than participants who read a lowbrow text.

In other words, if you need a ruthless hitman, don’t hire the one reading Anna Karenina.

The results of this study were published in Science and discussed on NPR’s All Things Considered. You can hear the audio clip or read the transcript here (I recommend listening to the audio, to experience the full effect of the Danielle Steel/Louise Erdrich pairing).

Gregory Berns, team leader of the first study, is a neuroscientist who has applied neurological approaches to economics and sociology. Now he has his eyes on literary analysis. But lit scholars are traditionally wary of theories and methods that appear too positivist, empirical, or quantitative. (Celebrity scientists who condescend and prescribe cures for the humanities without really understanding what humanists actually do aren’t helping.) Much of this wariness comes from decades of disciplinary isolation: C.P. Snow’s “two cultures.” Some of it comes from the academic turf wars and ideological disputes of the 1980s. In the late ’90s, something like Franco Moretti’s amazing Literary Lab would’ve had to be developed slowly and with care, so as not to cause too much of a ruckus. Add a dash of quantitative reasoning in one article, use a database in another, publish a groundbreaking polemic, ensure that you already have tenure and academic fame, and now you’re ready to be semi-empirical without overwhelming backlash!

Of course, so much has changed since the early 2000s. The so-called “Digital Humanities” (a term that seems to mean everything and nothing) has made statistics ‘n’ stuff more palatable to humanists, and the pioneering work of scholars like Nicholas Dames has made science less scary. Today, you can’t go to a literature conference without a panel on cognitive science and another on economic theory. The “two cultures” are intermingling, beginning with the social sciences, which overlap with humanist concerns more explicitly than, say, physics does. But the studies featured on OnFiction this week should not be dismissed. They aren’t perfect, but their methodologies offer rigorous and robust approaches to literary experience.

Peering through the looking glass of the criminal justice system

By Patricia Padurean

Walking into Department 5 of the Vista courthouse in California, it is hard to resist the urge to cross yourself. Visitors sit in pew-like rows of seats, looking up at a stained glass representation of the California state seal. When the judge walks in, his robe billowing behind him, everybody stands until His Honor grants us permission to be seated. Some of the penitents in the pews are new, some are visiting, most are regulars. They bow their heads. We are in the inner sanctum of the criminal justice system, but the vernacular is overwhelmingly ecclesiastical.

Generally in a church you do not have the looming presence of armed bailiffs so rotund that they have to throw their weight around just to be able to move. In court, these men and women are unavoidable. But it is precisely the ever-present menace of Bailiffs Tweedledee and Tweedledum that highlights the absurdity of the criminal justice system.

Participating in the criminal justice system, whether by choice or in handcuffs, involves stepping into a plane of existence that operates in parallel with the real world. Legal language and logic do not quite map onto normal human language and logic. At every step of the process you have to absorb a new obstacle that challenges and distorts everything you thought you knew about language and reality.

So let’s say one of your friends is a little quirky and a bit of a night owl and instead of doing the normal thing and watching Netflix until 3am, he does chores instead. One evening he decides to mow the lawn. It’s midnight, it’s dark. The neighbor has a dog whose lot in life is not easy. Rover is deaf and as he scampers across the neighborhood backyards, he does not hear the mower coming for him and he is accidentally run over. Your friend, Mr. Insomnia, is charged under your state’s animal cruelty law with killing a domestic animal.

We all know that a criminal defendant is by law considered innocent until proven guilty. We hear this mantra a lot. But if you have ever watched Nancy Grace in her full splendor, you know that the mantra is often disregarded. With stunning regularity, potential jurors admit to thinking that the defendant must have done something wrong or she would not have been arrested and the gears of bureaucracy would not have ground far enough for her to see the inside of a courtroom. The defendant is then generally guilty until proven specifically guilty of something.

If the criminal offense in question has an element of intent, as most do, the jury is charged with deciding whether or not the defendant intended to commit the crime. The law breaks this down into a two-part test with an objective half and a subjective half.

The subjective test requires the factfinder to determine the defendant’s mental state at the time of the crime. That’s all well and good; it rings true that any attempt to enter a person’s mind should be called subjective.

The objective test also tries to determine the defendant’s mental state but it does so by imagining a generic “reasonable man” in the same situation as the defendant. If a reasonable person would have foreseen that mowing the lawn at midnight would result in the violent death of the neighbor’s deaf dog, then obviously the defendant, for all his protesting to the contrary, was clearly out to make a dog smoothie.

The objective test, then, supposedly improves upon the subjective test by determining what was going on in someone’s mind by comparing it with what might hypothetically have been going on in someone else’s mind at the time. There is nothing objective about this; in fact the objective test is twice as subjective as the subjective test! And of course, it is possible to imagine many varieties of a reasonable person, all of whom might have foreseen different consequences of deciding to mow the lawn at midnight.

So let’s assume our defendant has been found guilty of intentionally killing his neighbor’s dog. Criminal convictions not only have the force of law, they also have the force of fact. Once you are found guilty of an offense, in future that offense will be referred to as having objectively happened. But often this is a legal fiction. In our case, Mr. Insomnia accidentally shredded a dog. He knows he didn’t intend for it to happen but from this point forward, as far as the criminal justice system is concerned, he is a willful dog killer.

This type of scenario is admittedly quite rare; however, a large proportion of criminal convictions are plea bargains, in which the prosecution offers the defendant the opportunity to plead guilty to a lesser charge than the one originally filed. Domestic violence becomes false imprisonment; soliciting a prostitute becomes disturbing the peace. In these cases, any lawyer, judge, jury, or employer who looks at the criminal record will say “Mr. Smith disturbed the peace” when in fact he tried to pick up a hooker, or “Mr. Doe imprisoned his family in their home for a week” when in fact he was raping his wife, or when perhaps his wife was hitting their children but reported her husband to the police to cover her tracks. These are all very serious issues and behaviors, yet when the justice system treats reality itself as fungible, it is difficult not to see criminal justice as a game that you are forced to play but can never hope to win.

Like most institutions, the criminal justice system works well some of the time, and it spends the rest of its time simply existing. To make your living in this system you have to either live in a perpetual state of denial or suspended disbelief. If you squint and tilt your head just so to try to make the two parallel worlds meet, you’ll just wind up cross-eyed and deranged. Nothing is real; Godot will never come.