You don’t hijack the electoral college; the electoral college hijacks you

By Seth Studer

Mother Jones reported today the answer to a headline it published two years ago – “Who’s Paying for the GOP’s Plan to Hijack the 2012 Election?” The answer? The Koch Brothers! Once again, the plot leads back to the Koch Brothers – the Moriarty to the activist left’s, uh, Sherlock? (Better analogy: the Mr. Burns to the activist left’s Lisa). But the plot takes a rather circuitous route through various nonprofits and a few election-year pop-up organizations. The link to the Kochs is not as clear-cut as MJ’s headlines suggest. And even when it is, the indictment isn’t exactly stirring: “Charles and David Koch,” MJ reporter Mariah Blake writes, “footed at least some of the bill.”

The backstory: in mid-2011, conservative groups invested roughly $300,000 in pressuring the Republican-controlled Wisconsin and Pennsylvania legislatures to join what I’ll call the Divide-by-District Club.

Here are some facts about the Divide-by-District (DBD) Club:

Membership requirements: be a U.S. state.

Current members: Maine and Nebraska.

Club Rules: you must award your electoral votes by congressional district, not as a winner-take-all state. Each state has one electoral vote per district (plus two extra votes for those wise old senators in Washington – in electoral math, one wise old senator = more than 700,000 normal people). Each district awards its single vote to the presidential candidate who won that district; the two senatorial votes go to the statewide popular-vote winner. Every ten years, the census gives your state the chance to acquire or lose districts.

Club Benefits: your state basically owns its districts, so the majority party in your state legislature controls nearly all decisions regarding districts, including their boundaries and composition. In 2011, Republicans were the majority party in Wisconsin and Pennsylvania.

Associates: other states are essentially members of the DBD Club, not because they chose to be but because their respective populations would barely constitute a medium-sized city. For them, “state” and “congressional district” are one and the same: Alaska, Delaware, Montana, North Dakota, South Dakota, Vermont, and Wyoming. Rhode Island will almost certainly join the Club after the next census; Hawaii might, too. (The District of Columbia’s membership status is…difficult to determine.)


So Maine and Nebraska divide their electoral votes by district. Barack Obama made history in 2008 as the first modern candidate to electorally divide a state. He won Nebraska’s 2nd congressional district (which is essentially coterminous with Omaha and its suburbs), despite getting walloped by John McCain in the rest of the Cornhusker State (41% to 56%). But beyond an election nerd’s delight watching Omaha – a tiny area on the state’s eastern edge – “split away” from the rest of Nebraska, the 2nd district meant squat. That extra point didn’t push health care reform through Congress or kill Osama bin Laden (or, if you like, heartlessly murder four people in Benghazi). It only meant that Omaha is, like all cities, more liberal than the state it inhabits.

Nebraska currently has five electoral votes, three of which can be split between candidates. Maine has four, two to split. Wisconsin, meanwhile, has 10. Pennsylvania, 20. If the Kochs’ plot had succeeded, those Republican legislators would have given Mitt Romney three of Wisconsin’s 10 votes and 14 of Pennsylvania’s 20 votes. That’s 17 extra electoral votes for Romney.

Obama smiles, for good reason, even after Romney defeated him in 14 of Pennsylvania’s 18 districts. Districts don’t count.

BTW: how did Romney win 14 of Pennsylvania’s 18 districts but lose the state’s popular vote? Three of Obama’s four (only four!) district victories were massive landslides (he captured nearly 90% in one district and still underperformed his 2008 tallies). Romney’s wins were narrow by comparison (usually below 55%). So despite winning a meager 22% of the districts, Obama got more votes. Under the DBD system, however, Obama would have been creamed – Romney’s victory in Pennsylvania would have been a landslide.

So what does it all add up to? If the Koch Brothers’ alleged surrogates had been successful in Wisconsin and Pennsylvania, Obama would have won 315 (instead of 332) electoral votes on November 6. Romney would have won a respectable 223 (instead of 206), losing the presidency and returning to life in his father’s shadow.
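The arithmetic above can be sketched in a few lines. This is a toy calculation, not an official allocation method; the district counts and statewide winners are the article’s own figures, and the `dbd_votes` helper is hypothetical.

```python
# Electoral votes under the Divide-by-District (DBD) rule:
# one vote per congressional district won, plus two at-large
# votes to the statewide popular-vote winner.

def dbd_votes(districts_won, won_statewide):
    """Votes a candidate receives from one state under DBD rules."""
    at_large = 2 if won_statewide else 0
    return districts_won + at_large

# 2012 figures from the article: Romney lost both states statewide
# but won 3 of Wisconsin's 8 districts and 14 of Pennsylvania's 18.
romney_gain = dbd_votes(3, False) + dbd_votes(14, False)
print(romney_gain)  # 17 -- the extra votes Romney would have gained

# Shift those 17 votes from the actual 2012 totals:
obama, romney = 332 - romney_gain, 206 + romney_gain
print(obama, romney)  # 315 223
```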

In other words, the Koch Brothers would have changed history forever.

No seriously – they might have. And they might have swung the election to Romney. But eventually, invariably, they’d regret their decision.

[ASIDE: $300,000 is essentially pennies to the Koch Brothers, so to view this scheme as a major investment is misguided; if they were actually involved, I don’t think they took the idea very seriously. Another Mother Jones article referred to all these Republican plans to split their states as “shenanigans,” not serious talk.]

First, the Koch Brothers were funding DBD legislation in Wisconsin and Pennsylvania way back in 2011, when an Obama defeat seemed possible, and a narrow Obama victory seemed even more possible. In such situations, 17 electoral votes can mean a lot. (The majority of states have fewer than 10 electoral votes.) If the Kochs had successfully changed election laws in Wisconsin and Pennsylvania, they would have essentially given Romney a superstate with the population of North Carolina and more electoral votes than Michigan or Georgia. (In case you don’t know, those are big states.)

The 2012 election: one of those maps that CORRECTLY indicates "Democrat" with red and "Republican" with blue - the colors that historically indicate "left" and "right" in democratic nations.
The 2012 election: one of those maps that CORRECTLY indicates “Democrat” with red and “Republican” with blue – the colors that historically indicate “left” and “right” in democratic nations.

Even if Obama still won the election, they’d have made Generic Republican’s path to victory in 2016 a little easier. But they’d also have effectively created a monster. Obama won the popular vote in Pennsylvania (52% to 46%), but under the Koch system, Romney would have essentially won the state (14 to 6!). And this would happen again and again, in every presidential cycle: a handful of little Bush v. Gores every four years.

But that leads to point two: why is a state better than a district? If anything, a congressional district is a truer expression of the people’s will vis-à-vis the federal government. What’s so special about states? Historically, the winner of the national popular vote almost always wins the electoral college, even if you retroactively apply the DBD rules. As many political historians noted after the uproar over the 2000 election, the system works 98% of the time.

Of course, a popular vote would work 100% of the time, if full representation is what you’re aiming for. But let’s assume we’re stuck with the electoral college. Dividing by districts is, arguably, a better way to distribute electoral votes. People whose voices have historically been ignored – large political minorities in California, Texas, and Illinois – would suddenly appear on the magic election night map.

Romney, getting a jump on those STEM fields

And to be fair, Mother Jones’s alarm at conservative meddling is probably separate from its writers’ feelings about a district-based electoral college. What troubles Mother Jones is that powerful conservatives used (and, more horrifically, funded) political machinations to unseat Barack Obama. And if those powerful conservatives had been victorious – if those 17 votes had somehow lodged Romney into the White House – half the country would be screaming “foul!”

But even if efforts to divide up Pennsylvania and Wisconsin were funded by shadowy Republican billionaires, it’s still a (potentially) good idea. Do the Koch Brothers care about the integrity of American elections? Maybe not. But, motives aside, they’re advocating legitimate democratic reforms with cash. This, and almost everything else unseemly about the 2012 election, is far less appalling than the tactics employed by Bush allies in South Carolina and Florida in 2000. Apart from their persistent desire to hide most of their political contributions (I guess if money is speech, they’re the equivalent of Anonymous), the Koch Brothers have nothing to be ashamed of – they were funding a noble cause! And in the end, Obama would still be president. Plus we’d have a 17-electoral vote monster, one that would inevitably turn on its creators (the Brothers Koch, Republicans, conservatives, whoever). Because if history is any indicator, U.S. political parties are especially susceptible to the law of unintended consequences. Cases in point:

  • Republicans in the 1940s supported the presidential term limit. They foresaw a long cascade of New Deal presidents queueing up into the 21st century. They couldn’t envision Republican majorities springing at record pace from the wombs of all those veterans’ wives, all of them voting to cut income taxes by divisions of three.
  • The post-1968 primary reforms, overseen by George McGovern, helped Richard Nixon manipulate the Democratic primary process, effectively choose his candidate (McGovern), and then eviscerate him. In the long run, these reforms have arguably resulted in longer, uglier, more costly primary races. (This almost certainly cost ardent McGovern supporter Hillary Clinton the White House in 2008.)
  • Democrats wailed about defending the integrity of the electoral college back in October 2000, when polls indicated that George W. Bush might win the popular vote but not capture 270 electoral votes.
  • Citizens United, the apparent fruition of years of conservative efforts to conflate speech and money, helped Obama win the White House a second time, and will invariably help elect future leftist lawmakers who devise genuine campaign finance reform.

Radical changes implemented for political gain invariably backfire, and sooner rather than later. Conservatives who yearn for those Republican districts in California will feel the sting of those large, mostly Hispanic Democratic districts in Texas. They’ll certainly discover how many blue districts you can squeeze into all those dense, red-state cities (“they look so small on the map!”). And Democrats will discover how many large, rural districts stretch out across New York and Illinois. Why would anyone spend money for that? Two hundred twenty-three years of political meddling should have taught them this much: you don’t hijack the American political system. It hijacks you.

What last meals can teach us about the death penalty

By Amanda Grigg

After reading hundreds of articles about the trials, appeals, and executions of criminals for my research assistantship, I’ve become depressingly familiar with the tradition of reporting on an individual’s last meal. In the US, most states offer individuals on death row the opportunity to choose their last meal. The details of these requests appear in almost every article covering an execution, sometimes incorporated into the article and surprisingly often as an afterthought: “He was pronounced dead at 12:17 am, following 15 years of appeals and an unwavering assertion of his innocence. In his final words he expressed his love and gratitude to his family. Oh, and his last meal was pecan pie.” I knew this practice existed – I’ve seen it in news coverage before – but reading mentions of last meals back to back to back was different. It made me realize just how weird, contradictory, and depressing the practice is.

Photo from photo essay of prisoners’ last meals by Celia Shapiro

Brent Cunningham has a great essay on last meals in which (among many other things) he traces the tradition back to ancient Greece and Rome, specifically to Roman gladiators who were fed lavish meals before their day in the Colosseum. The public obsession with last meals is much more recent, and probably stems from the shift away from public executions in the US – which has left the public with less opportunity to view executions but no less interest in them. And the media are well aware of this interest. CBS News coverage of last meals describes them pretty accurately as “an enduring, if morbid, source of fascination.” The Huffington Post, covering a website dedicated to last meals, describes them as “fascinating yet creepy.”

Blog and crime-TV website coverage of last meals trends toward morbid curiosity and frivolity. TruTV’s slideshow features mugshots and, below them, urges viewers to “also check out: hot celebs pretending to eat.” Headline News’ gallery is titled “Gatorade to Lobsters: Serial Killer’s Last Meals” and, more disturbingly, features a smug Nancy Grace staring out from the page banner.

James Reynolds for Amnesty International. Text: “This was Ruben Cantu’s last meal. Executed in 1993. Proved Innocent in 2010.”

There is also work on last meals that is reverent and striking, including Celia A. Shapiro’s and Mat Collishaw’s photo essays, featured in Mother Jones and Time respectively. Recognizing the power of the idea (and images) of the last meal, Amnesty International recently commissioned artist James Reynolds to recreate the last meals of men who were later proven innocent. The meals were featured in an anti-death penalty campaign alongside the dates the individuals were executed and presumed or proven innocent.

The Last Last Meal

In 2011, Texas, the state with by far the highest number of executions, ended this tradition following the execution of a man who did not eat any of the enormous meal he had requested (it included over ten items, one of which was a pound of barbecue). Notably, the inmate in question was Lawrence Brewer, a white supremacist sentenced to death for the gruesome, racially motivated murder of James Byrd Jr. – a murder that motivated the passage of a Texas hate crime law and the Federal Hate Crimes Prevention Act. Not surprisingly, Brewer’s final act outraged many, including State Senator John Whitmire, who called on the executive director of the Texas prison agency to end the practice of last meals. Within hours, the prison agency’s executive director had terminated the policy, effective immediately. The New York Times spoke to Whitmire about his opposition, which he said had little to do with cost and state budgets:

“He never gave his victim an opportunity for a last meal…Why in the world are you going to treat him like a celebrity two hours before you execute him? It’s wrong to treat a vicious murderer in this fashion. Let him eat the same meal on the chow line as the others.”

Whitmire was right not to worry about cost, since last meals are rarely as extravagant as they seem. In fact, the last meals published are generally what was requested, not what prisoners actually got. In most states there are limitations on what can be provided. In Florida, last meals can cost no more than $40 and all ingredients must be local. California provides last meals costing up to $50, and Oklahoma (the state with the third-most executions) budgets just $15 for last meal provisions. Following the change in Texas policy, Timothy Williams of the Times interviewed Brian D. Price, a former Texas death row chef whose description of his efforts to fulfill last meal wishes is worth quoting in full:

“The Texas Department of Corrections has a policy that no matter what the request, it has to be prepared from items that’s in the prison kitchen commissary. And, like if they requested lobster, they’d get a piece of frozen pollock. Just like they would normally get on a Friday, but what I’d do is wash the breading off, cut it diagonally and dip it in a batter so that it looked something like at Long John Silver’s — something from the free world, something they thought they were getting, but it wasn’t. They quit serving steaks in 1994, so whenever anyone would request a steak, I would do a hamburger steak with brown gravy and grilled onions, you know, stuff like that. The press would get it as they requested it, but I would get their handwritten last meal request about three days ahead of time and I’d take it to my captain and say, “Well, what do you want me to do?” And she’d lay it out for me. I tried to do the best I could with what I had. Amazingly, we did pretty well with what we did have. They are served two hours before they are executed and it is no longer a burger and fries or a bacon, lettuce and tomato sandwich or whatever they requested. All it is, two hours later, is stomach content on an autopsy report.”

As Price’s experience suggests, the tradition of the last meal is often misrepresented and inherently counterintuitive. The “choice” of steak or lobster in reality amounts to a choice of reimagined prison staples. And two hours later, the privilege of a personalized and (we imagine) comforting last meal is “stomach content on an autopsy report.”

Why Last Meals?

Velma Barfield
From Mat Collishaw’s “Last Meal on Death Row” series

All of this brings us to the question of the purpose of last meals. Susan Jacobson reported on the ritual of the last meal following the Texas policy change, asking death penalty scholars and Florida prison officials about its purpose. Their explanations emphasized treating the pre-executed with dignity and demonstrating the absence of malice on the part of the state. According to Florida prison official Jessica Cary:

“Last meals are a way to provide humane treatment in a dignified death-penalty procedure.”

U of Florida law professor Bob Dekle explained,

“the last meal is part of the process to demonstrate there is no malice on the part of the people who carry out the execution.”

Asked about Texas’s decision to revoke individualized last meals, former death row chef Brian Price echoed these justifications:

“No, these people don’t deserve a last meal request, but we as a society have to show that softer side, that compassion. It’s bad enough that we have the death penalty, it’s so archaic, but then to turn around and say, “No, we’re not going to feed you,” just out of pure meanness or something. I don’t know. We have to show that we are not distorting that justice with revenge.”

As these quotes suggest, the mythology surrounding last meals lends an air of dignity to the proceedings and absolves the state and the people of any appearance of malice, both of which help to clearly set us apart from the individuals we execute. In an article in Law and Society, Daniel LaChance argues that allowing (and publicizing) last meals and last words is part of the state’s effort to demonstrate prisoners’ individuality and agency in the face of a penal system that achieves complete control over inmates (think of how rarely inmates physically resist execution). This is necessary if executions are to be retributive, because it is difficult to feel catharsis, relief, or social solidarity at the execution of someone we had already rendered completely docile and powerless. The last words and last meal re-establish a prisoner’s agency and individuality, demonstrating that they are “self-made monsters” who have chosen the path that led them to execution, and allowing executions to serve their emotional, social purpose.

Photo from Henry Hargreaves’ photo series “No Seconds”

As LaChance notes, most people facing death row are well aware that their last meal choices will be reported to the media. In participating in these rituals of agency, they are thus simultaneously demonstrating individuality (choosing foods that represent them, symbolize the familiar, or, in the case of the many lobster requests, status) and participating in the process of their own execution (performing an act – requesting their meal – that is vital to the ritual of execution in the United States). Some might actually take pleasure in their final meal, but reports suggest that many find themselves without any appetite. A rare few exercise their limited agency by treating the last meal as an opportunity for religious or political expression – Jonathan Nobles requested the Eucharist, Odell Barnes Jr. asked for “Justice, Equality, World Peace,” and Robert Madden asked that his last meal be provided to a homeless person. But of course these requests went unfulfilled.

According to Gallup polling, support for the death penalty is at its lowest since the Supreme Court instituted a four-year moratorium on executions in 1972. It’s still a majority (about 60%), but support seems to be steadily (if slowly) declining. A closer look at the practice of last meals suggests that despite overwhelming historic support for the death penalty, executions must be conducted in a very particular way, must strike a delicate balance, in order to satisfy the public. They must be retributive but not malicious, must simultaneously demonstrate and revoke the agency of the executed, must exhibit mercy but not too much mercy.

The response to recent shortages of the drug cocktail used in lethal injections also reflects the balancing act at work in implementing the death penalty. In this case, the difficulty is one of walking the line between execution and cruel and unusual punishment. Ohio recently executed a man using a new, untested cocktail, and according to witnesses the results were horrifying. Andrew Johnson of the Columbus Dispatch reported on what was one of the longest executions in Ohio’s history:

At about 10:33 a.m., McGuire started struggling and gasping loudly for air, making snorting and choking sounds that lasted for at least 10 minutes, with his chest heaving and his fist clenched. Deep, rattling sounds emanated from his mouth. For the last several moments before he was pronounced dead, he was still.

The execution got significant press coverage, and the New York Times claimed that it had renewed the debate over lethal injections. Apparently we can kill individuals with a lethal and potentially painful cocktail of drugs, as long as we don’t see them express their pain (or too much pain), as long as their death is quick and quiet. Of course there will always be those who couldn’t care less about dignity or mercy, and for them the death penalty doesn’t need to strike any balance other than that of an eye for an eye. In response to the controversy over the Ohio execution, Kent Scheidegger, legal director for the Criminal Justice Legal Foundation, told the New York Times: “O.K., I’ve made snoring noises. What’s not disputed is he got a large dose of sedative. We’ve gotten namby-pamby to the point that we give murderers sedatives before we kill them.”


The Benefits of Being Unrealistic

By Adam Elkus

As a PhD student in computational social science at George Mason University, I have biases that are understandable. I find that traditional social science methodologies downplay complexity, interaction, path dependence, randomness, network effects, emergence, nested social relations, multiple equilibria, cycles, and heterogeneity. They ignore bounded rationality and cognitive realism. The list goes on. However, I’m also going to speak up here about the benefits of being “unrealistic” — and why social science modelers ought to maintain a diverse toolkit.

The phrases “all models are wrong” and “models are maps” (not the territory) come to mind, but we can do better than that. Ultimately, what using a model as a map implies is that we are using the model as a stylized representation of reality. I’m reminded here of the controversy over Michael Chwe’s recent book on Jane Austen and game theory — Jane Austen didn’t intend to have her characters behave like John Nash or R.A. Fisher. It’s convenient for us to represent Jane Austen books as games, much in the same way it’s convenient to represent Viking societies with graph theory. It’s an imposition of our own, and we can at least hope that we don’t do too much violence to what we’re trying to represent. Oftentimes, however, we do exactly that:

In settings like biology, medicine — or even more ambitious: social sciences — there is no underlying analytic theory. Although we might call some parameters of the model by the same name as some things we measure experimentally, the theory we use to inform our measurement is not related to the theory used to generate our model. These sort of models are heuristics. This means that when a heuristic model is wrong, we don’t really know why it is wrong, how to quantify how wrong it will be (apart from trial and error), or how to fix it. Further, even when heuristic models do predict accurate outcomes, we have no reason to believe that the hidden mechanism of the model is reflective of reality. Usually the model has so many free-parameters, often in the form of researcher degrees of freedom, that it could have been an accidental fit. This is often a concern in ecological modeling where we have to worry about overdetermination.

So we can dispense with the idea that a model that makes more realistic assumptions in some areas we know about (cognition, rationality, social interaction, heterogeneity, feedback) is necessarily going to be more useful to us than an older model that makes less realistic assumptions in those same areas (such as representing model agents as aggregates in system dynamics). We are usually making other unrealistic assumptions in areas of equal importance merely by creating a model in the first place. There is nothing inherent in iterating the Prisoner’s Dilemma and adding genetic algorithms with tournament-style selection, for example, that necessarily makes it more realistic than a traditional one-shot PD.
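To make the one-shot versus iterated contrast concrete, here is a minimal sketch of an iterated Prisoner’s Dilemma (the strategy names and payoff values are the standard textbook ones, not drawn from this essay). Iteration changes the payoffs that strategies earn, but nothing about the added machinery makes the model inherently more “realistic”:

```python
# Minimal iterated Prisoner's Dilemma with the standard payoff matrix:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Run the iterated game; each strategy sees only the other's moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# One sucker round for tit-for-tat, then mutual defection:
print(play(tit_for_tat, always_defect))  # (9, 14)
```

In the one-shot game, defection strictly dominates; only under iteration does tit-for-tat’s conditional cooperation become payoff-relevant — which is exactly the kind of added structure that may or may not map onto any real social situation.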

It’s obvious that the question you want to answer and the kind of answer you want to get should dictate choice of methods. But it’s less obvious how this should shape the choice of how fine-grained a model you need. Such a distinction often leads to the charge that “___ is unrealistic!” (usually referring to some kind of classical game theory or choice-theoretic assumptions), followed by trotting out the “models are maps” and “all models are wrong” clichés. While true in a banal sense, defenders of the more “unrealistic” assumptions of older modeling techniques surely can make better arguments.

Consider artificial intelligence and formal computer science as disciplines. AI and algorithms have been designed to perform tasks traditionally characteristic of humans — but in ways humans obviously do not solve problems. As Scott Page points out, humans use heuristics to deal with computational problems that scale poorly, or to find “good enough” solutions. But understanding that a problem’s worst-case computational complexity is daunting in the first place is useful — without it we might not understand why heuristics are being employed or why they work.

This holds true beyond the worlds of AI and analysis of algorithms. We might conclude that a true Weberian monopoly of force is logically impossible — when we say “the state” has a monopoly on force, we ignore the fact that this “monopoly” could be eroded easily if enough elites with control over the means of violence decide it is no longer in their interest to contribute to it. The “state” is, after all, an aggregation of those elites. And in many parts of the world, achieving a preponderance of force (as opposed to a monopoly) is more realistic and historically accurate. But the “ideal type” is useful for modeling (verbally) what a state with true license over lawful organized violence resembles — and measuring how far the empirical world falls from ideal case still is theoretically meaningful.

Second, as a behavioral economics textbook points out, merely including more “behavioral” assumptions does not ensure that a behavioral economics model will outperform a standard economic model. The same holds for many other related disciplines. Realistic insights about individual cognition and decision-making may not scale up well to social aggregates — norms, for example, are far more situational in character for the average individual than they are in society as a whole.

Finally, any subject of interest is bound to be multi-layered. We are not likely to find a way to effectively link all of those layers without making our model either too complex to be used and properly communicated or too simplistic to be truly comprehensive.

In my own field of strategic theory, military theorists are beginning to understand that there are multiple models of strategic reasoning — because the strategist’s task itself is heterogeneous. Figuring out how to get a large mass of men, machines, and supplies across the Atlantic in time to resist a Soviet invasion is strategy. Using mathematical programming to optimize military production is strategy. Game-theoretic models of coercion and compellence are also strategy. But so are the difficult tasks of managing coalitions, sense making and problem framing in ambiguous situations, and intersubjective learning about the enemy and civilian population.

Historians who can examine a given campaign or operation in granular detail get to weave all of these together into a coherent narrative. But when we begin to talk more generally, we start to face the problem of representation. In my time as an MA student in the Georgetown Security Studies program, I took a class from Kenneth Pollack on military analysis. We studied World War II’s European Theater of Operations (ETO), a broad area that covers campaigns and operations in Europe, North Africa, the British Isles, Russia, the Atlantic, and the Strategic Bomber Offensive.

Our task in making a counterfactual analysis of a campaign or operation was to weave together a number of levels and contexts. One had to take into account the role of coalition pressures, industrial and economic supply and production, military training and organization, terrain, feedback effects from other campaigns and operations, and leadership styles to do justice to the topic. But feeding all of that into a model is a recipe for overfitting and poor out-of-sample predictive value.

Parsimony may be a virtue, but it is also a cruel mistress. All generalization — including qualitative case work — involves parsimony. What “models as maps” rhetoric misses is that the social scientist is actually carrying a suitcase of maps. Instead of one big, coarse map used to find their way around a metro, they have a dozen maps of varying size, detail, and specification that they have to pull out each time they change subway lines. Sometimes one map being more “wrong” will be the key to another being more “right.”

QOTD: From Summers to Keynes to Marx in One Step

By Kindred Winecoff

Today we’re calling that idea “secular stagnation”. Which of course sounds more impressive than plain old “abundance” and new enough to be able to distance itself from Marxist economics.

I know no one cares what I think, but I think Izabella Kaminska is one of the few intriguing writers on political economy these days. I know no one cares about political economy these days, even though they should.

Economics Undergraduates Should Switch Disciplines

By Graham Peterson

Noah Smith is right: a Ph.D. in economics is, for all careerist intents and purposes, the best move among the social sciences.  And this is why economics Ph.D. programs get two to three times as many applications as, for instance, sociology programs do.

Like it or not, smart people who are interested in policy and changing the world substitute toward economics because of its higher prestige and better career outcomes in both the academic and private sectors.  The situation is what we in sociology call a Matthew Effect, named after a verse from the Gospel of Matthew about the rich getting richer.

This would be an unfair situation to scoff at if its mechanics did not work just as consistently within groups as between them.  The rich get richer among sociologists just like they do between sociologists and economists.

The rich get richer in academia because time and attention are finite, and humans are computationally constrained.  So again, like it or not, academics constantly use unthinking heuristics to judge the quality of work on the front end: namely, institutional and peer status affiliations.

The status signaling at top programs creates a group where knowledge sharing “spills over” among peers, intensifying the advantage of clusters of top students and researchers.  The same dynamic is the reason smart and creative people migrate to large cities, and why smart and creative people from developing countries go get degrees in developed countries and never return.

But not so fast, gals and guys.  Is this situation optimal?  No.  All of the smart people end up jammed into economics programs, and lots and lots of these smart people end up taking jobs whose focus is not primary research (likely what got them excited about graduate school in the first place).

Consider some basic economic intuition: given a homogenous opportunity (primary research professorship), and relatively cheap information about programs and disciplines (it is if you look), social science Ph.D. applicants should substitute across disciplines given price (queue) competition until the prices (queues) are the same.  That’s not happening, so where is the friction in the market for Ph.D. applicants coming from?
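The no-arbitrage intuition above can be sketched as a toy queue-equalization model. Every number below is invented purely for illustration; none comes from real admissions data:

```python
# Toy model of the substitution argument (all figures hypothetical):
# applicants keep joining whichever discipline currently offers the best
# admission odds (slots / queue length), so in equilibrium the odds equalize.

slots = {"economics": 100, "sociology": 40, "political science": 60}
queues = {field: 0 for field in slots}

for _ in range(1000):  # 1,000 hypothetical applicants arrive one at a time
    # each applicant joins the field with the best current admission odds
    best = max(slots, key=lambda f: slots[f] / (queues[f] + 1))
    queues[best] += 1

odds = {f: slots[f] / queues[f] for f in slots}
print(odds)  # roughly equal odds (about 0.2) in every field
```

If applicants really substituted freely like this, admission odds (the "prices") would converge across disciplines; the fact that they have not, the paragraph argues, points to a friction elsewhere.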

Now consider some basic sociological intuition: scientists are no less human than anyone else, and they make precisely the same in-group and out-group determinations, constructing the same rituals of purity and danger, that other people do.  Research methods, which otherwise do have extraordinarily sophisticated theoretical justifications, get reduced to totems of fashion and group identification.

So, economists, even given their inarguable prestige and scientific accomplishments, police the boundaries of their discipline by actively denigrating The Other, and there’s your friction.

Here are some common arguments you will hear as an economics undergraduate if you consider switching disciplines for the Ph.D.

1. “Once you ‘get the tools’ in economics you can study whatever you want.”  This statement supposes that the ‘tools’ in economics are generic and applicable to any problem.  But no, econometrics focuses on a particular subset of statistical procedures and corrections  that are tailored to traditional macroeconomic and microeconomic policy situations.

If those are all that interest you, that’s great.  Know that you will have an exceedingly difficult time getting that top job by doing just one more measurement of the returns to human capital investment, and unless you’re particularly clever about finding data, you’re going to have a hard time searching out the next mind-blowing natural experiment or statistical instrument, much less finding money to conduct a randomized controlled trial.

In contrast, there are a variety of social situations that sociologists, anthropologists, and political scientists find interesting, where you will be free to learn about and employ any of the advanced metrics you will learn in economics, in addition to a variety of methods that receive no attention in economics and whose merits you will likely find yourself persuaded of.

Your choices are not, empirically, a binary sort into “robust causal identification” on one side and “chilling out and making up whatever story you want” on the other.  Methodological discussion is alive and well in other social sciences.

2. “There are no jobs over there.”  Now this just isn’t true.  There are fewer jobs, in absolute terms.  But you should already know to be thinking in terms of relative margins and your statistical expectation of payoff, dummy.  When applying to other social science Ph.D. programs, you will be statistically much more likely to place higher than you will in economics.  Beyond that, there are academic jobs in other social sciences, and of course you’re more likely to obtain one coming from a higher-ranked program.
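The expected-payoff point can be made concrete with a back-of-the-envelope calculation. Every probability and payoff below is invented purely to illustrate the reasoning, not drawn from placement data:

```python
# Hypothetical odds of landing a strong research placement, by discipline.
# All numbers here are made up to illustrate expected-value reasoning.
p_top_econ = 0.02      # chance of a top placement via an economics Ph.D.
p_top_soc = 0.15       # chance of a comparable placement via sociology

payoff_top = 100       # value (arbitrary units) of a strong research job
payoff_fallback = 20   # value of the fallback outcome

ev_econ = p_top_econ * payoff_top + (1 - p_top_econ) * payoff_fallback
ev_soc = p_top_soc * payoff_top + (1 - p_top_soc) * payoff_fallback

print(ev_econ, ev_soc)  # roughly 21.6 vs 32.0
```

Even with an identical payoff ceiling, a sufficiently higher placement probability dominates the expectation, which is the "relative margins" point in the paragraph above.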

The costs you will bear will likely accrue in terms of letting go of the warm glow of basking in MIT’s shadow, and potentially in some tension between you and your peers and faculty as you adjust to the sometimes shocking differences in theory building and empirical observation.

I can report from my own experience, though, that those costs are short run, and the benefits are long run.  It doesn’t take long to get over your daydream about being a famous economist — and that was something you were going to have to do while teaching a 3-3 at Boise State anyway.

Moreover, we do place people in private industry.  Companies are increasingly interested in hiring ethnographers, content (text) analysts, and all types of researchers in their efforts to persuade and manage one another in the knowledge economy.  Government has traditionally hired demographers and other quantitative analysts from the other social sciences to work at its various evaluation and research arms.

But you wouldn’t know that if your concept of “private industry after Ph.D.” was limited to “the Fed, government, or financial consultancy.”

3.  “Their theories are ad hoc and inconsistent; it’s a mess.”  Well, no, they’re not.  Theorists in other social sciences are just as concerned with consistency and empirical verification as anywhere else.  If your idea is that there is economics, and then there is literary criticism and postmodern bullshit artistry, and nothing in between, you’re wrong (and you’re abandoning one of the theoretical precepts of your own training — namely, that people prefer convex combinations of goods, and that agents maximize over relatively continuous bundles of them).

There are reasonable compromises between economic and other social scientific theories, and many leading researchers within economics and the other social sciences are actively working on them.  The animus you’re likely familiar with is a cultural holdover from the political, and thence academic, fallout of the 1960s, which culminated in the conservatism of the 1980s (and the academic backlash to it), and ended up pitting Economics Against The World.

Most people on both sides of the aisle agree that the energy poured into such screaming matches, and the eventual agreement to disagree and just stop talking, was a mistake.  Don’t make the mistake of clinging to an academic-cultural anachronism.

4.  “You won’t be able to teach or discuss economics.”  This again is not true.  You would be amazed at how interested people in other social sciences are in economics, if only because it’s “that mystery taboo over there.”  Many, and I do mean many, social scientific theories take economic reasoning into account or start with it as a baseline, at least to the degree economic work itself often does.

Should you stay in economics, you will find yourself quickly disabused of the idea that everything in economics starts with a rigorous foundation à la Mas-Colell, Whinston, and Green.

Moreover, should you switch, you will have the opportunity to critically engage economic models and thinking while teaching a variety of undergraduate courses, where competing economic explanations arise alongside more traditional political, social, and anthropological ones.

There is a lot of low-hanging fruit in other disciplines.  I don’t say this to denigrate my colleagues in sociology or anyone else — I believe that they have missed XYZ causal phenomenon and gotten siloed into their own theoretical and empirical hobby-horses for the same reason economists have — because of the above-described sociological inevitabilities.

The thing is, though, that we can only buck against the unproductive social forces I outlined above if students themselves are willing to consider critically the institution they confront and substitute toward interesting questions and methods instead of following the herd.  If you want to be a high-impact researcher, you have to believe that you’re strong enough to escape the cloistered shelter of economics without becoming totally lost and led astray.

If you’re a top candidate who is going to land one of several top-20 placements in economics, this article doesn’t apply to you.  But there are thousands upon thousands of you who will not, but who still want to make a scientific difference.  And there are better ways to do that than getting sorted into a low-ranked or unranked economics department and teaching supply-and-demand diagrams to people who don’t remember high school algebra.

*Edit: I originally claimed that economics undergraduate degrees are the most populated majors on campuses, which, it has been masterfully pointed out to me, is not even close to true.

**Update: A commenter pointed out that median salaries are lower in other social sciences than in economics.  That is true, but the comparison is not meaningful — one must compare the right tail of other social science salaries to the median in economics.  I, for instance, likely would have ended up at a top-40 economics program at very best, and likely much lower — and I received offers from two top-5 programs in sociology.  Right-tail salaries in other social sciences are likely competitive with the median in economics.

Flying Is A Libertarian Nightmare – But Not For Long

By Graham Peterson

Flying is a libertarian nightmare.  Being crammed into a commercial airplane should make any remotely aware libertarian conscious of the unholy impacts of private monopolies (there are all of two commercial jet manufacturers globally), externalities in social interaction (does the middle seat get to use both arm-rests or just one?), the necessity of government infrastructure (airports), and the inevitability of government interference (because security theater).

But all of those woes will melt away because of the progress of technology — which is to say — because of economic growth.  The only reason any of these problems arise is because of the current state of aeronautic technology.

Air travel requires an amazing amount of jet fuel.  It is so expensive for airlines that they negotiate and buy enormous blocks of fuel on futures contracts years in advance.  That means that airlines need to get a lot of people on one vehicle in order to cover the fixed cost of getting from New York to Dallas.  And that requires a monumental vehicle.

Since the vehicle needs to be so monumental, the people who make them, too, incur enormous fixed costs in order to build and coordinate monumental manufacturing concerns.  And in order to coordinate the takeoffs and landings, one needs an incredible infrastructure: airports.

Getting the land for an airport together is a mess.  For some reason, it seems, when one needs an amazing amount of land  or other resource that’s a free gift from nature (airspace, say, or bandwidth, or the land freshly cleared by genocide on the American frontier), our intuition has been, historically, that a small, armed cartel (the State) should claim arbitrary ownership and allocate it — in the interest of fairness.

So.  Big jets require lots of fuel, because the state of the technology and the physics is such that planes are only so efficient at converting oil into forward thrust.  The situation compels us to bring together bazillions of customers and workers to coordinate the massive mess that relatively inefficient planes create.  Is there any hope of cleaning this mess up?  Sure.

Our current mess with planes is precisely the same mess we had in converting coal-fired steam into forward thrust — the railroad.  The historical economic pattern is general.

We start with an inefficient but fantastic technology, which creates natural monopolies and economies of scale, and we end up with the network effects of a few central nodes (firms) and a small number of edges between them.  Whenever we must channel bazillions of people through such a network, someone will inevitably capture the power the network conveys.

Never fear — economic growth is here!

As quickly as railroads came — and created a great fuss over the (technologically determined) natural monopoly that arose in them, and thence the demonstrably artificial monopoly where producers colluded on prices — their profits and monopoly got chewed up by passenger automobiles and commercial trucks.  Such will be the fate of the Boeing 747, of political strong arming to get airports built, the tacit arm-rest fight on your flight, and the ghastly TSA.

Cars are already flying.  There is a technological bottleneck in terms of energy source, but that situation is improving rapidly.  We have GPS technology that will be able to map and coordinate zillions of flying cars very soon.  It is really not a question of if, but when, the skies will look like a scene from Back to the Future.

The lesson here is not “mmglaven, flying cars wow cool.”  The lesson is political, economic, and social.

Technologies that create limited-path networks with few hubs and spokes create ideal access points for the state to violate your liberties.  To take a currently-in-operation example: there are few places where you have fewer constitutional liberties than in your own car, because you drive on “everyone’s” (the state’s) roads.  The opportunity to harass, seize, search, and extort you is obvious, and well exercised.  And that harassment increases as a function of the concentration of the network — welcome to the Transportation Security Administration — where you have no constitutional rights at all.

But the government need not be involved in our roadways or our airways.  The only reason there is a “public good” in roads is that it has been prohibitively expensive to have tolls everywhere.  Before we even see flying cars, we could see privatized roads emerge, with electronic tolls buried underneath them and EZ-Pass sensors in cars drawing on our checking accounts as we drive.  In the sky, a similar system will charge us for automated flight plans on a GPS grid, just like we currently pay to use WiFi hotspots.  Such a system destroys the network centrality of airports and the big planes that require everyone to link into them.

What do these technological developments get us?  They get us more distributed networks (e.g. cars got us more distributed networks than railroads), which gets us more competition, fewer opportunities for exploitation and powerful monopolies, more convenient travel (computers already fly our planes), no arm-rest fights, no time wasted waiting for flights, and cheaper goods that don’t have to route into congested transportation networks to get to us.

As people become less limited by geographic constraints, they’re more able to select into different clusters (or cliques) of belief and tastes.  That means a greater diversity of cultural forms and competition among them, limiting the possibilities of exploitation that for instance the Frankfurt School was worried about — the Mass Culture that brainwashes you and reduces your menu of tastes and opinions to a lowest common denominator.*

As technologies get cheaper and more accessible, more people get to enjoy them, and in turn one another.  And that is a very good thing.

A number of readers will protest: “but what about inequality?”  I don’t know what to say to that complaint, other than that the debate over the ethics of such a system will look rather strange when the latest moral outrage concerns the bottom decile of earners who lack the latest version of “basic needs,” a flying car.

*But that mass culture, yet again, was originally a product of the incredible expense, natural monopoly, and network centrality of early 20th century communication and information network technologies.  Now we have YouTube and independent films.

What Does the President Say?

By Seth Studer

I originally intended to write a post on the 50th anniversary of 1964, one of the most interesting years in American politics. But it morphed into a reflection on presidential speech. Reflections on ’64 will come later. For now, heads up: PBS is airing a documentary, 1964, tomorrow night (Tuesday January 14) at 9 pm EST. Check it out. 

“These are the most hopeful times since Christ was born in Bethlehem.”

Fifty years ago, Lyndon Baines Johnson’s public personality was an enthusiastic gusher. He exuded more outlandish, unbridled optimism than any American president since Theodore Roosevelt, whose personal mania cut a rather fine Stick Swinger in Chief. It’s ironic that TR’s best known quote is “speak softly and….,” when he was succeeded by a veritable Sunday buffet of soft speakers, culminating with Calvin Coolidge, who on average spoke in one day the number of words TR could squeeze into a hypomanic minute, and at significantly lower decibels. All presidents have artificially constructed personas, but TR and LBJ were as close as we got, I suspect, to a perfect coincidence of political maneuvering, public persona, and a genuine, often unfiltered personality. These guys were nuts. These guys were talkers. These guys drove their handlers insane.

One needn’t naively romanticize the past to argue that, in their capacities as (symbolic) leaders of the cultural and public spheres, presidents today speak poorly and with great reservation when compared to presidents of even a few decades ago. I remember seeing a chart in the early 2000s, probably in Time or Newsweek or Us Weekly, that listed the “reading level” (a concept for which I have nothing but contempt) of historical presidential debates. “When Lincoln and Douglas debated,” they exclaimed, “they spoke at a 12th grade reading level! [Never mind that the Lincoln-Douglas debates were not presidential.] When Bush and Gore debated, they spoke at a third grade reading level!!”

This was apparently a cause for alarm.

Of course, when Paul Wolfowitz, Condoleezza Rice, and Colin Powell spoke to each other and to the president, I’m sure the “reading level” was raised a notch. And I’m sure the armchair sociologists who study speeches like tea leaves, who believe that presidential oratory actually makes things happen (before Kennedy gave that speech, nobody had thought of sending Americans to the moon!), who cringed whenever George W. Bush mispronounced a word that whole regions of the nation mispronounce, and who wish that President Obama’s State of the Union addresses possessed the cadence and sweep of his 2008 stump speeches, who want vision and eloquence and old-fashioned oratory from their Chief Executives…I guarantee you these armchair sociologists would die of boredom if they heard the genuinely powerful oratory of those late 19th-century Roast Beef presidents. Men like James Garfield and William McKinley were renowned for the force and eloquence of their speeches. No 20th-century president has anything on the oratorical prowess of Benjamin Harrison. But imagine these genuinely gifted speakers addressing the nation on CNN. By hour three, our armchair sociologist would have long since switched over to Modern Family.

Eloquence I can take or leave. Obama takes (at least when he’s campaigning). Bush leaves (except for his fiery second inaugural address). Ronald Reagan regularly delivered speeches at something like a third grade reading level, but almost nobody complained about that because Reagan possessed personality, he played a character, and that’s what we enjoy most in modern presidents. George W. Bush’s character – “I’m a guy who owns a ranch” – was never wholly convincing, even though he actually was a guy who owned a ranch! He didn’t play the character well: the costume didn’t fit, you could see the strings. The pressures of the post-Patriot Act presidency seemed to constipate his speech (in Texas, he could actually deliver a damn fine speech), a condition that worsened throughout his presidency. (One of the exceptions, again, was that remarkable second inaugural address.)

We don’t want our presidents to speak well. We want them to sound good. But we’ve now had two presidents whose personalities seem suffocated by the office (was it September 11? the Clinton impeachment? there is a new caution), and the NPR class tends to blame either the stupidity of the Chief Executive, some new level of dishonesty in politics, or a general decline of intelligent, substantive American discourse (our handwriting is getting worse, too!).

I sympathize with our amateur sociologists, who are eternally pessimistic about the culture, who believe that the president is “in charge,” who believe that presidential speeches matter, and who believe that presidential speech is in a bad way. Because presidential speeches do matter…but not for the reason our man in the armchair believes. Presidential speeches don’t build spaceships or create laws. They don’t end recessions or lead armies to victory. But presidential speeches help reveal the current scope, shape, and limits of American public discourse, American rhetoric, and American language. They let us know whether or not the goalposts have moved. They don’t create the discourse or set its limits – Americans do that – but presidential speeches do something almost as good: they reveal what the President of the United States can say on TV.

That is valuable information.

Obviously political speeches have political functions. Their content is limited by the president’s political agenda as much as by cultural norms. When Obama, whose presidency has been an exercise in becoming as verbally constipated as his predecessor, refers to “non-believers” in his inauguration (he can’t say “atheist”: that is valuable information) or acknowledges his support for gay marriage, we know the motive is political. But it also reveals what the president can say on TV.

Consider Lyndon Johnson. “These are the most hopeful times since Christ was born in Bethlehem.” The fact that no president in the past two decades would make such a casual reference to the birth of Jesus in relationship to a massive government program is, if nothing else, interesting information. More interesting, however, is Johnson’s famous declaration (spoken with gravitas as the optimism of 1964 had waned), “We shall overcome.” Johnson was already beginning to realize the severity of the coming storm, and he might have considered different words, tempering his celebration of Civil Rights legislation. But instead he quoted the anthem of the Civil Rights movement. This was shocking. Granted, it made political sense: Johnson needed to consolidate liberal support (he rightly feared a challenge from the left, led by Bobby Kennedy, as much as he feared the defection of the South). He was also sending an ideological message to Democrats: we’re burning the ships. There’s no going back. This is our agenda.

But he accomplished at least one other thing, and it’s the reason we remember that speech: Johnson established that yes, the President of the United States can quote a liberal folk song. Yes, the President can say that on TV.

Get.On.Her.Level. – Beyoncé the Feminist Blogger

By Amanda Grigg

Poverty politics are in right now. By which I mean we’re embarking on a national conversation about the strategies we should use to tackle the problem of poverty and how we define poverty, and whether poverty rates have declined, and what counts as welfare and whether that matters. And by we I mean me, you, and Beyoncé. Oh, and members of both of the nation’s major political parties.

Now, here’s your feminism 101 lesson for the day: poverty is a women’s issue. Women in America earn less than men, are more likely to bear the costs of raising children than men, are more likely to be poor than men, and Black and Latina women face particularly high rates of poverty (source for those interested).

Thankfully in the midst of all of this recent Congressional poverty politicking, Maria Shriver and the Center for American Progress have released an exhaustive report that demonstrates the gendered nature of poverty in the U.S. and emphasizes the value of putting women at the forefront of efforts to address poverty. Here’s Maria Shriver from the report’s introduction (which is worth reading in full):

This nation cannot have sustained economic prosperity and well-being until women’s new, central role is recognized and women’s economic health is used as a measure — perhaps it should be the measure — to shape common-sense policies and priorities for the 21st century. In other words, leave out the women, and you don’t have a full and robust economy. Lead with the women, and you do.

In my mind this is where Maria Shriver drops the mic and yells, “Half the sky MF’ers.”

The report also features work by scholars like Carol Gilligan and Barbara Ehrenreich, and essays by celebrities including Beyoncé, Eva Longoria, Jennifer Garner, and Jada Pinkett Smith. Beyoncé’s brief essay takes on the myth of gender equality. That’s right. Beyoncé. On the myth of gender equality – if you listen carefully you can hear the sound of a million feminist fangirls (fanpeople? fanminists?) swooning. Here are the highlights:

We need to stop buying into the myth of gender equality. It isn’t a reality yet. Today, women make up half of the U.S. workforce, but the average working woman earns only 77 percent of what the average working man makes.

Equality will be achieved when men and women are granted equal pay and equal respect.

I read this as Beyoncé pretty much begging me to write about the links between income inequality and social and political inequality – which have gone a bit under-addressed so far in discussions of gendered poverty. If you insist, Bey.

In general, poor voters face a number of barriers to participation including limited time/inability to take time off of work to vote, felony disenfranchisement, and bureaucratic barriers to voting. This has meant that the poor have traditionally had lower turnout than the rest of the population. Recent research by Joe Soss and Lawrence Jacobs suggests that the links between economic inequality and political inequality are actually increasing – barriers like voter ID laws are more common, income inequality is increasing, as is class segregation, and government programs aimed at the poor are increasingly punitive (think drug testing and limits on what can be purchased with SNAP), which makes the poor less likely to feel connected to or “heard” by the government.

According to Soss’s past research, recipients of entitlement programs like Social Security Disability Insurance (SSDI) are far more likely to view the government as open, democratic, and responsive to their needs than recipients of traditional “welfare” programs like Temporary Assistance to Needy Families. They are also more likely to believe that their individual participation is effective, that collective movements could be effective and that the government listens to people like themselves. In Soss’s work, entitlement program recipients linked government unresponsiveness to the common feeling that the government is “out of touch,” while a majority of welfare recipients specifically linked it to their status as welfare clients. As one woman explained to Soss, public officials “would listen even less because I’m in this group of people that they’re trying to – that they have these stereotypes against…I’m looked at totally differently because of the fact that I‘m a recipient.”

Ange-Marie Hancock has a great book on this phenomenon called The Politics of Disgust, which demonstrates how public myths and stereotypes about “welfare queens” marginalize poor women of color, discourage political leaders and the general public from considering them legitimate authorities on their own experiences and needs, and thus discourage poor women of color from participating in the political process.

All of this is to say that the negative effects of poverty reach far beyond economic well-being and that, as Beyoncé and the Shriver Report remind us, these burdens disproportionately fall on the shoulders of women and children. So anyone interested in tackling these burdens (I’m looking at you, Republicans) had better make sure their proposals directly address the needs of women.



GotD: The Dead in WWII

By Kindred Winecoff



Click for larger version. At first glance I’d say that most of the belligerents — Italy especially — got off rather lightly in relative terms. And who knew that Latvia and Lithuania lost more than 10% of their population, or that civilian deaths in China were substantially greater than in the USSR? Or that, in per capita terms, Poland got the worst of the war?
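The per-capita comparison the chart makes is just total deaths divided by prewar population. A quick sketch with rough, rounded figures from memory (illustrative approximations, not the chart's actual data):

```python
# Rough, rounded figures in millions; total (military + civilian) deaths.
# These are approximate illustrations, not the chart's underlying data.
deaths = {"Poland": 5.6, "USSR": 26.6, "China": 17.0, "Italy": 0.5}
population = {"Poland": 34.8, "USSR": 188.0, "China": 520.0, "Italy": 44.0}

per_capita = {c: deaths[c] / population[c] for c in deaths}
for country, share in sorted(per_capita.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {share:.1%}")
```

Even with crude numbers, the ordering the post notes falls out immediately: Poland worst in per-capita terms, Italy comparatively light, and China's enormous absolute toll diluted by its enormous population.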

This was à propos of nothing… just thought it was interesting to see all these data plotted together. Not the prettiest image (in more than one sense), but useful.

Via the generally-excellent @HistoricalPics


No Work Makes Jack A Malcontented Boy

By Kindred Winecoff

In “Economic Possibilities for Our Grandchildren” John Maynard Keynes wrote that by 2030 or so humans could spend most of their time pursuing leisure:

For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter — to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!

In many respects this echoed Marx nearly eighty-five years earlier, in The German Ideology:

For as soon as the distribution of labour comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. He is a hunter, a fisherman, a herdsman, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.

For contemporary treatments of similar ideas see John Quiggin and Ronald Dworkin. (Both of these are well worth reading in full.)

You may accept these goals or dismiss them. I would just like to note that we’ve basically achieved them, at the societal level. The Bureau of Labor Statistics reports that the average American spends 3.19 hours per day working. Obviously this mostly means that the distribution of working hours is highly unequal, as is the remuneration from work. And the U.S. is hardly the world in this respect.

Still, if you squint hard enough from a high enough perch, we might be working about as much as we should be from a Utopian perspective. Even if you tack on the 1.74 hours per day we spend on “household activities” — from food preparation to lawn care — we’re basically in the realm that Marx envisioned. We spend 2.83 hours per day watching television. Marx really was a 19th century thinker whose outlook does not map easily onto 21st century realities but again: it’s worth knowing where we stand.
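The arithmetic behind that squint, using only the BLS averages cited above (hours per day, averaged over all persons and all days):

```python
# Scaling the post's BLS daily averages up to a week, against Keynes's
# hypothetical fifteen-hour work week. Figures are the ones cited above.
work = 3.19        # average hours of work per day
household = 1.74   # "household activities" (food prep, lawn care, etc.)
television = 2.83  # TV watching, for comparison

weekly_work = work * 7
weekly_work_plus_house = (work + household) * 7

print(weekly_work)             # about 22.3 hours of market work per week
print(weekly_work_plus_house)  # about 34.5 hours once household labor counts
```

By this crude accounting the average American is already in the neighborhood of Keynes's fifteen-hour week in market work alone, though, as noted, the distribution of those hours across workers is anything but equal.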

Our biggest crisis remains a jobs crisis, locally and globally. People seem to want to work even if their most basic needs are met. They want to work even if it means they would have to forego hunting in the morning or fishing in the afternoon or blogging in the evening. They seem to want to acquire and consume and improve their lives ever more. Keynes viewed this as avarice — a bit strange for him to say, given his relatively luxurious lifestyle — but maybe it isn’t. And if it isn’t then some basic planks of Utopian political theory might need re-thinking.