Understanding the Preferences of Finance

By Kindred Winecoff

Paul Krugman [1, 2] and Steve Randy Waldman are having an interesting exchange on why the wealthy support tighter monetary policy despite the fact that expansionary economic policy is good for them. This is often expressed as an aversion to lower central bank interest rates, quantitative easing programs, or other activist monetary actions. Krugman sums up the puzzle nicely:

I get why creditors should hate inflation, but aggressive monetary responses to the Lesser Depression have been good for asset prices, and hence for the wealthy. Why, then, the vociferous protests?

Krugman believes that this is false consciousness: “rentiers” oppose policies that benefit them because they adhere to a model of the world — in which loose monetary policy will lead to runaway inflation that will erode the value of their capital — that does not apply in our current circumstances. (Krugman does not mention that one reason rentiers might believe this is that Keynesians like Krugman have for some time been advocating higher inflation partly for this very reason.) Waldman portrays this as simple risk-aversion: expansionary monetary policy will change something, and because recent circumstances have been favorable to rentiers, that something is likely to negatively impact their station.

I prefer Kaleckian accounts that emphasize the general relationship between capital and labor. In Kalecki’s world, full employment gives bargaining power to workers because they have easy exit options. Conversely, underemployment gives bargaining power to capital. I believe that both Krugman and Waldman are sympathetic to this framework as well.

But I want to highlight another possibility that situates the U.S. macroeconomy within the context of the world economy. The simple Mundell-Fleming macroeconomic model, when combined with a Ricardo-Viner sectoral approach, tells us that when international capital mobility is high (as it is today) financial capital benefits from an exchange rate that is high and stable, while fixed capital and labor benefit from monetary policy flexibility and (often) a lower exchange rate. This relationship is discussed in detail in Jeffry Frieden’s 1991 International Organization article “Invested interests: the politics of national economic policies in a world of global finance”, from which the table below is taken:

[Table from Frieden (1991): exchange rate and monetary policy preferences by sector]

The section of the article that begins on pg. 442 is especially relevant. There are several things to note. First, the preferences of financial capital diverge from those of fixed capital, which is divided in turn by whether it is engaged in export-oriented, import-competing, or nontradeable production. Second, the preferences of labor within these sectors will tend to side with capital within the same sector, and oppose capital (and labor) in other sectors. Third, the interests of financial capital will diverge from everyone else’s.

Why is this? Frieden notes that the interests of capital depend on how strongly tied that capital is to its specific current use. Financial capital is much more liquid and adaptable than an industrial plant: it can be deployed globally, while fixed capital must remain local. For this reason, exchange rate movements create an additional source of risk: a depreciation will negatively impact the value of local assets vis-à-vis foreign assets, while an appreciation will negatively impact the value of foreign assets vis-à-vis local assets. The point is that any exchange rate movement from the status quo will benefit some status quo investments and harm others, which is why the interests of fixed capital are divided. But for financial capital, exchange rate movements are always bad for the status quo portfolio, at least inasmuch as an alternative portfolio that anticipated the future exchange rate movement could have been constructed.
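To make that last point concrete, here is a toy numerical sketch in Python — my own illustration with made-up numbers, nothing like it appears in Frieden’s article. A “status quo” portfolio split across currencies is compared with a portfolio built with perfect foresight of the exchange-rate move; any move away from the status quo rate leaves the unanticipating portfolio behind.

# Toy illustration (not from Frieden): a globally mobile investor holds
# local- and foreign-currency assets. Any unanticipated exchange-rate move
# leaves the status quo portfolio worse off than one built with foresight.

def value_in_local(local_assets, foreign_assets, fx):
    # fx = local-currency price of one unit of foreign currency
    return local_assets + foreign_assets * fx

status_quo = {"local": 50.0, "foreign": 50.0}   # balanced across currencies

for fx_tomorrow in (0.80, 1.00, 1.25):          # appreciation, no change, depreciation
    sq = value_in_local(status_quo["local"], status_quo["foreign"], fx_tomorrow)
    # With perfect foresight the investor goes all-in on the favored currency:
    # all-local before an appreciation, all-foreign before a depreciation.
    foresight = max(value_in_local(100.0, 0.0, fx_tomorrow),
                    value_in_local(0.0, 100.0, fx_tomorrow))
    print(f"fx={fx_tomorrow:.2f}  status quo={sq:6.1f}  with foresight={foresight:6.1f}")

At fx = 1.00 the two portfolios tie; at 0.80 or 1.25 the status quo portfolio trails the anticipating one, which is all the argument requires.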

Why should finance support a higher exchange rate level in addition to low volatility when capital is mobile globally? Because, all else equal, a higher value of the local currency will increase purchasing power globally. This is particularly true if one has easy access to that currency via one’s central bank. It is probably true that U.S. banks have had greater access to dollar liquidity over the past five years than at any point in economic history; given that, they would prefer those dollars to be more valuable in exchange rather than less.

Frieden notes in his article that the distributional implications of the battle over exchange rate stability and interest rate levels would be especially severe among the European countries that were then debating joining a common currency, with finance preferring a high and stable exchange rate and low monetary policy flexibility. I would suggest that this expectation has been borne out exceptionally well, as the ECB has engaged in quite restrictive monetary actions despite suffering from a regional economic collapse that has few historical parallels. The story is a bit different for the U.S. because of its n-1 privileges, but it is unclear whether anyone in the U.S. — financial firms or even the Federal Reserve — really understands this. Even still, the basic story works: high and stable exchange rates are better for finance capital than low and volatile exchange rates.
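Why does expansionary policy threaten the exchange rate at all when capital is mobile? The uncovered interest parity condition at the heart of the Mundell-Fleming model makes the link explicit (this is a textbook statement, my gloss rather than anything taken from Frieden):

\[ i = i^{*} + \frac{E^{e} - E}{E} \]

Here \(i\) is the home interest rate, \(i^{*}\) the foreign rate, \(E\) the spot exchange rate (the home-currency price of foreign currency), and \(E^{e}\) its expected future value. When capital moves freely, a central bank that pushes \(i\) below \(i^{*}\) forces the spot rate to depreciate until an expected appreciation restores parity. Loose money, in other words, works precisely through the variable that finance most wants left alone.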

So from the perspective of financial capital the great risk of expansionary monetary policy is that it will impact exchange rates rather than interest rates, growth, employment, or even asset prices. Thus the Krugman-Waldman puzzle is not a puzzle at all. Financial capital wants restrictive monetary policy because it benefits them more than the alternatives.

Against Neil deGrasse Tyson: a Longer Polemic

By Seth Studer

In her recent Atlantic review of two new books on atheism, Emma Green brilliantly demarcates what is missing from the now decade-long insurgency of anti-ideological atheism. I use the term “anti-ideological atheism” instead of “neo-atheism” or “new atheism” or the obnoxious, self-applied moniker “noes” because opposition to ideology – to ideational constructions – is one of the major recurring threads among these varied atheist identities (a frightening mixture of elitism and populism is another). Green illustrates this point when she notes the incongruity between Peter Watson’s new history of post-Enlightenment atheism, Age of Atheists, and the kind of atheism most vocally espoused in the 21st century. The central figure in Watson’s study, Friedrich Nietzsche, is almost never cited by Richard Dawkins or Samuel Harris or Neil deGrasse Tyson. Nor, for that matter, are Nietzsche’s atheistic precursors or his atheistic descendants…all diverse in thought, all of whom would have been essential reading for any atheist prior to, well, now.

The most famous atheist, the one whose most famous quote – GOD IS DEAD – you scrawled with a Sharpie on the inside door of your junior high locker, is almost persona non grata among our most prominent living atheists. His near-contemporary, Charles Darwin (hardly anyone’s idea of a model atheist), is the belle of the bellicose non-believer’s ball.

Green also notes that the other famous 19th century atheist – Karl Marx, whose account of religious belief vis-à-vis human consciousness is still convincing, at least more than Nietzsche’s – is likewise uncited by our popular atheists. The reason may be simple: invocations of Marx don’t score popularity points anymore, and the business of anti-ideological atheism is nothing if not a business.

But there is, I believe, a larger reason for the absence of Nietzsche, Marx, and almost all other important atheists from today’s anti-ideological atheism. As fellow Jilter Graham Peterson recently said to me, these popular atheists need a dose of humanities: liberal inquiry and a sense that truth is hard, not dispensable in easy little bits like Pez candies. I would expand on that: they need a more dynamic discursivity, they need more contentiousness, they need more classical, humanist-style debate. They need the kind of thinking that frequently accompanies or produces ideology.

But of course, most of them don’t want that. They resist Nietzsche’s ideological critiques. They resist Marx who, despite his inherent materialism, is more systematically ideological than, say, Darwin. Sigmund Freud (who dedicated an entire tract to atheism and who is central to its 20th century development) is never mentioned, along with a host of other names.

And they do not invite new critiques – except, apparently, from Young Earth Creationists.

The title of Green’s review is pitch perfect: “The Intellectual Snobbery of Conspicuous Atheism: Beyond the argument that faith in God is irrational—and therefore illegitimate.” Contrary to what Richard Dawkins and others might claim, atheists are not a persecuted minority in the West (any group consisting mostly of white men is always eager to squeeze and contort their way into “persecuted minority” status, even as persecuted minorities struggle to push out). Anti-ideological atheism is declared conspicuously, a badge of honor and a sign of intellect. Green quotes Adam Gopnik, who introduces the nauseating term “noes,”

What the noes, whatever their numbers, really have now … is a monopoly on legitimate forms of knowledge about the natural world. They have this monopoly for the same reason that computer manufacturers have an edge over crystal-ball makers: The advantages of having an actual explanation of things and processes are self-evident.

In this respect, the “noes” have “an actual explanation of things” in greater abundance than did Nietzsche or Marx or (especially) the atheists of antiquity. In this respect, the atheists of yore and religious believers have more in common with each other than with the “noes” of today.

In my last post, I shared my thoughts about the meteoric rise of Neil deGrasse Tyson (do meteors rise? I’m sure deGrasse Tyson would have something to say about that bit of rhetorical infactitude). It may seem unfair to pick on deGrasse Tyson when, in reality, I’m bemoaning a phenomenon that began back when George W. Bush used vaguely messianic-Methodist language to frame the invasion of Iraq, an event that, whatever you think of its initial rationalizations, was poorly executed, quickly turned to shit, and set the “War on Terror” back at least a decade. In/around 2004, Richard Dawkins (who is still the author of the best popular overview of natural history ever written) realized that conditions existed for a profitable career shift.

Widespread discontent with politico-religious language in the United States – where right-wing militarists decried the brand of fundamentalist Islam that obliterated lower Manhattan and anti-war leftists decried the (pacifist-by-comparison) brand of fundamentalist Christianity that influenced U.S. policy – coincided with fear of religious extremism in Europe, where the vexed term “Islamophobia” retained some usefulness: legitimate anxieties about theocratic terrorism (e.g., violent anti-Western responses to the deliberately provocative Mohammad cartoons and then the public slaughter of Theo van Gogh) mingled with old-fashioned European xenophobia, which was never a perfect analogue to American xenophobia. And between the U.S. and Europe lies England, where political and public responses to Islamic terrorism less often involved blustery American gun-slinging or shrill continental nativism but rather stern appeals to “common sense.” Since the collapse of British colonialism, intellectuals in England are less apt to use the term civilization than are their cousins across the Channel or their cousins across the Pond (where the term has been historically deployed by cultural warriors, a la Allan Bloom, in order to give anti-colonial leftists the willies).

The term civilized, on the other hand, is still relevant in English public discourse: not with regard to other societies, but to English society. The concept of civilized discourse (or civilised, if you will) doesn’t seem to carry the same ideological freight as civilization. But when Dawkins mocks post-positivist socio-humanist* analyses of, say, indigenous Amazonian cultures who explain natural phenomena (e.g., how the jaguar got its spots) with traditional tales, his arguments carry the epistemological heft of a suburban Thatcherite scanning his daughter’s contemporary philosophy textbook, throwing his hands in the air, and exclaiming “Oh come on!” In other words, Dawkins belongs to the long line of British “common sense” thinkers. Born in Kenya, raised in Africa, and a fan of Kipling, Dawkins has been criticized for possessing a colonial bent to his thought.

And there’s something to be said for common sense, even common sense colonialism; George Orwell, of all people, joined Rudyard Kipling (one of the most misunderstood writers in the English canon) to defend British colonialism in India on the reasonable (if depressing) grounds that, had the English let India be, the Russians would have colonized the subcontinent. This hardly excuses British crimes against India and its people, but even a cursory overview of Russian colonial atrocities forces one to sigh a very troubled and uncomfortable sigh of – what, relief? – that the British Raj was the guilty party.

Richard Dawkins

But common sense is not fact, much less knowledge, and Dawkins has made a career of playing fast and loose with these concepts. In Unweaving the Rainbow (1998), Dawkins defended science not against the pious but against the epistemological excesses of cultural studies. In one chapter, he wrote that an Amazonian tribesman who is convinced that airplanes are fueled by magic (Dawkins’ examples often play off colonial tropes) and the socio-humanist (usually an American cultural studies professor or graduate student in English whose dress and hygiene are dubious and who writes in incomprehensible jargon) who respects the Amazonian’s conviction are both reprehensible, especially the professor, who is an enabler: he could give the ignorant native a cursory lesson in physics, but instead paints a scholarly veneer over so much tribal mumbo-jumbo. Why not disabuse the native of his false notions and explain the real source of wonder: that beautiful physics can explain how people fly!

Despite its best efforts, Unweaving the Rainbow was Dawkins’ first foray into the “Debbie Downer” genre of popular science writing. This genre pits the explanatory power of “scientific knowledge” (more about that term in a moment) against religion, superstition, homeopathy, most of Western philosophy, and pretty much any knowledge acquired through non-quantitative methods or left unverified by quantitative ones.

The “Debbie Downer” genre can be useful, especially when turned on the practice of science itself: Dawkins and his allies have successfully debunked the dogmatism that led Stephen Jay Gould’s career astray. The atrocities of Nazi and Soviet science were exposed and explained with both rigorous science and common sense. The genre can also be used to wildly miss the point of things. I have friends who are ardent Calvinists or ex-Calvinists, who are incapable of reading Paul’s epistles without a Calvinist interpretation. They read Paul, but all they see is Calvinism. Likewise with fundamentalists and anti-ideological atheists who read Genesis but only see cosmology. Yet Paul was not a Calvinist, and Genesis is not cosmology. In some sense, the same principle applies to deGrasse Tyson and Gravity. Is this a question of knowing too much or thinking too little?

In Unweaving the Rainbow, Dawkins confronts the charge that science takes all the fun and beauty out of the world just by, y’know, ‘splainin’ it. Somewhat comically, the book’s title literalizes an instance of poetic language, a practice common among Dawkins’ bêtes noires: religious fundamentalists. John Keats’ playful exasperation that “charms fly/ at the touch of cold philosophy” and that the natural sciences (still embryonic in Keats’ time) “unweave the rainbow,” reducing it to “the dull catalogue of common things,” is a beautifully articulated representation of a well-worn human experience, one that requires appreciation more than rebuttal. But for Dawkins, the poem demands rebuttal, and not a rebuttal that distinguishes between the uses and functions of poetic language. Unweaving the Rainbow is a treatise that, dammit, science makes the world more beautiful, not the other way round.

And Dawkins is correct. After reading his marvelous Ancestor’s Tale, I felt a profound kinship with every toad I encountered on the sidewalk and every grasshopper that attached itself to my arm, six cousinly feet twisting my skin uncomfortably. Between Unweaving the Rainbow and Ancestor’s Tale, Dawkins wrote A Devil’s Chaplain, a haphazardly organized collection of Debbie Downer essays that is probably best understood as the direct ancestor of Dawkins’ most successful book, The God Delusion. The book represented a specific cultural moment, described above, when everyone was eager to read why God sucked. I don’t need to rehearse the narrative or the players (something about four horsemen, cognitive, an obnoxious and inappropriate use of the prefix “neo”). Even The God Delusion’s harshest critics praised Dawkins for capturing the zeitgeist in a bottle. But the most prominent and widely-cited negative review, by Marxist literary theorist Terry Eagleton, did not. Eagleton captured Dawkins, his personality and his project, to near perfection in the London Review of Books:

[Dawkins’ views] are not just the views of an enraged atheist. They are the opinions of a readily identifiable kind of English middle-class liberal rationalist. Reading Dawkins, who occasionally writes as though ‘Thou still unravish’d bride of quietness’ is a mighty funny way to describe a Grecian urn, one can be reasonably certain that he would not be Europe’s greatest enthusiast for Foucault, psychoanalysis, agitprop, Dadaism, anarchism or separatist feminism. All of these phenomena, one imagines, would be as distasteful to his brisk, bloodless rationality as the virgin birth. Yet one can of course be an atheist and a fervent fan of them all. His God-hating, then, is by no means simply the view of a scientist admirably cleansed of prejudice. It belongs to a specific cultural context. One would not expect to muster many votes for either anarchism or the virgin birth in North Oxford. (I should point out that I use the term North Oxford in an ideological rather than geographical sense. Dawkins may be relieved to know that I don’t actually know where he lives.)

Terry Eagleton

Eagleton’s Marxist ad hominem is amusing: he reduces Dawkins’ own self-proclaimed materialism to his class. Dawkins is a very, very identifiable type. I’m not sure whether Eagleton knew, when he quoted Keats, that Dawkins had written a book whose title misread – or at least misappropriated – the most flowery of Romantic poets.

Eagleton’s more substantial complaint – that there are many kinds of atheists, not all of whom derive their views from a fetishized notion of the natural sciences’ explanatory powers – was echoed in many other reviews. It was even the basis for a two-part episode of South Park.

Another common complaint: The God Delusion engaged with religious faith very narrowly, responding to only the most extreme fundamentalist interpretations of scripture and dogma. Dawkins hadn’t boned up on his Tillich. He’s a scientist stumbling clumsily through the humanities, unaware that his most basic criticisms of faith have been taken seriously by religious people since the Middle Ages. Again, Eagleton:

What, one wonders, are Dawkins’s views on the epistemological differences between Aquinas and Duns Scotus? Has he read Eriugena on subjectivity, Rahner on grace or Moltmann on hope? Has he even heard of them? Or does he imagine like a bumptious young barrister that you can defeat the opposition while being complacently ignorant of its toughest case? … As far as theology goes, Dawkins has an enormous amount in common with Ian Paisley and American TV evangelists. Both parties agree pretty much on what religion is; it’s just that Dawkins rejects it while Oral Roberts and his unctuous tribe grow fat on it.

More troubling than his exclusion of Eriugena and de facto collusion with Oral Roberts is his exclusion of so many other atheists. The God Delusion was published before Christopher Hitchens’ God is Not Great, a very bad book that nevertheless engaged with atheism per se, drawing from an intellectual history that extended from Lucretius to Spinoza and Thomas Paine (a list Hitchens never tired of reciting on cable news shows, grinning slyly at the thought of pot-bellied viewers on their sofas, scratching their heads: I think I’ve heard of that Payne guy, but who in the sam hill is Lew Crishus?).

If Dawkins was a scientist posing as a humanist – or, more correctly, a scientist trying to sell ideology as scientific fact – then Hitchens was a humanist posing as someone with a basic understanding of science. In reality, Hitchens knew the Bible, had spent his career admiring religious thinkers and religious poets. Near the end of the Hitchens v. Douglas Wilson documentary Collision, Hitchens recalls a conversation with Dawkins, during which Hitchens declared that, if given the power to wipe religious belief off the face of the earth, he wouldn’t do it. “Why not?!” shrieked Dawkins – Hitchens, repeating the anecdote to Wilson, does a killer imitation of Dawkins’ spine-tingling shriek. Hitchens has no answer for Dawkins. He simply can’t conceive of a world without at least one religious believer.

More on point, however, is the following passage from Eagleton’s review:

Dawkins considers that all faith is blind faith, and that Christian and Muslim children are brought up to believe unquestioningly. Not even the dim-witted clerics who knocked me about at grammar school thought that. For mainstream Christianity, reason, argument and honest doubt have always played an integral role in belief. (Where, given that he invites us at one point to question everything, is Dawkins’s own critique of science, objectivity, liberalism, atheism and the like?) Reason, to be sure, doesn’t go all the way down for believers, but it doesn’t for most sensitive, civilised non-religious types either. Even Richard Dawkins lives more by faith than by reason. We hold many beliefs that have no unimpeachably rational justification, but are nonetheless reasonable to entertain. Only positivists think that ‘rational’ means ‘scientific’. Dawkins rejects the surely reasonable case that science and religion are not in competition on the grounds that this insulates religion from rational inquiry. But this is a mistake: to claim that science and religion pose different questions to the world is not to suggest that if the bones of Jesus were discovered in Palestine, the pope should get himself down to the dole queue as fast as possible. It is rather to claim that while faith, rather like love, must involve factual knowledge, it is not reducible to it. For my claim to love you to be coherent, I must be able to explain what it is about you that justifies it; but my bank manager might agree with my dewy-eyed description of you without being in love with you himself.

Dawkins would no doubt balk at the notion that he take Eagleton’s advice and “critique” science. Science is self-critiquing, after all! Science is reasonable by its very structure. Science and reason are near synonyms in the anti-ideological atheist lexicon.

This, for me, is the most troubling aspect of Dawkins and deGrasse Tyson’s trendy, anti-ideological atheism.

Let us consider once more the subtitle of Emma Green’s Atlantic review: “Beyond the argument that faith in God is irrational—and therefore illegitimate.” Both Green and Eagleton observe what is perhaps the most troubling aspect of popular, anti-ideological atheism: it conflates terms like “reason,” “rationality,” “fact,” “science,” and “knowledge.” In fact, I believe Eagleton goes too far when he asserts that “only positivists think that ‘rational’ means ‘scientific.'” Many positivists can make the distinction. (Eagleton’s reflexive assertion to the contrary is merely a product of decades spent defending post-positivist thought to his fellow Marxists.)

The popularizers of anti-ideological atheism play very fast and loose with a specific set of words: “science,” “reason,” “(ir)rationality,” “knowledge,” “fact,” “truth,” and “information.” It is absolutely necessary to distinguish between these words. In many contexts, it is not “irrational” to object to scientifically produced knowledge, especially if you’re objecting to the implementation of that knowledge.

If I were a public intellectual with a large platform – that is, if I were Neil deGrasse Tyson – I’d go on a speaking tour. The tour’s only goal would be the definition of some basic terms, as they ought to be used by laypersons (obviously specialists will have slightly different definitions, and that’s okay). Information is data we glean from the world through our senses and technologies. Science is a method that uses information to test ideas and produce knowledge. Ideas are organized assumptions about the world. Ideas that are verifiable using scientific methods become knowledge. Reason is a system of organizing knowledge, which allows knowledge to be used for all sorts of great things: to determine a set of ethics, to decide the best shape of government, to demarcate reasonably accurate beliefs about the world, to guide us through daily decisions, etc. Rationality is reason with a French accent.

Facts are stubborn but undeniable things, some of them unveiled by the scientific method and others revealed through our senses/technologies, which help us glean information and confirm knowledge produced by the scientific method. Truth is the ontological status of reality, which makes it a very tricky thing to define and understand, and is therefore probably best passed over in silence…at least in casual conversations or book tours. True is an elastic adjective that allows us to describe the proximity of knowledge, ideas, and impressions to reality, as we understand it via science, knowledge, reason, and facts.

These definitions are not perfect, and I’m sure you and my fellow Jilters have problems with some/all of them. But I think they’re suitable for casual use. At the very least, they admit distinctions between concepts.

Anti-ideological atheists misuse these concepts for rhetorical purposes, and they encourage the public’s tendency to conflate them.

This is wrong.

When Neil deGrasse Tyson insists that “evolution is a fact,” he’s playing with rhetoric to make a political point. For too long, Creationists have conflated the scientific and popular definitions of the word “theory,” transmuting well-established and verifiable knowledge about life into speculation: Darwin’s theory of speciation was as reliable as a hopeful suitor’s theory of “why she isn’t returning my phone calls.”

But in both scientific and common English, theory is not an antonym of fact (sorry, Creationists) and a theory cannot be a fact (as deGrasse Tyson well knows). A theory is established by facts. Richard Dawkins, Samuel Harris, Daniel Dennett, Neil deGrasse Tyson, and Bill Nye have had countless opportunities to make these simple distinctions to the public; Christopher Hitchens possessed both the knowledge and rhetorical precision to explain the distinctions. But distinctions don’t pack much punch. Politically and ideologically, it’s better to affirm that “evolution is a fact,” just like gravity, and not allow the Creationists to keep slithering through their own linguistic sophistry. And just as explaining a joke drains its humor, debunking a slick sophistry invariably drains your authority. Better to bludgeon than to slice. And as anyone who has seen the ads or watched the first two episodes of his Cosmos knows, deGrasse Tyson is happy to bludgeon.


*By “socio-humanist,” I refer to scholars in the humanities (I use “humanist” as the humanities equivalent of “scientist”) and certain branches of the social sciences; I’m not referring to the broader category of post-Enlightenment “secular humanism,” within which Dawkins might count himself.

What-Iffings of Futures Past: Some Reflections on Counterfactual Fiction

By Seth Studer

Historians don’t really like to deal with counterfactuals, with what might have been. They want to talk about history. How the hell do you know…what might have been? Who knows? Well, I know certain things.

– Robert McNamara, The Fog of War 

1. Different, but the Same

In college, I frequently attempted to defuse the awkwardness of a first date by describing counterfactual U.S. elections. Attempting to impress my date, I didn’t confine myself to the obvious reversals (Nixon in 1960, Gore in 2000). I described how Dewey in 1948, Humphrey in 1968, or Bush in 1992 could have happened, detailing both the electoral math and the historical consequences.

I did not go on many second dates.

What if I had suppressed my urge to share these speculations over coffee at the Java House? What if we’d gone to Starbucks instead? What if I had waited until the fourth or fifth date before retrieving those counterfactual electoral maps from my backpack and explaining how a 1948 Dewey victory would have rendered the modern Republican party unrecognizable? Would I be married to someone else today? Would I be single and living in Angola (or East Sudan, a nation that exists in this scenario)?

Such speculation is pointless, because under no conditions would I have suppressed my enthusiasm for alternate history. Not because my enthusiasm is irrepressible, but because any alternative scenario is impossible. What was is, inalterably: this is the first principle of historical analysis. The fact of the past always takes precedence. Conditions for alterity never were. There is no plausible alternate history.

Our grammar disagrees with the best practices of historical scholarship. The subjunctive mood allows for speculation backward and then forward. In philosophy, counterfactual theory is a thing, although it rarely addresses Confederate victories or missing Hitlers[1]. Literary scholars take counterfactual fiction seriously, but questions of how these fictions work – or how the cultural, political, and even grammatical constructions of counterfactuality work – supersede questions of whether they work.

And that’s good, because counterfactual histories never work. Unless you narrow your parameters to banal observations (e.g., “if George H.W. Bush died in 1990, Dan Quayle would have become president”), you cannot arrange historical events in any pattern other than their own. Too many factors were/are in play. The past is inert. So what matters in good counterfactual fiction is not a rigorously executed chain of historical causality, but a balance of plausibility and novelty that favors the latter (e.g., “if George H.W. Bush died in 1990, Dan Quayle would have become president, the Soviet Union would have remained intact, and the United States would have been annexed by Canada”).

The best counterfactual fiction adheres to two principles: first, alter as little as possible for the greatest impact. E.g., assassinating Reagan in 1981 changes a lot with a single bullet. Second, make the consequences of your alteration plausible. E.g., kill Reagan, and you might stop neoliberal economic reforms and forestall the end of the Cold War. On its face, that’s a plausible, high-stakes counterfactual. But if American neoliberalism and Soviet collapse were inevitable even without Reagan, the scenario packs less of a punch.

Consequently, alternate history relies on great men and big events, not broad historical forces. An exhausting plurality of counterfactual novels fiddles with World War II (“what if the Nazis won?!”) or the American Civil War (“what if the South won?!”). After that, counterfactualists find easy work riffing on 1492, assassinations, colonial maps, and close elections. Altogether, these subjects surely account for 80% of counterfactual fiction[2]. Because all alternate histories are inherently implausible, a counterfactualist’s goal is not plausibility. His goal is to blunt his scenario’s inherent implausibility. The best counterfactual speculation adheres to the inflexibility of history. No matter how many players you shift in the foreground, the background – economic, social, and material forces – remains. Alternate history should be different, but the same.

Compare alternate history’s most prolific practitioner, Harry Turtledove, to Philip Roth’s counterfactual novel, The Plot Against America (2004). Turtledove’s interventions in Byzantine decline, Euro-American First Contact, the American Civil War, and WWII have massive ramifications, wholly altering the course of history. Special attention is given to great men: generals, explorers, heads of state. Sometimes aliens get involved. Roth’s novel, meanwhile, posits an improbable Charles Lindbergh presidency that realigns U.S.-Nazi relations. The narrative structure is memoir: a fictional Roth, situated in the present, remembers his Jewish childhood in Newark. He was a minor figure with only a civilian’s access to FDR, Lindbergh, and Hitler. By the novel’s end, the world has not substantially changed. History is not irrecoverably altered by Roth’s (fairly significant) alteration; it simply takes a detour around and back to its natural course.

My favorite counterfactual scenario reverses the Vietnam War, a major event in American cultural memory. Consider the following: sometime after Mao assumes power in China, the U.S. intelligence community decides that Ho Chi Minh should be “Asia’s Tito,” an independent Communist leader who helps contain China and the Soviet Union (the U.S. briefly flirted with this policy, which adds the requisite dash of plausibility). This policy hastens French withdrawal and unifies Vietnam. Fifty-eight thousand Americans and millions of Vietnamese do not die. The 1960s are less tumultuous. But Europe and Asia will still recover from WWII by 1970; financialization will occur in all developed economies; the Soviet Union will either weaken or liberalize; economic and demographic realities will force China open; competition will push the U.S. toward China. Secularization, decolonization, civil rights, perestroika, Islamic extremism, and the rise of the BRICs are all inevitable. The dates and details are different, but averting the Vietnam War – an event that affected millions – ultimately changes very little for the billions now living. A seemingly large event is, in the long run, pretty small.

2. Different, Not the Same

Last month, British novelist D.J. Taylor listed his ten favorite counterfactual novels for the Guardian. Half of Taylor’s picks tamper with WWII and adjacent events. These novels, including Roth’s Plot Against America, represent the best of a bad sub-genre. Taylor himself offers a fun twist on the WWII changaroo in his new novel, The Windsor Faction (unread by me). Playing to our recent (and weird) obsession with interwar England, Taylor keeps the rascally Edward VIII on the throne (sadly for fans of Masterpiece and that movie with Colin Firth, Wallis Simpson is apparently dead: this is how Taylor sidesteps abdication)[3]. Edward VIII foils his government’s attempts to undermine Hitler, because…well, once you stray too far from the premise of a counterfactual novel, things tend to get stupid. Unless you get very, very far from the premise, which is what the best novel on Taylor’s list – Kingsley Amis’s The Alteration – does.

Here’s The Alteration in a sentence (spoiler alert): a prepubescent English boy is recruited to be a castrato roughly 450 years after Martin Luther became Pope Germanius I, an event that blunted and contained the Protestant Reformation. The title’s “alteration” refers to both counterfactuality and castration. Most of the novel involves the castrato plot, set in 1976; Amis’s historical alterations and their consequences are background material.

The Alteration is not the sexiest or most exciting counterfactual novel you’ll read, but it may be (ahem) the ballsiest. Not content to reverse world wars, Amis essentially reverses Western civilization[4]. He wants an alternate history that is actually different.

So ask yourself: what is the most important event of the past thousand years (in the West, anyway)? The correct answer is surely the convergence of five or so mega-events, all of which occurred within a 100-year period: Euro-American First Contact, the scientific revolution, the proliferation of print culture, the rise of merchant capitalism, and European colonial hegemony. Smack dab in the middle of these mega-events is the Protestant Reformation, which is directly implicated in or bolstered by all of them. You needn’t agree with Max Weber to know that Protestantism was a prime vector for European modernity. And compared to capitalism, print culture, or First Contact, Luther’s break with Rome is a reasonably simple event to reverse.

But the Reformation is the very definition of an inevitable event. Would-be Reformers had always existed. Luther simply appeared at the right time. Amis knows this. How does he get away with his massive alteration? First, unlike most counterfactualists, he distances his narrative from his counterfactual premise. Amis is more interested in his alternate 1976, where England is Catholic and castrati still sing, than in an alternate Diet of Worms. Luther became Pope a long time ago, under conditions that are plausible only in a blur.

Second, Amis is not especially worried about plausibility. The novel is full of humor, absurdities, and inside jokes. In an odd moment, Amis’s protagonist reads a counterfactual version of Philip K. Dick’s Man in the High Castle, in which Pope Germanius I remained Martin Luther the Reformer[5]. Dick’s real-life Man in the High Castle (1962) is the gold standard (hey, what if we’d never ended the gold standard??) of WWII revision. Like Roth and Amis, Dick circumvents the problem of plausibility through distance: he does not observe counterfactual events up close or as they occur. The Alteration‘s fictional Dick is one of numerous devices Amis uses to suspend plausibility, distracting the reader with irony and levity.

Finally, Amis allows for historical inevitability. A small reformation still occurs and anti-Catholic radicals (called Schismatics) still find refuge in North America. Without a capital-R Reformation, global capitalism and modern technology are inhibited. But they appear, albeit in atavistic form. European colonialism is severely stunted, but there are still colonies. Amis retains enough history to authenticate his massive alteration. In most plausible alternate histories, little things change but big things remain the same. The South wins the war, but the slave economy is still doomed. In Amis’s novel, little things persist but the big thing – the trajectory of Western civilization sans Martin Luther – is totally altered.

With a few deft literary maneuvers, Amis imagines vast political, economic, and cultural realignments simply by altering a few 16th century events, all while allowing for history’s inflexibility. In this respect, The Alteration is arguably the most successful counterfactual novel ever written. The result, however, is an unsatisfying novel to most fans of the genre: people like me, who obsess over “what if” scenarios and purchase terrible short story anthologies edited by Harry Turtledove. We don’t want irony or levity or distance. We don’t want alternate histories that acknowledge the impossibility of alternate history. We want options. We want great men: kings and generals and presidents. In other words, we want men and women, great or small, to have a singular impact. We want our votes to count. We want to save Kennedy, elect Gore, kill Hitler, prevent the Vietnam War. We don’t want to be alone, anonymous and without agency, faceless amid the currents of history.

Amis succeeds only because he neuters these desires. In the final pages of The Alteration, the castration that Amis’s heroes have fought so hard to prevent occurs. It occurs in a bizarre and wholly unexpected context, but it occurs nonetheless. It was inevitable. There can be no different ending.

—–

[1] These rigorous counterfactual inquiries, while interesting, are too abstract (“if x then not y”) and too narrow (“if president dies, vice president succeeds him”) to describe what would’ve happened if JFK survived and why that would’ve been awesome and/or terrible.

[2] Too few counterfactual novels tinker with World War I. British neutrality is a fascinating and plausible scenario.

[3] Cultural historians will one day study the Obama-era fascination with Edwardian English aristocracy and Wallis Simpson alongside the Reagan era’s Australia fixation and the fact that everyone stopped playing Guitar Hero the moment George W. Bush stopped being president.

[4] Many counterfactualist authors have attempted similarly seismic shifts, with mixed results. Turtledove’s attempt is literally seismic, fulfilling Barry Goldwater’s dream of detaching the eastern seaboard from the rest of North America.

[5] This is a device Dick actually uses in Man in the High Castle: characters read a counterfactual novel-within-the-novel whose plot aligns with actual history. Amis playfully uses Dick as a nexus of counterfactual paradoxes.

This Is Your Brain on Books

By Seth Studer

Earlier this week, OnFiction published the results of a recent study on the biological effects of reading fiction. Researchers at Emory University used MRI scanners to track the brain activity of nineteen participants, all reading the same novel (Robert Harris’s historical thriller Pompeii). The researchers focused on “functional connectivity,” the degree to which activity in one region of the brain prompts or correlates with activity in another region. Basically, your brain’s ability to talk to itself. Participants’ brains were scanned while reading, immediately after reading, and five days after completing the novel. OnFiction described the results:

[The researchers] identified a number of brain networks that got stronger (i.e., the different regions became more closely associated) during the reading period. They also uncovered a network of brain regions that became more robust over the reading period but also appeared to persist after the participants had finished reading. Although this network got weaker over time after the reading period, the correlations between the brain regions in this network were still stronger than those observed prior to the reading period.

Conclusion? Surprise, reading makes you smarter! Or, reading helps your brain make neurological connections more briskly. Those non-adjacent neurons that light up while you’re reading Starship Troopers are potentially responsible for language and comprehension skills (kinda seems obvious, right?), but the researchers aren’t sure yet: the brain remains too dense and mysterious to definitively map. So some of those neurons might be responsible for something totally unrelated to language but related to fiction-processing. Which, for literary scholars, would be awesome to learn about.

Either way: when you read, your brain lights up.
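“Functional connectivity” sounds forbidding, but its operational core is simple: correlate the activity time series of two brain regions. A minimal sketch with fabricated data (the Emory pipeline is, of course, far more elaborate, with preprocessing and whole-brain network detection):

import numpy as np

rng = np.random.default_rng(0)

# Fabricated BOLD-style time series for two regions over 200 scan volumes.
# A shared driving signal makes the regions co-activate.
shared = rng.normal(size=200)
region_a = shared + rng.normal(scale=0.5, size=200)
region_b = shared + rng.normal(scale=0.5, size=200)

# Functional connectivity, at its simplest: the Pearson correlation
# between the two regions' time series.
connectivity = np.corrcoef(region_a, region_b)[0, 1]
print(f"functional connectivity (Pearson r): {connectivity:.2f}")

The Emory question then becomes: does that correlation strengthen during the reading period, and does the elevated value persist days after the novel is finished?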

The Emory study focuses on neurological responses to a single novel. But earlier this month, OnFiction reported another study that seemed to demonstrate a measurable difference between “literary fiction” and pulp: a difference many literary scholars spent thirty or more years dismissing. Two psychologists at the New School for Social Research gave readers randomly assigned texts – some “highbrow,” others “lowbrow,” others nonfiction – and afterward measured each reader’s ability to empathize with others (aka “Theory of Mind”). Participants who read a highbrow text were consistently more empathetic than participants who read the lowbrow text.

In other words, if you need a ruthless hitman, don’t hire the one reading Anna Karenina.
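The logic behind that finding is a plain randomized comparison, easy to sketch with fabricated numbers (the real study used validated Theory of Mind instruments and proper statistics; this Python toy only shows the shape of the inference):

import random
from statistics import mean

random.seed(1)

# Fabricated Theory of Mind scores for two randomly assigned reading groups.
literary = [random.gauss(27.0, 2.0) for _ in range(20)]   # "highbrow" readers
popular = [random.gauss(25.5, 2.0) for _ in range(20)]    # "lowbrow" readers

observed_gap = mean(literary) - mean(popular)

# Permutation test: how often does randomly relabeling readers
# produce a gap at least as large as the one observed?
pooled, extreme, trials = literary + popular, 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:20]) - mean(pooled[20:]) >= observed_gap:
        extreme += 1
print(f"observed gap {observed_gap:.2f}, permutation p = {extreme / trials:.3f}")

If that p-value is small, random assignment makes the causal reading (highbrow text, higher empathy score) hard to dodge.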

The results of this study were published in Science and discussed on NPR’s All Things Considered. You can hear the audio clip or read the transcript here (I recommend listening to the audio, to experience the full effect of the Danielle Steel/Louise Erdrich pairing).

Gregory Berns, team leader of the first study, is a neuroscientist who has applied neurological approaches to economics and sociology. Now he has his eyes on literary analysis. But lit scholars are traditionally wary of theories and methods that appear too positivist, empirical, or quantitative. (Celebrity scientists who condescend and prescribe cures for the humanities without really understanding what humanists actually do aren’t helping.) Much of this wariness comes from decades of disciplinary isolation: C.P. Snow’s “two cultures.” Some of it comes from the academic turf wars and ideological disputes of the 1980s. In the late ’90s, something like Franco Moretti’s amazing Literary Lab would’ve had to be developed slowly and with care, so as not to cause too much of a ruckus. Add a dash of quantitative reasoning in one article, use a database in another, publish a groundbreaking polemic, ensure that you already have tenure and academic fame, and now you’re ready to be semi-empirical without overwhelming backlash!

Of course, so much has changed since the early 2000s. The so-called “Digital Humanities” (a term that seems to mean everything and nothing) has made statistics ‘n’ stuff more palatable to humanists, and the pioneering work of scholars like Nicholas Dames has made science less scary. Today, you can’t go to a literature conference without finding a panel on cognitive science and another on economic theory. The “two cultures” are intermingling, beginning with the social sciences, which overlap with humanist concerns more explicitly than, say, physics does. But the studies featured on OnFiction this week should not be dismissed. They aren’t perfect, but their methodologies offer rigorous and robust approaches to literary experience.

Nazism, Bigamy, and the Problem of Paul de Man

By Seth Studer

It’s time to beat up on Paul de Man again.

And yes, he probably deserves it.

In Monday’s Chronicle of Higher Education, Tom Bartlett revealed the juicy details of Evelyn Barish’s new biography The Double Life of Paul de Man (due out in March 2014). Barish suggests that de Man emigrated from Belgium in 1947 to escape embezzlement charges. He was eventually convicted in absentia of stealing one million Belgian francs (roughly US$300k today) from his own publishing house. Barish also discovered that de Man never held an undergraduate degree, and that in his interactions with friends, family, and colleagues, he was sometimes a total dick.

This in addition to what we already knew: de Man was a deadbeat dad, a temporary bigamist, and the author of several blatantly anti-Semitic articles for a pro-Nazi newspaper during the German occupation of Belgium. The articles were disclosed in 1987, three years after de Man’s death. English professors across the nation responded with horror (or schadenfreude) because, throughout the 1960s and ’70s, Paul de Man led the vanguard that introduced deconstructionist theory into American universities[1]. He was a big deal.

Except he wasn’t.

Unlike his friend and fellow deconstructionist, Franco-Jewish philosopher Jacques Derrida, de Man’s scholarship focused narrowly on literary language. He argued that literary texts, through their own internal tensions and oppositions, effectively read themselves. (Your copy of Moby Dick is reading itself, even as it sits dusty on your shelf!) Derrida, meanwhile, wrote about everything from semiotics and political philosophy to his pet cat. Derrida’s writing was difficult, but often in a fun way – weird, cheeky, playful.

Also: if you’re a layperson, you’ve probably heard of Derrida. His obit appeared in the New York Times[2]. He was one of several influential French thinkers who emerged alongside 1960s anti-de Gaullist radicalism. You know them by their surnames: Barthes, Foucault, Lacan, Derrida. De Man was the Belgium to their France; he is virtually unknown outside literature departments. But he, more than any other figure, set the hermeneutic agenda for U.S. literature departments in the ’70s and early ’80s.

The news that de Man had authored Nazi propaganda could not have emerged at a worse time for his students (by then major scholars in their own right) or for deconstruction in general[3]. By 1987, cultural studies and politico-ethical concerns were pushing deconstruction out of the humanities. Deconstruction was too apolitical, too textocentric. This was a sideshow in the Culture Wars: as many professors adopted radical politics, ardent deconstructionists appeared reactionary and insular. Meanwhile, deconstruction’s apparent nihilism was being attacked by positivists, scientists, traditionalist lit scholars, and even social conservatives outside the academy. The de Man-Nazi revelation offered proof of what many already suspected: that deconstruction was nefariously closed-off, vapid, repressive, even quasi-totalitarian. By the ’90s, deconstruction had lost its cachet.

The problem is that Paul de Man was so good.

Derrida was unfairly dismissed as an emperor without clothes, but he also reveled in appearing to waltz through the kingdom naked. For a certain type of student (e.g., me), de Man was much more satisfying. De Man explained heady concepts without Derridean playfulness. He wrote heavy, dense, substantive prose. He reads like a serious scholar applying a theory rather than performing or practicing it. My favorite of his essays is “The Rhetoric of Temporality,” an account of how representations of time are the basis of literary language. He describes how well-known devices – allegory, symbolism, irony – interact with time. He slowly develops an argument that slippage occurs between allegory and symbolism in Romantic poetry, despite the Romantics’ best effort to keep them separate. On this premise, he introduces two “modes” of representing time in literature: “allegory” (which partially includes symbolism) and “irony.”

Toward the end of the essay, de Man writes:

The dialectical play between the two modes [allegory and irony]…make up what is called literary history.

Deconstructionist jargon like “play” aside, de Man’s declaration is downright old-fashioned. Here is an account of literary history premised on literary analysis. When I read this in graduate school, it felt ballsy and refreshing. No hedging, no contextualizing, no whining, no kidding around, just straight-up confidence in his own system: “this is literary history.” I was floored.

So as deconstructionists went, de Man was a straight shooter, on the page if not in his life (perhaps he viewed his two wives as “two modes of dialectical play”). Unlike Derrida or even Barthes, de Man wasn’t messing with me, wasn’t trying to fool or trick me. Even if he believed (along with his intellectual kin) that “everything was a text,” he generally confined himself to literary or rhetorical analyses. I continue to find him useful, which I can’t say about most of his contemporaries. De Man’s work represented deconstruction at its best.

But try as I may, I can’t help but detect a bit of the Nazi in it all: the exegetical totality, the confusion (or manipulation) of text and meaning, the all-encompassing instability. And yeah, the biography.

It matters little whether a good physicist was a Nazi, because Nazism probably didn’t contaminate his work. You can kill the Nazi physicist or hire the Nazi physicist, but the physics itself will contain no traces of Nazism. This is slightly less true of a Nazi biologist, who may have covertly adopted Nazi theories of race. For a philosopher, however, the possibility of cross-contamination is so great as to warrant quarantine. Indignant defenders of de Man who separate his scholarship from his anti-Semitic writings are denying this obvious reality. (Derrida’s defense of de Man was better than most because it allowed for cross-contamination[4].)

De Man was a crook and a cheat and a Nazi collaborator. For most literary scholars today, de Man is interesting but irrelevant: deconstruction happened thirty years ago. It had a good run and probably outlasted its expiration date. Meanwhile, those who, like me, find de Man’s insights useful can argue that his political beliefs are functionally irrelevant to his scholarly work. A Chinese wall exists between the Nazism and “The Rhetoric of Temporality.” To reject or to deny? Neither option is good, and Paul de Man isn’t going anywhere, as Barish’s biography proves.

Literary scholars don’t sever Barthes or Foucault from their social, historical, and ideological roots. De Man should be no exception. It’s naive to believe that, before de Man, the humanities weren’t already poisoned by the ugliest ideologies, but it’s impossible to ignore his collaboration with Nazism. So what would it mean to accept both the scholarship and the potential evil attached to it? To not only refuse to let ourselves off the hook, but to actively get on the hook? De Man offered a compelling and useful explanation of literary language, and he also used the written word to collaborate with Nazis. Does deconstruction have Nazi roots? I don’t trust anyone who says “no” reflexively.

C’mon, let’s not be dismissive or defensive or squeamish! Let’s not be afraid of a little blood on our hands!

———

[1] You might think you know what “deconstruction” is, and you’re probably wrong. But you’re also probably correct, more or less. From a literary standpoint, deconstruction holds that a poem (or whatever) consists of oppositions that differ and defer to each other in a process Derrida called “play.” This play both creates and subverts the meaning of the poem (or whatever). For de Man, this meant that a poem (or whatever) is self-interpreting.

[2] Derrida’s obituary was a minor literary event in humanities programs. I’ve seen it assigned on English syllabi, as an instance of productive misreading or something.

[3] My favorite student of de Man is the late Barbara Johnson, who applied his theories of literary language with intelligence and clarity to topics ranging from Melville’s Billy Budd to the rhetoric of abortion. Her 1994 book The Wake of Deconstruction describes the de Man scandal.

[4] Derrida, who as a Jewish child was persecuted by the Vichy French government, defended his friend in typical Derridean fashion: he tweaked the anti-Semitic language and found differing oppositions. The full defense is not available online as far as I can tell, but its substance can be gleaned from Jon Wiener’s intelligent, and disapproving, analysis.