Kamikaze Attacks by the Numbers: A Statistical Analysis of Japan’s Wartime Strategy

Note: This is a guest post by Dave Hackerson.

Kamikaze. One of the defining symbols of the vicious struggle between the US and Japan in the Pacific War, the word always conjures up a conflicting mix of emotions inside me. The very word “kamikaze” has become a synonym for “suicide attack” in the English language. The way WW2 was taught in school (in America) pretty much left us with the impression that kamikaze attacks were a standard part of Japanese Imperial Army and Navy strategy throughout the entire war. I was therefore surprised to learn only recently that the Japanese first employed this strategy on October 25, 1944, during the Battle of Leyte Gulf. The Mainichi Shinbun here in Japan put together a wonderful collection to commemorate the 70th anniversary of this strategy. It not only features data that has been debated and analyzed from a number of angles, but also provides statistical evidence that underscores the utter failure of the strategy. The title of the article is “Did the divine wind really blow? ‘Special strikes’ claim lives of 4000,” and it is the second part of a three-part series called “Numbers tell a tale—Looking at the Pacific War through data”. The first part was posted in mid-August, and the third and final part is due to be put online in December. The original Japanese version of this special can be accessed here. The slides I refer to are numbered “1” to “5” at the very bottom of each page; the current slide is the one highlighted in blue.

In this post, I will provide an overview of the information on this site while occasionally inserting my own analysis and translations of select quotes. I hope it helps paint a clearer picture of a truly flawed strategy that is still not properly understood on either side of the Pacific.

Slide 1

True to the series name, this article wastes no time in hitting you with some pure, raw data. The first pie graph (11%, 89%) indicates the actual success rate of kamikaze attacks: only 11% were successful, while the remaining 89% ended in failure. This means that merely 1 in 9 planes actually hit its target. After introducing these figures, the article focuses on the initial execution of the kamikaze strategy during the Battle of Leyte Gulf on October 25, 1944. Five planes hit and sank the escort carrier USS St. Lo, while other planes succeeded in damaging five other ships. The estimated success rate in this battle was 27%.

The article then puts this percentage into context by comparing it to the success rate of dive bomb attacks (non-kamikaze) in other battles. Here are the figures:

Pearl Harbor (1941): 58.5%

Battle of Ceylon (1942): 89% (percentage of hits on the British carrier HMS Hermes)

Coral Sea (1942): 53% (percentage of hits on the USS Lexington, which was severely damaged)

Looking at these figures, it’s clear that the kamikaze attacks were not that effective. The Japanese navy was overly optimistic and believed they would be fairly successful, but the US quickly adapted, and by the end of the war the success rate had fallen to 7.9% (Battle of Okinawa). Even the Daihon’ei (the Imperial General Headquarters of the Japanese forces) admitted that the attacks had little to no effect.

The next part of the article is titled “War of attrition: ‘Certain death’ strategy that claims both aircraft and pilots”. It discusses why the hit rate of the Japanese air forces dropped so dramatically as the war wore on. Here are the three reasons cited:

• Decline in the flying abilities of the fighter pilots
• Deteriorating performance of aircraft and materials
• Improvements in American countermeasures

After introducing these reasons, the article makes a very important statement. “Kamikaze attacks meant that you lost both the aircraft and pilots. This not only wore down Japan’s fighting strength, but essentially destroyed the nation’s capacity to actually wage war in the future.”

The article then turns its attention back to the first kamikaze attack and the pilot chosen to lead it. Lieutenant Yukio Seki was a graduate of the naval academy and a proven veteran. He died crashing his plane into the USS St. Lo and was later enshrined as a “軍神 (gun-shin, or military god)” at Yasukuni Shrine. The article seems to imply that this “honor” is actually an injustice to Seki’s memory in light of what he said before heading into battle: “I’m fully confident that I can drop a bomb on any aircraft carrier during a normal attack. Japan’s screwed if it’s ordering a pilot like me to smash his craft into an enemy vessel.” These words stand in stark contrast to the quotes and images on the cover of Shashin Shuuhou (Photographic Weekly, a morale-boosting propaganda magazine published from the mid-1930s until mid-1945) that accompanies this post. The main quote, shown to the left of Lieutenant Seki, states: “A single vessel strikes true in defense of the land of the gods. Oh, our Kamikaze (Divine Wind) Special Strike Force. Your fidelity will shine radiantly for the next 10,000 generations.” The quote in the bottom right further contradicts the statement Seki actually made before he took off: “Lieutenant Seki, commander of the Shikijima Battalion that served as the First Kamikaze Special Strike Force Battalion to be sent out on an all-out bomb strike. Immediately before heading into battle, Lieutenant Seki is said to have rallied his troops with the following cry: ‘Men, we are not members of a bomber squad. We are the bombs. Now up and away with me!’”

Slide 2

The article then shifts its attention to the heavy losses among the ranks of Japanese pilots. The Japanese navy started the war with 7,000 well-trained pilots; by 1944, over 3,900 of them had died in battle. In the early days of the war the Allies estimated that Japanese pilots held a 6-to-1 advantage over Allied pilots, but by April 1943 the ratio was even at 1-to-1. Japan simply could not replace the pilots it lost at a sufficient pace, so it decided to compensate by “short-tracking” their training. The pie graph here is really telling.

Rank A pilots (over 6 months of flight training): 16.3%

Rank B pilots (4 to 6 months of flight training): 14.4%

Rank C pilots (approx. 3 months of flight training): 25%

Rank D pilots (less than 3 months, or in some cases only flight theory): 44.3%

These figures are a breakdown of the pilots sent to fight in the Battle of Okinawa in 1945. The article then cites three factors believed to have set off a vicious cycle for the Japanese navy:

1. Japan compensated for lost air force manpower by short-tracking training and sending raw pilots straight into the fight.
2. Raw pilots had a low chance of returning from battle and most likely failed to influence its course.
3. Losses only increased, while the ranks of pilots continued to thin.

The authors of the article squarely place the blame on the shoulders of the navy’s upper brass. Personally, I think the Japanese navy would not have sunk to such desperate measures had Admiral Yamamoto not been shot down and killed in 1943. He would have found a way to prolong the fight and preserve Japan’s precious few resources. One could argue that the U.S. decision to shoot down Yamamoto and take him out of the picture eliminated the voice of reason within the Japanese ranks, and actually paved the way for this strategy to be adopted.

The article implies that Japan was insane for throwing away what little resources it had. When the enemy has 10 times the amount of resources, you do everything you can to hold onto what you have. The Japanese brass seemingly defied this logic by not only wasting aircraft, but needlessly wasting human lives. But why would they do that? Japanese writer Kazutoshi Hando, a man who has written extensively about the Showa Period and WW2, provides some valuable insight into how these men thought. “The very concept of logistics was either given little thought or entirely ignored by the Japanese military… After all, in the eyes of the Army’s General Staff Office and the commissioned officers in the navy, troops were ultimately viewed as nothing more than resources that could be gathered for a mere 1 sen 5 ri (the price of a postcard at the time). When they formulated a strategy, they flung the troops out to the front with 6 go (about 900 grams) of rice and a 25-kilogram pack. If you ran out of food, you were told to forage for your own supplies wherever you were. Surrender wasn’t an option (it was actually prohibited under the Japanese military code), so if you found yourself in a losing battle, the only option was gyokusai (a figurative term coined by the Japanese military which loosely translates as “beautiful death”). They didn’t give any thought whatsoever to potential survivors.”

Not only were the majority of the pilots deployed in the latter days of the war vastly inferior, but the aircraft they flew were also no match for those of the Allied forces. In addition to fighters and bombers, reconnaissance and even practice planes were deployed! As the war wore on, Japan faced these problems:

• A lack of skilled engineers, which degraded manufacturing and production quality and, with it, the performance of new aircraft.
• Use of low octane, poor quality fuel.

In spite of all these problems, the Japanese armed forces went ahead with this strategy. The navy asked for aircraft that would save on materials, be easy to fly in training, and conserve fuel. Unfortunately, the end product was of inferior quality compared to the aircraft produced in the early days of the war. Combined with poorly trained pilots, it was simply a disaster waiting to happen.

Slide 3

This slide focuses on the performance capacity of the aircraft. There is plenty of info on plane specs, but as you can see, by the end of the war Allied aircraft were simply far superior to Japanese planes in every respect. The kanji 零 in the plane name 零式艦上戦闘機　21型 means “rei,” or “zero” (Type Zero Carrier Fighter, Model 21). Click and hold the mouse cursor on the plane to rotate the view. The specs of the Zero changed very little during the war. The first generation of Zero fighters (1939) carried a Sakae 21 engine, which boasted 950 hp. The type produced after 1943 was fitted with the Sakae 52 engine that delivered 1,100 hp, an improvement of only 150 hp, and a top speed of 624 km/h. A seasoned pilot would have had his hands full going up against the likes of the US Navy Hellcats and Army Air Forces P-51s, but with the green pilots the Japanese forces sent up, it was clear they no longer cared about fighting for air superiority.

Slide 4

I won’t get into the details here, but this slide reveals how quickly the US adapted to the kamikaze attacks. Surprising as this may sound, these attacks failed to sink any major ships or carriers. This is because the US used radar effectively to scramble fighters to meet the Japanese attacks. In addition, the US had damage control units on board each ship, so even if a kamikaze pilot broke through, the damage could be contained right away, enabling the ship to stay in the fight. While many ships were damaged, fewer than 50 were actually sunk. The chart here is quite telling. The red bars indicate ships sunk, and the yellow bars indicate ships damaged but not sunk.

Slide 5

This slide takes an indirect jab at people who attempt to beautify the sacrifices made by kamikaze pilots. The vast majority did not want to participate in the attacks. Saburo Sakai, one of Japan’s ace pilots, commented on how the strategy lowered morale. “Morale sank,” he said. “Even if you have only a 10 percent chance of coming back, you’ll fight hard for that chance. The guys upstairs (the upper brass) claim morale went up. That’s a flat-out lie.”

There were even instances of NCOs ordering their men not to carry out kamikaze attacks, instructing them instead to conduct “normal attacks”. In an interview linked to this article, the non-fiction writer Masayasu Hosaka speaks about reading the memoirs of someone who witnessed the pilots flying off on their kamikaze attacks. This witness states that the radios of all the aircraft were kept on, so everything the pilots said could be heard, including the statements they uttered right before they met their end. Among the things kamikaze pilots said: “F*ing navy aholes!”, “Oh Mother!”, or the names of their wives or sweethearts. It seems that very few shouted “Banzai Japanese Empire” (“banzai” means “10,000 years”).

Returning to the question originally posed in the title of the article, it is abundantly clear that the divine wind never blew. There wasn’t even much of a breeze. To add my own two cents, the kamikaze attacks were a great propaganda tool for the US, for they allowed us to portray the enemy as fanatical and beyond reason. This made it easy to justify the atomic bombings, especially after the war, because the kamikaze attacks seemingly “proved” that only extreme measures would bring Japan to the negotiating table. The propaganda twist on kamikaze tactics was carried over into post-war education in the US, and led many of us (or at least myself when I was a kid) to believe that Japanese soldiers were possessed of an unswerving conviction to fight to the death.

In closing, I once again borrow the words of Kazutoshi Hando. He cuts straight to the chase:

The complete irresponsibility and stupidity of the nation’s military leaders drove the troops to their deaths. The same can be said for the kamikaze special strike force strategy. They took advantage of the unadulterated feelings of the pilots. People claim it’s a form of ‘Japanese aesthetics’, but that’s pure nonsense. The General Staff Office built it up as some grand strategy when in actuality they sat at their desks merely playing with their pencils wondering ‘how many planes can we send out today?’ This lot can never be forgiven.

14 Reasons Susan Sontag Invented Buzzfeed!

By Seth Studer

If you’re looking for a progenitor of our list-infested social media, you could do worse than return to one of the most prominent and self-conscious public intellectuals of the last half century. The Los Angeles Review of Books just published an excellent article by Jeremy Schmidt and Jacquelyn Ardam on Susan Sontag’s private hard drives, the contents of which have recently been analyzed and archived by UCLA. Nude photos have yet to circulate through shadowy digital networks (probably because Sontag herself made them readily available – Google Image, if you like), and most of the hard drives’ content is pretty mundane. But is that going to stop humanists from drawing broad socio-cultural conclusions from it?

Is the Pope Catholic?

Sontag, whose work is too accessible and whose analyses are too wide-ranging for serious theory-heads, has enjoyed a renaissance since her death, not as a critic but as a historical figure. She’s one of those authors now, like Marshall McLuhan or Norman Mailer: a one-time cultural institution become primary text. A period marker. You don’t take them seriously, but you take the fact of them seriously.

Sontag was also notable for her liberal use of lists in her essays.

“The archive,” meanwhile, has been an obsession in the humanities since Foucault arrived on these shores in the eighties, but in the new millennium, this obsession has turned far more empirical, more attuned to materiality, minutia, ephemera, and marginalia. The frequently invoked but still inchoate field of “digital humanities” was founded in part to describe the work of digitizing all this…stuff. Hard drives are making this work all the more interesting, because they arrive in the archive pre-digitized. Schmidt and Ardam write:

All archival labor negotiates the twin responsibilities of preservation and access. The UCLA archivists hope to provide researchers with an opportunity to encounter the old-school, non-digital portion of the Sontag collection in something close to its original order and form, but while processing that collection they remove paper clips (problem: rust) and rubber bands (problems: degradation, stickiness, stains) from Sontag’s stacks of papers, and add triangular plastic clips, manila folders, storage boxes, and metadata. They know that “original order” is something of a fantasy: in archival theory, that phrase generally signifies the state of the collection at the moment of donation, but that state itself is often open to interpretation.

Microsoft Word docs, emails, jpegs, and MP3s add a whole slew of new decisions to this delicate balancing act. The archivist must wrangle these sorts of files into usable formats by addressing problems of outdated hardware and software, proliferating versions of documents, and the ease with which such files change and update on their own. A key tool in the War on Flux sounds a bit like a comic-book villain: Deep Freeze. Through a combination of hardware and software interventions, the Deep Freeze program preserves (at the binary level of 0’s and 1’s) a particular “desired configuration” in order to maintain the authenticity and preservation of data.

Coincidentally, I spent much of this morning delving into my own hard drive, which contains documents from five previous hard drives, stored in folders titled “Old Stuff” which themselves contain more folders from older hard drives, also titled “Old Stuff.” The “stuff” is poorly organized: drafts of dissertation chapters, half-written essays, photos, untold numbers of .jpgs from the Internet that, for reasons usually obscure now, prompted me to click “Save Image As….” Apparently Sontag’s hard drives were much the same. But Deep Freeze managed to edit the chaos down to a single IBM laptop, available for perusal by scholars and Sontag junkies. Schmidt and Ardam reflect on the end product:

Sontag is — serendipitously, it seems — an ideal subject for exploring the new horizon of the born-digital archive, for the tension between preservation and flux that the electronic archive renders visible is anticipated in Sontag’s own writing. Any Sontag lover knows that the author was an inveterate list-maker. Her journals…are filled with lists, her best-known essay, “Notes on ‘Camp’” (1964), takes the form of a list, and now we know that her computer was filled with lists as well: of movies to see, chores to do, books to re-read. In 1967, the young Sontag explains what she calls her “compulsion to make lists” in her diary. She writes that by making lists, “I perceive value, I confer value, I create value, I even create — or guarantee — existence.”

As reviewers are fond of noting, the list emerges from Sontag’s diaries as the author’s signature form. … The result of her “compulsion” not just to inventory but to reduce the world to a collection of scrutable parts, the list, Sontag’s archive makes clear, is always unstable, always ready to be added to or subtracted from. The list is a form of flux.

The lists that populate Sontag’s digital archive range from the short to the wonderfully massive. In one, Sontag — always the connoisseur — lists not her favorite drinks, but the “best” ones. The best dry white wines, the best tequilas. (She includes a note that Patrón is pronounced “with a long o.”) More tantalizing is a folder labeled “Word Hoard,” which contains three long lists of single words with occasional annotations. “Adjectives” is 162 pages, “Nouns” is 54 pages, and “Verbs” is 31 pages. Here, Sontag would seem to be a connoisseur of language. But are these words to use in her writing? Words not to use? Fun words? Bad words? New words? What do “rufous,” “rubbery,” “ineluctable,” “horny,” “hoydenish,” and “zany” have in common, other than that they populate her 162-page list of adjectives? … [T]he Sontag laptop is filled with lists of movies in the form of similar but not identical documents with labels such as “150 Films,” “200 Films,” and “250 Films.” The titles are not quite accurate. “150 Films” contains only 110 entries, while “250 Films” is a list of 209. It appears that Sontag added to, deleted from, rearranged, and saved these lists under different titles over the course of a decade.

“Faced with multiple copies of similar lists,” continue Schmidt and Ardam, “we’re tempted to read meaning into their differences: why does Sontag keep changing the place of Godard’s Passion? How should we read the mitosis of ‘250 Films’ into subcategories (films by nationality, films of ‘moral transformation’)? We know that Sontag was a cinephile; what if anything do these ever-proliferating Word documents tell us about her that we didn’t already know?” The last question hits a nerve for both academic humanists and the culture at large (Sontag’s dual audiences).

Through much of the past 15 years, literary scholarship could feel like stamp collecting. For a while, the field of Victorian literary studies resembled the tinkering, amateurish, bric-a-brac style of Victorian culture itself, a new bit of allegedly consequential ephemera in every issue of every journal. Pre-digitized archives offer a new twist on this material. Schmidt and Ardam: “The born-digital archive asks us to interpret not smudges and cross-outs but many, many copies of almost-the-same-thing.” This type of scholarship provides a strong empirical base for broader claims (the kind Sontag favored), but the base threatens to support only a single, towering column, ornate but structurally superfluous. Even good humanist scholarship – the gold standard in my own field remains Mark McGurl’s 2009 The Program Era – can begin to feel like an Apollonian gasket: it contains elaborate intellectual gyrations but never quite extends beyond its own circle. (This did not happen in Victorian studies, by the way; as usual, they remain at the methodological cutting edge of literary studies, pioneering cross-disciplinary approaches to reading, reviving and revising the best of old theories.) My least favorite sentence in any literary study is the one in which the author disclaims generalizability and discourages attaching any broader significance or application to the study. This is one reason why literary theory courses not only offer no stable definition of “literature” (as the E.O. Wilsons of the world would have us do), they frequently fail to introduce students to the many tentative or working definitions from the long history of literary criticism. (We should at least offer our students a list!)

In short, when faced with the question, “What do we do with all this…stuff?” or “What’s the point of all this?”, literary scholars all-too-often have little to say. It’s not that a lack of consensus exists; it’s an actual lack of answers. Increasingly, and encouragingly, one hears that a broader application of the empiricist tendency is the next horizon in literary studies. (How such an application will fit into the increasingly narrow scope of the American university is an altogether different and more vexing problem.)

Sontag’s obsession with lists resonates more directly with the culture at large. The Onion’s spin-off site ClickHole is the apotheosis of post-Facebook Internet culture. Its genius is not for parody but for distillation. The authors at ClickHole strip the substance of clickbait – attention-grabbing headlines, taxonomic quizzes, and endless lists – to the bone of its essential logic. This logic is twofold. All effective clickbait relies on the narcissism of the reader to bait the hook and banal summaries of basic truths once the catch is secure. The structure of “8 Ways Your Life Is Like Harry Potter” would differ little from “8 Ways Your Life Isn’t Like Harry Potter.” A list, like a personality quiz, is especially effective as clickbait because it condenses a complex but recognizable reality into an index of accessible particularities. “Sontag’s lists are both summary and sprawl,” write Schmidt and Ardam, and much the same could be said of the lists endlessly churned out by Buzzfeed, which constitute both a structure of knowledge and a style of knowing to which Sontag herself made significant contributions. Her best writing offered the content of scholarly discourse in a structure and style that not only eschewed the conventions of academic prose, but encouraged reading practices in which readers actively organize, index, and codify their experience – or even their identity – vis-à-vis whatever the topic may be. Such is the power of lists. This power precedes Sontag, of course. But she was a master practitioner, aware of the list’s potential in the new century, when reading practices would become increasingly democratic and participatory (and accrue all the pitfalls and dangers of democracy and participation). If you don’t think Buzzfeed is aware of that, you aren’t giving them enough credit.

Against Neil deGrasse Tyson: a Longer Polemic

By Seth Studer

In her recent Atlantic review of two new books on atheism, Emma Green brilliantly demarcates what is missing from the now decade-long insurgency of anti-ideological atheism. I use the term “anti-ideological atheism” instead of “neo-atheism” or “new atheism” or the obnoxious, self-applied moniker “noes” because opposition to ideology – to ideational constructions – is one of the major recurring threads among these varied atheist identities (a frightening mixture of elitism and populism is another). Green illustrates this point when she notes the incongruity between Peter Watson’s new history of post-Enlightenment atheism, Age of Atheists, and the kind of atheism most vocally espoused in the 21st century. The central figure in Watson’s study, Friedrich Nietzsche, is almost never cited by Richard Dawkins or Sam Harris or Neil deGrasse Tyson. Nor, for that matter, are Nietzsche’s atheistic precursors or his atheistic descendants…all diverse in thought, all of whom would have been essential reading for any atheist prior to, well, now.

The most famous atheist, the one whose most famous quote – GOD IS DEAD – you scrawled with a Sharpie on the inside door of your junior high locker, is almost persona non grata among our most prominent living atheists. His near-contemporary, Charles Darwin (hardly anyone’s idea of a model atheist), is the belle of the bellicose non-believer’s ball.

Green also notes that the other famous 19th-century atheist – Karl Marx, whose account of religious belief vis-à-vis human consciousness is still convincing, at least more so than Nietzsche’s – likewise goes uncited by our popular atheists. The reason may be simple: invocations of Marx don’t score popularity points anymore, and the business of anti-ideological atheism is nothing if not a business.

But there is, I believe, a larger reason for the absence of Nietzsche, Marx, and almost all other important atheists from today’s anti-ideological atheism. As fellow Jilter Graham Peterson recently said to me, these popular atheists need a dose of humanities: liberal inquiry and a sense that truth is hard, not dispensable in easy little bits like Pez candies. I would expand on that: they need a more dynamic discursivity, they need more contentiousness, they need more classical, humanist-style debate. They need the kind of thinking that frequently accompanies or produces ideology.

But of course, most of them don’t want that. They resist Nietzsche’s ideological critiques. They resist Marx, who, despite his inherent materialism, is more systematically ideological than, say, Darwin. Sigmund Freud (who dedicated an entire tract to atheism and who is central to its 20th-century development) is never mentioned, nor are a host of other names.

And they do not invite new critiques – except, apparently, from Young Earth Creationists.

The title of Green’s review is pitch perfect: “The Intellectual Snobbery of Conspicuous Atheism: Beyond the argument that faith in God is irrational—and therefore illegitimate.” Contrary to what Richard Dawkins and others might claim, atheists are not a persecuted minority in the West (any group consisting mostly of white men is always eager to squeeze and contort its way into “persecuted minority” status, even as actual persecuted minorities struggle to push out). Anti-ideological atheism is declared conspicuously, a badge of honor and a sign of intellect. Green quotes Adam Gopnik, who introduces the nauseating term “noes”:

What the noes, whatever their numbers, really have now … is a monopoly on legitimate forms of knowledge about the natural world. They have this monopoly for the same reason that computer manufacturers have an edge over crystal-ball makers: The advantages of having an actual explanation of things and processes are self-evident.

In this respect, the “noes” have “an actual explanation of things” in greater abundance than did Nietzsche or Marx or (especially) the atheists of antiquity. In this respect, the atheists of yore and religious believers have more in common with each other than with the “noes” of today.

In my last post, I shared my thoughts about the meteoric rise of Neil deGrasse Tyson (do meteors rise? I’m sure deGrasse Tyson would have something to say about that bit of rhetorical infactitude). It may seem unfair to pick on deGrasse Tyson when, in reality, I’m bemoaning a phenomenon that began back when George W. Bush used vaguely messianic Methodist language to frame the invasion of Iraq, an event that, whatever you think of its initial rationalizations, was poorly executed, quickly turned to shit, and set the “War on Terror” back at least a decade. In or around 2004, Richard Dawkins (who is still the author of the best popular overview of natural history ever written) realized that conditions existed for a profitable career shift.

Widespread discontent with politico-religious language in the United States – where right-wing militarists decried the brand of fundamentalist Islam that obliterated lower Manhattan and anti-war leftists decried the (pacifist-by-comparison) brand of fundamentalist Christianity that influenced U.S. policy – coincided with fear of religious extremism in Europe, where the vexed term “Islamophobia” retained some usefulness: legitimate anxieties about theocratic terrorism (e.g., violent anti-Western responses to the deliberately provocative Mohammad cartoons and then the public slaughter of Theo van Gogh) mingled with old-fashioned European xenophobia, which was never a perfect analogue to American xenophobia. And between the U.S. and Europe lies England, where political and public responses to Islamic terrorism less often involved blustery American gun-slinging or shrill continental nativism than stern appeals to “common sense.” Since the collapse of British colonialism, intellectuals in England are less apt to use the term civilization than are their cousins across the Channel or their cousins across the Pond (where the term has been historically deployed by cultural warriors, à la Allan Bloom, in order to give anti-colonial leftists the willies).

The term civilized, on the other hand, is still relevant in English public discourse: not with regard to other societies, but to English society. The concept of civilized discourse (or civilised, if you will) doesn’t seem to carry the same ideological freight as civilization. But when Dawkins mocks post-positivist socio-humanist* analyses of, say, indigenous Amazonian cultures who explain natural phenomena (e.g., how the jaguar got its spots) with traditional tales, his arguments carry the epistemological heft of a suburban Thatcherite scanning his daughter’s contemporary philosophy textbook, throwing his hands in the air, and exclaiming “Oh come on!” In other words, Dawkins belongs to the long line of British “common sense” thinkers. Born in Kenya, raised partly in colonial Africa, and a fan of Kipling, Dawkins has been criticized for the colonial bent of his thought.

And there’s something to be said for common sense, even common sense colonialism; George Orwell, of all people, joined Rudyard Kipling (one of the most misunderstood writers in the English canon) in defending British colonialism in India on the reasonable (if depressing) grounds that, had the English let India be, the Russians would have colonized the subcontinent. This hardly excuses British crimes against India and its people, but even a cursory overview of Russian colonial atrocities forces one to sigh a very troubled and uncomfortable sigh of – what, relief? – that the British Raj was the guilty party.

But common sense is not fact, much less knowledge, and Dawkins has made a career of playing fast and loose with these concepts. In Unweaving the Rainbow (1998), Dawkins defended science not against the pious but against the epistemological excesses of cultural studies. In one chapter, he wrote that an Amazonian tribesman who is convinced that airplanes are fueled by magic (Dawkins’ examples often play off colonial tropes) and the socio-humanist (usually an American cultural studies professor or graduate student in English whose dress and hygiene are dubious and who writes in incomprehensible jargon) who respects the Amazonian’s conviction are both reprehensible, especially the professor, who is an enabler: he could give the ignorant native a cursory lesson in physics, but instead paints a scholarly veneer over so much tribal mumbo-jumbo. Why not disabuse the native of his false notions and explain the real source of wonder: that beautiful physics can explain how people fly!

Despite its best efforts, Unweaving the Rainbow was Dawkins’ first foray into the “Debbie Downer” genre of popular science writing. This genre pits the explanatory power of “scientific knowledge” (more about that term in a moment) against religion, superstition, homeopathy, most of Western philosophy, and pretty much any knowledge acquired through non-quantitative methods or unverified by quantitative ones.

The “Debbie Downer” genre can be useful, especially when turned on the practice of science itself: Dawkins and his allies have successfully debunked the dogmatism that led Stephen Jay Gould’s career astray. The atrocities of Nazi and Soviet science were exposed and explained with both rigorous science and common sense. The genre can also be used to wildly miss the point of things. I have friends who are ardent Calvinists or ex-Calvinists, who are incapable of reading Paul’s epistles without a Calvinist interpretation. They read Paul, but all they see is Calvinism. Likewise with fundamentalists and anti-ideological atheists who read Genesis but only see cosmology. Yet Paul was not a Calvinist, and Genesis is not cosmology. In some sense, the same principle applies to deGrasse Tyson and Gravity. Is this a question of knowing too much or thinking too little?

In Unweaving the Rainbow, Dawkins confronts the charge that science takes all the fun and beauty out of the world just by, y’know, ‘splainin’ it. Somewhat comically, the book’s title literalizes an instance of poetic language, a practice common among Dawkins’ bêtes noires: religious fundamentalists. John Keats’ playful exasperation that “charms fly/ at the touch of cold philosophy” and that the natural sciences (still embryonic in Keats’ time) “unweave the rainbow,” reducing it to “the dull catalogue of common things,” is a beautifully articulated representation of a well-worn human experience, one that requires appreciation more than rebuttal. But for Dawkins, the poem demands rebuttal, and not a rebuttal that distinguishes between the uses and functions of poetic language. Unweaving the Rainbow is a treatise insisting that, dammit, science makes the world more beautiful, not the other way round.

And Dawkins is correct. After reading his marvelous Ancestor’s Tale, I felt a profound kinship with every toad I encountered on the sidewalk and every grasshopper that attached itself to my arm, six cousinly feet twisting my skin uncomfortably. Between Unweaving the Rainbow and Ancestor’s Tale, Dawkins wrote A Devil’s Chaplain, a haphazardly organized collection of Debbie Downer essays that is probably best understood as the direct ancestor of Dawkins’ most successful book, The God Delusion. The book represented a specific cultural moment, described above, when everyone was eager to read why God sucked. I don’t need to rehearse the narrative or the players (something about four horsemen, cognitive science, an obnoxious and inappropriate use of the prefix “neo”). Even The God Delusion‘s harshest critics praised Dawkins for capturing the zeitgeist in a bottle. But the most prominent and widely-cited negative review, by Marxist literary theorist Terry Eagleton, did not. Eagleton captured Dawkins, his personality and his project, to near perfection in the London Review of Books:

[Dawkins’ views] are not just the views of an enraged atheist. They are the opinions of a readily identifiable kind of English middle-class liberal rationalist. Reading Dawkins, who occasionally writes as though ‘Thou still unravish’d bride of quietness’ is a mighty funny way to describe a Grecian urn, one can be reasonably certain that he would not be Europe’s greatest enthusiast for Foucault, psychoanalysis, agitprop, Dadaism, anarchism or separatist feminism. All of these phenomena, one imagines, would be as distasteful to his brisk, bloodless rationality as the virgin birth. Yet one can of course be an atheist and a fervent fan of them all. His God-hating, then, is by no means simply the view of a scientist admirably cleansed of prejudice. It belongs to a specific cultural context. One would not expect to muster many votes for either anarchism or the virgin birth in North Oxford. (I should point out that I use the term North Oxford in an ideological rather than geographical sense. Dawkins may be relieved to know that I don’t actually know where he lives.)

Eagleton’s Marxist ad hominem is amusing: he reduces Dawkins’ own self-proclaimed materialism to his class. Dawkins is a very, very identifiable type. I’m not sure whether Eagleton knew, when he quoted Keats, that Dawkins had written a book whose title misread – or at least misappropriated – the most flowery of Romantic poets.

Eagleton’s more substantial complaint – that there are many kinds of atheists, not all of whom derive their views from a fetishized notion of the natural sciences’ explanatory powers – was echoed in many other reviews. It was even the basis for a two-part episode of South Park.

Another common complaint: The God Delusion engaged with religious faith very narrowly, responding to only the most extreme fundamentalist interpretations of scripture and dogma. Dawkins hadn’t boned up on his Tillich. He’s a scientist stumbling clumsily through the humanities, unaware that his most basic criticisms of faith have been taken seriously by religious people since the Middle Ages. Again, Eagleton:

What, one wonders, are Dawkins’s views on the epistemological differences between Aquinas and Duns Scotus? Has he read Eriugena on subjectivity, Rahner on grace or Moltmann on hope? Has he even heard of them? Or does he imagine like a bumptious young barrister that you can defeat the opposition while being complacently ignorant of its toughest case? … As far as theology goes, Dawkins has an enormous amount in common with Ian Paisley and American TV evangelists. Both parties agree pretty much on what religion is; it’s just that Dawkins rejects it while Oral Roberts and his unctuous tribe grow fat on it.

More troubling than his exclusion of Eriugena and de facto collusion with Oral Roberts is his exclusion of so many other atheists. The God Delusion was published before Christopher Hitchens’ God is Not Great, a very bad book that nevertheless engaged with atheism per se, drawing from an intellectual history that extended from Lucretius to Spinoza and Thomas Paine (a list Hitchens never tired of reciting on cable news shows, grinning slyly at the thought of pot-bellied viewers on their sofas, scratching their heads: I think I’ve heard of that Payne guy, but who in the Sam Hill is Lew Crishus?).

If Dawkins was a scientist posing as a humanist – or, more correctly, a scientist trying to sell ideology as scientific fact – then Hitchens was a humanist posing as someone with a basic understanding of science. In reality, Hitchens knew the Bible, had spent his career admiring religious thinkers and religious poets. Near the end of the Hitchens v. Douglas Wilson documentary Collision, Hitchens recalls a conversation with Dawkins, during which Hitchens declared that, if given the power to wipe religious belief off the face of the earth, he wouldn’t do it. “Why not?!” shrieked Dawkins – Hitchens, repeating the anecdote to Wilson, does a killer imitation of Dawkins’ spine-tingling shriek. Hitchens has no answer for Dawkins. He simply can’t conceive of a world without at least one religious believer.

More on point, however, is the following passage from Eagleton’s review:

Dawkins considers that all faith is blind faith, and that Christian and Muslim children are brought up to believe unquestioningly. Not even the dim-witted clerics who knocked me about at grammar school thought that. For mainstream Christianity, reason, argument and honest doubt have always played an integral role in belief. (Where, given that he invites us at one point to question everything, is Dawkins’s own critique of science, objectivity, liberalism, atheism and the like?) Reason, to be sure, doesn’t go all the way down for believers, but it doesn’t for most sensitive, civilised non-religious types either. Even Richard Dawkins lives more by faith than by reason. We hold many beliefs that have no unimpeachably rational justification, but are nonetheless reasonable to entertain. Only positivists think that ‘rational’ means ‘scientific’. Dawkins rejects the surely reasonable case that science and religion are not in competition on the grounds that this insulates religion from rational inquiry. But this is a mistake: to claim that science and religion pose different questions to the world is not to suggest that if the bones of Jesus were discovered in Palestine, the pope should get himself down to the dole queue as fast as possible. It is rather to claim that while faith, rather like love, must involve factual knowledge, it is not reducible to it. For my claim to love you to be coherent, I must be able to explain what it is about you that justifies it; but my bank manager might agree with my dewy-eyed description of you without being in love with you himself.

Dawkins would no doubt balk at the notion that he take Eagleton’s advice and “critique” science. Science is self-critiquing, after all! Science is reasonable by its very structure. Science and reason are near synonyms in the anti-ideological atheist lexicon.

This, for me, is the most troubling aspect of Dawkins and deGrasse Tyson’s trendy, anti-ideological atheism.

Let us consider once more the subtitle of Emma Green’s Atlantic review: “the argument that faith in God is irrational—and therefore illegitimate.” Both Green and Eagleton observe what is perhaps the most troubling aspect of popular, anti-ideological atheism: it conflates terms like “reason,” “rationality,” “fact,” “science,” and “knowledge.” In fact, I believe Eagleton goes too far when he asserts that “only positivists think that ‘rational’ means ‘scientific.'” Many positivists can make the distinction. (Eagleton’s reflexive assertion to the contrary is merely a product of decades spent defending post-positivist thought to his fellow Marxists.)

The popularizers of anti-ideological atheism play very fast and loose with a specific set of words: “science,” “reason,” “(ir)rationality,” “knowledge,” “fact,” “truth,” and “information.” It is absolutely necessary to distinguish between these words. In many contexts, it is not “irrational” to object to scientifically produced knowledge, especially if you’re objecting to the implementation of that knowledge.

If I were a public intellectual with a large platform – that is, if I were Neil deGrasse Tyson – I’d go on a speaking tour. The tour’s only goal would be the definition of some basic terms, as they ought to be used by laypersons (obviously specialists will have slightly different definitions, and that’s okay). Information is data we glean from the world through our senses and technologies. Science is a method that uses information to test ideas and produce knowledge. Ideas are organized assumptions about the world. Ideas that are verifiable using scientific methods become knowledge. Reason is a system of organizing knowledge, which allows knowledge to be used for all sorts of great things: to determine a set of ethics, to decide the best shape of government, to demarcate reasonably accurate beliefs about the world, to guide us through daily decisions, etc. Rationality is reason with a French accent.

Facts are stubborn but undeniable things, some of them unveiled by the scientific method and others revealed through our senses/technologies, which help us glean information and confirm knowledge produced by the scientific method. Truth is the ontological status of reality, which makes it a very tricky thing to define and understand, and is therefore probably best passed over in silence…at least in casual conversations or book tours. True is an elastic adjective that allows us to describe the proximity of knowledge, ideas, and impressions to reality, as we understand it via science, knowledge, reason, and facts.

These definitions are not perfect, and I’m sure you and my fellow Jilters have problems with some/all of them. But I think they’re suitable for casual use. At the very least, they admit distinctions between concepts.

Anti-ideological atheists misuse these concepts for rhetorical purposes, and they encourage the public’s tendency to conflate them.

This is wrong.

When Neil deGrasse Tyson insists that “evolution is a fact,” he’s playing with rhetoric to make a political point. For too long, Creationists have conflated the scientific and popular definitions of the word “theory,” transmuting well-established and verifiable knowledge about life into speculation: Darwin’s theory of speciation was as reliable as a hopeful suitor’s theory of “why she isn’t returning my phone calls.”

But in both scientific and common English, theory is not an antonym of fact (sorry Creationists) and a theory cannot be a fact (as deGrasse Tyson well knows). A theory is established by facts. Richard Dawkins, Samuel Harris, Daniel Dennett, Neil DeGrasse Tyson, and Bill Nye have had countless opportunities to make these simple distinctions to the public; Christopher Hitchens possessed both the knowledge and rhetorical precision to explain the distinctions. But distinctions don’t pack much punch. Politically and ideologically, it’s better to affirm that “evolution is a fact,” just like gravity, and not allow the Creationists to keep slithering through their own linguistic sophistry. And just as explaining a joke drains its humor, debunking a slick sophistry invariably drains your authority. Better to bludgeon than to slice. And as anyone who has seen the ads or watched the first two episodes of his Cosmos knows, deGrasse Tyson is happy to bludgeon.

*By “socio-humanist,” I refer to scholars in the humanities (I use “humanist” as the humanities equivalent of “scientist”) and certain branches of the social sciences; I’m not referring to the broader category of post-Enlightenment “secular humanism,” within which Dawkins might count himself.

What The Fox Doesn’t Know?

There’s an emerging dustup between FiveThirtyEight‘s Nate Silver and The New Republic’s Leon Wieseltier. Since much of this is already boiling down to a re-hash of C.P. Snow’s “Two Cultures” debate, I’ll try to look at each man’s argument and then observe some strengths and flaws. TL;DR — both are talking past each other, one has some big flaws, and the other is missing the point.

First, Silver. Reading through Silver’s blog, I see two sorts of arguments being made from the philosophy of science that Silver (perhaps in the interest of readability) doesn’t fully explain — prediction and falsification. Silver sees the primary problem with journalism as a gap between the collection and organization of information on the one hand and explanation and generalization on the other. Silver’s idea of how to fix this gap is strongly bound up in the idea of producing knowledge that is both falsifiable and has good out-of-sample predictive qualities.

For example, they cite three factors they say were responsible for Mitt Romney’s decline in the polls in early mid-September: the comparatively inferior Republican convention, Romney’s response to the attacks in Benghazi, Libya, and Romney’s gaffe-filled trip to London. In fact, only one of these events had any real effect on the polls: the conventions, which often swing polls in one direction or another. (This does not require any advanced analysis — it’s obvious by looking at the polls immediately before and after each event.) Explanation is more difficult than description, especially if one demands some understanding of causality. … But while individual facts are rigorously scrutinized and checked for accuracy in traditional newsrooms, attempts to infer causality sometimes are not, even when they are eminently falsifiable.

Explanation is about why particular things occur, and these explanations should ideally be falsifiable. Notice that Silver does not necessarily say that all explanations are falsifiable. If he did, this would rule out large swaths of the hard sciences that rely on notions that are not directly falsifiable. He would also rule out the utility of heuristic understandings of phenomena where good data do not exist, or where the results of statistical meta-analysis are inconclusive and contradictory. Still, Silver seems to privilege explanations that are falsifiable — and, as I will later detail, glosses over some of the enormous problems with the conception of science that he mentions as a model for his site.

He later goes on to make a covering-law-esque argument that particular explanations should be evaluated for how well they scale, with the aim of finding useful general truths. He equates explanation and causality with the classical model of an explanandum to be explained and a set of premises that explain it. Silver says that a generalization must be tested by how well it predicts out of sample, and equates this to falsification in the absence of laboratory experiments. However, while Silver may have a point about prediction, there are some distinct nuances to how falsification has been considered in the philosophy of science.

The problem with Silver’s argument is that he glosses over just how hard it is to actually get rid of a theory. If you believe Imre Lakatos, then the hard core of a research program is itself unfalsifiable. If you subscribe to a coherentist view in the philosophy of science, you may believe (like Duhem and Quine) that a theory is not one thing but a web, and that one has to defeat both the core of a theory and its outlying components. You may, as per Feyerabend, believe that we cannot rise to a general model of science and that domain-specific principles rule. And this is to say nothing of the vast array of historical and sociological work on the ways in which science is actually practiced, which to some extent has some uncomfortable aspects in common with Silver’s critique of punditry as being driven by strong ideological priors.

Now, if we focus solely on the aspect of predictive accuracy, Silver seems to be on stronger ground. Given that it is so hard to really falsify a theory, and that it is also easy to rescue a theory from failures to predict, Milton Friedman made a much-maligned argument that theory itself is inherently tautological and that what matters is whether or not the theory accounts for things that haven’t been observed yet:

The ultimate goal of a positive science is the development of a “theory” or “hypothesis” that yields valid and meaningful (i.e., not truistic) predictions about phenomena not yet observed. Such a theory is, in general, a complex intermixture of two elements. In part, it is a “language” designed to promote “systematic and organized methods of reasoning.” In part, it is a body of substantive hypotheses designed to abstract essential features of complex reality. Viewed as a language, theory has no substantive content; it is a set of tautologies. Its function is to serve as a filing system for organizing empirical material and facilitating our understanding of it; and the criteria by which it is to be judged are those appropriate to a filing system. Are the categories clearly and precisely defined? Are they exhaustive? Do we know where to file each individual item, or is there considerable ambiguity? Is the system of headings and subheadings so designed that we can quickly find an item we want, or must we hunt from place to place? Are the items we shall want to consider jointly filed together? Does the filing system avoid elaborate cross-references?

Friedman in many ways bypasses the problem of falsification by noting that a theory’s internal consistency is not necessarily important because consistency can easily lapse into tautology:

A hypothesis is important if it “explains” much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions; it takes account of, and accounts for, none of the many other attendant circumstances, since its very success shows them to be irrelevant for the phenomena to be explained. To put this point less paradoxically, the relevant question to ask about the “assumptions” of a theory is not whether they are descriptively “realistic,” for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions. The two supposedly independent tests thus reduce to one test.

For Friedman, the issue is whether or not valid predictions follow from the minimal components of a theory that can approximate something of interest. This actually contradicts the Tetlock-like argument that Silver makes about the ideologically strong priors held by pundits. A pundit could believe any number of things that might seem patently ridiculous — what matters is whether those beliefs permit valid predictions. Silver might agree that this is true, and argue (as he has) that pundits should be open to revising their beliefs in light of failed predictions, updating their priors in a Bayesian fashion. While I would agree that this would be a Good Thing, it also shows Silver’s lack of understanding about the nature of punditry.

When Silver talks about strong priors and ideological beliefs, he’s in some ways paraphrasing Noah Smith’s now-infamous explanation of “derp” as unusually strong Bayesian belief states that resist posterior estimation. Silver and Smith are arguing that even math-averse pundits have implicit models of how the world works, and those models ought to be evaluated for predictive accuracy. It is true that all pundits who make normative arguments about complicated social things have implicit models of the world, and also make implicit predictions about the future. But this is really secondary to the purpose of punditry to begin with. Pundits do not see things in terms of probability — Bayesian or Frequentist. The basic column has the following format: “X is the present state of the world, Y is wrong/right in it, Z should be done/not done.” X is the area most amenable to Silver-like data analysis, but as we move from X down to Z the idea of using scientific arguments to address it becomes more and more problematic. The relationship between science and religion, for example, is still not something that we have gotten a good handle on despite centuries of debate. Moreover, in most public policy issues data will bound the range of acceptable policy options but not necessarily do much more than that.
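Smith’s “derp” has a concrete Bayesian reading, and a toy calculation makes it vivid. What follows is a minimal sketch of my own, with made-up numbers (not Smith’s or Silver’s actual math), using conjugate Beta-Binomial updating:

```python
# Beta-Binomial updating: a belief about the probability of some binary
# outcome, expressed as a Beta(a, b) distribution, revised after seeing data.
def posterior_mean(prior_a, prior_b, successes, failures):
    """Posterior mean of a Beta(prior_a, prior_b) belief after the data."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Two hypothetical pundits both start out believing the probability is 0.9,
# but with very different conviction. Events then go badly against them:
# 2 successes, 8 failures.
open_minded = posterior_mean(9, 1, successes=2, failures=8)
derp        = posterior_mean(900, 100, successes=2, failures=8)

print(round(open_minded, 3))  # 0.55  -- moves sharply toward the data
print(round(derp, 3))         # 0.893 -- barely budges
```

The second pundit’s posterior scarcely moves no matter what happens, which is exactly the belief state Smith labels “derp” and the behavior Silver criticizes in pundits.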

Wieseltier’s argument, on the other hand, is a farrago of nonsense. Whereas Silver’s argument is problematic simply because it fails to grapple with some complexities of science and opinion, Wieseltier seems more interested in rhetoric than anything else:

He dignifies only facts. He honors only investigative journalism, explanatory journalism, and data journalism. He does not take a side, except the side of no side. He does not recognize the calling of, or grasp the need for, public reason; or rather, he cannot conceive of public reason except as an exercise in statistical analysis and data visualization. He is the hedgehog who knows only one big thing. And his thing may not be as big as he thinks it is. Since an open society stands or falls on the quality of its citizens’ opinions, the refinement of their opinions, and more generally of the process of opinion-formation, is a primary activity of its intellectuals and its journalists. In such an enterprise, the insistence upon a solid evidentiary foundation for judgments—the combating of ignorance, which is another spectacular influence of the new technology—is obviously important. Just as obviously, this evidentiary foundation may include quantitative measurements; but only if such measurements are appropriate to the particular subject about which a particular judgment is being made. The assumption that it is appropriate to all subjects and all judgments—this auctoritas ex numero—is not at all obvious. Many of the issues that we debate are not issues of fact but issues of value. There is no numerical answer to the question of whether men should be allowed to marry men, and the question of whether the government should help the weak, and the question of whether we should intervene against genocide. And so the intimidation by quantification practiced by Silver and the other data mullahs must be resisted. Up with the facts! Down with the cult of facts!

First, the question is posed wrongly as a matter of measurement and fact. The specific criticism of punditry that Silver makes is that pundits do not revise their beliefs after events cast doubt on the ability of a belief to predict future events. Say that John Mearsheimer, in making a normative policy argument for realist policies, argues that the international system has certain rules and that those rules will lead to certain outcomes. It is fair for Philip Schrodt to highlight the failure of the system to behave in the way Mearsheimer says it will, and to argue that this should have implications for whether we rely on his theory. Silver’s error is in the assumption that beliefs are predictions, as opposed to the sensible observation that strong beliefs will usually have predictive implications. Certainly numbers cannot decide the issue of whether men should marry men, but if arguments against same-sex marriage warn that more liberal attitudes toward homosexuality will lead to the decline of marriage, it is fair for Silver to try to see whether this belief accounts for the variation in marriage and divorce. It is precisely the fact that internally consistent beliefs can be tautological, as Friedman observes, that makes prediction useful.

Second, nowhere does Silver say that data ought to decide normative issues. The strongest statement Silver makes about this in his manifesto is, ironically, counter to the image TNR casts of him as a quant expressing a view from nowhere: Silver argues that scientific objectivity is distinct from journalistic objectivity in that it should make statements about whether certain arguments can be factually sustained. This is not necessarily an argument that empiricism should be the final arbiter, but that it ought to make a statement about what truths can be discerned from investigation into the rightness and wrongness of an argument. And it is not too different from the notion of journalistic objectivity, as Silver argues: a good journalist doesn’t represent all of the sides of an issue; they give the reader information as to which ones are problematic. I am not sure, again, how he can square the circle between two notions — it is one thing to scientifically evaluate competing hypotheses, another to scientifically evaluate competing normative beliefs that do not really take the form of hypothesis or theory (even if they may have implicit hypotheses and theories embedded).

Wieseltier gives away his real problem with Silver when he notes this:

The intellectual predispositions that Silver ridicules as “priors” are nothing more than beliefs. What is so sinister about beliefs? He should be a little more wary of scorning them, even in degraded form: without beliefs we are nothing but data, himself included, and we deserve to be considered not only from the standpoint of our manipulability. I am sorry that he finds George Will and Paul Krugman repetitious, but should they revise their beliefs so as not to bore him? Repetition is one of the essential instruments of persuasion, and persuasion is one of the essential activities of a democracy. I do not expect Silver to relinquish his positivism—a prior if ever there was one—because I find it tedious.

It would be one thing if punditry consisted of abstract deduction. But it does not. Punditry is about persuasion. Pundits do not make logical arguments from first principles or write mathematical proofs. Nor do pundits utilize the techniques of logic found in mathematics and philosophy, write sound mathematical definitions, or build their arguments off logical deductions in the way that mathematicians must work off previously proved results. Instead, Wieseltier is making a strong argument that “persuasion is one of the essential activities of a democracy.” Hence Will and Krugman should be free to repeat their beliefs for dramatic effect, in the hope of persuading others that they are right. This contradicts Wieseltier’s earlier arguments about reason, logic, and deduction. If Wieseltier wants to mount a reason-based defense of the humanities, which I do find persuasive, he cannot have it both ways. Public reason and persuasion are not the same thing — taken to one extreme, persuasion becomes sophistry.

Sophistry, however, is what Wieseltier has been selling for a very long time in arguing for his policy positions — particularly on the Iraq War. Wieseltier’s columns at TNR present no deductively rigorous argument on the question of intervention and America’s place in the world. Instead they are extended fits of moral posturing, in which he constantly exhorts the reader to a titanic struggle against evil. Instead of logical and rigorous arguments about whether a particular stance on Ukraine follows from a particular train of logic, Wieseltier’s world is an emotionally charged trip into glory, courage, and justice — where every struggle is always Munich, and every politician an inferior shadow of a Churchillian figure exhorting the populace into total mobilization. Wieseltier, in other words, is engaging in a particularly sophistic form of persuasion that aims to convince us, through rhetoric and repetition, that we ought to embrace a position of total mobilization. Indeed, Matt Yglesias (who has an undergraduate degree in philosophy) got it right when he flagged a somewhat muddled take on Kant by Wieseltier — Wieseltier is TNR’s book reviews editor. He is a literary scholar, not a philosopher. I certainly know I have not lived up to the standards I am holding Wieseltier to in my own writings, but I have at least become acutely aware that there is something wrong with the kind of argumentative style I sometimes fall into. Wieseltier, however, conflates public reason with emotive rhetoric.

I must admit that I have my own doubts about Silver’s new enterprise. And like a Bayesian, I have a prior belief that I will adjust as the “data” comes in. I do not feel entirely comfortable with the arguments he makes, and I am also skeptical that data without mechanisms or heuristic understanding will really deliver the insights the site promises. That being said, Silver strikes me as a very smart person who has thought very deeply about the problems with modern journalism. I at least feel somewhat confident that he will be an evolutionary improvement over the existing model. Wieseltier, however, is the very symbol of the kind of pundit that makes even the most hyperbolic Silver critiques seem understandable. I will take data enthusiasm over Wieseltier’s “persuasion” any day of the week. Nor do I think that Silver will crowd out “public reason.” Indeed, the popularity of Nassim Nicholas Taleb — a quant turned philosopher — seems to indicate otherwise. Someone like Taleb, who grounds arguments in the style of a mathematician or philosopher rather than a statistician (and unlike Wieseltier has a body of technical work that can be philosophically evaluated), will be the first to check a Silver-like data journalist who overreaches. We need both empiricists and rigorous deductive analysts, and ideally combinations of both.

Only Threes and Layups

By Kindred Winecoff

I love this. I’ve thought for a while that most teams should shoot way more 3s and never attempt a 2-point shot from beyond 6-8 feet. The math is really straightforward: a 23′ shot is worth 50% more than a 22′ shot, while the odds of success are basically equivalent. The only shot with a success rate above 50% is a layup/dunk, so why attempt low-percentage 2-pointers?
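The expected-value arithmetic is easy to check in a few lines of Python. The shooting percentages below are my own rough, illustrative assumptions, not measured data:

```python
# Expected points per attempt = point value x probability of success.
# Percentages are illustrative assumptions, not real shooting data.
shots = {
    "three (23 ft)": (3, 0.35),
    "long two (22 ft)": (2, 0.38),
    "layup/dunk": (2, 0.60),
}

for name, (points, prob) in shots.items():
    print(f"{name}: {points * prob:.2f} expected points per attempt")
```

Even a mediocre 35% three (1.05 points per attempt) beats a 38% long two (0.76); under these assumptions only the layup does better.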

I despise Coach K, but he has had arguably the most successful college coaching career by doing two things: putting a decent-to-good 3-point shooter at all four corners on offense and getting his guys to take charges on defense (flopping if necessary). It’s a simple formula. You don’t gotta have LeBron to make that a winning strategy… just Ryan Kelly. Of course if you do have LeBron, or even Austin Cook, spacing the floor gives him opportunities to drive and get layups. Meanwhile, having players crash the boards from the perimeter opens up offensive rebounding lanes for easy putbacks, so some of those missed 3 attempts will lead to layups or dunks.

You don’t need to run some crazy offensive or defensive scheme. Just get guys who can hit 1/3 of their shots from distance and/or drive. If more teams actually started doing this, the rules of the game might need to be changed again.

NBA teams are finally starting to get smart. It’s taken awhile, but it’s happening.

A Fistful of Computational Philosophy

It’s been a very interesting time for discussions about modeling and the philosophy of science.

First, my GMU colleague David Masad has a very intriguing post on computational social science (CSS), machine learning, and models.

Just as a data science approach may be insufficient on its own for finding the qualitative and emergent characteristics of a system, agent-based models may benefit from more engagement with data. One common criticism of ABMs is that they lack rigorous foundations. While I think that this is often unfair (particularly when the foundations are rigorous qualitative theory), it is the case that ABMs are often compared with real data only once they are built, either for validation or calibration. As far as I know, using machine learning to fit agent behavior (as I do here) is still uncommon. Ultimately, I think computational social science will need to combine both approaches. Going forward, I’m hoping to extend the type of work I’ve shown here, using data science techniques to understand agent-level behavior and combining it with qualitative theory to situate that behavior within a larger interactive system.

David was responding to pieces by Duncan Watts and Sean J. Taylor that looked at CSS from the perspective of knowledge discovery and automated content extraction. In contrast, David and I go to a program that is more focused on causal mechanisms and models that use qualitative theory as their lodestar. David (rightly) argues that CSS-ers shouldn’t have to choose — the laboratory quality of agent-based models can be combined with data science techniques to make more realistic and useful models. This is an approach already taken by the “cultural algorithms” method.
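A toy sketch of what combining the two approaches might look like: fit agent-level behavior from (here, synthetic) micro-data, then plug the fitted rule into a simple interactive agent-based model. Everything in the snippet — the hidden activation rule, the ring topology, the frequency-count fit standing in for heavier machine learning — is my own illustrative assumption, not David’s actual code:

```python
import random
from collections import defaultdict

random.seed(0)

# Step 1: "observed" micro-data of (active-neighbor count, did agent activate).
# In real work this would be empirical behavioral data; here it is synthetic,
# generated by a hidden rule we pretend not to know.
def hidden_rule(k):
    return min(0.1 + 0.2 * k, 0.9)

data = [(k, random.random() < hidden_rule(k))
        for k in (random.randint(0, 4) for _ in range(5000))]

# Step 2: fit agent behavior from the data -- a frequency estimate per
# neighbor count, standing in for a fancier machine-learning model.
counts = defaultdict(lambda: [0, 0])
for k, acted in data:
    counts[k][0] += acted
    counts[k][1] += 1
fitted = {k: active / total for k, (active, total) in counts.items()}

# Step 3: situate the fitted behavior in an interactive system --
# 100 agents on a ring, each reacting to its four nearest neighbors.
N = 100
state = [random.random() < 0.1 for _ in range(N)]
for _ in range(20):
    neighbors = [sum(state[(i + d) % N] for d in (-2, -1, 1, 2)) for i in range(N)]
    state = [random.random() < fitted[neighbors[i]] for i in range(N)]

print("final fraction active:", sum(state) / N)
```

The point of the sketch is the division of labor: the data-science step supplies the micro-level behavior, and the ABM supplies the interaction structure that lets emergent, aggregate outcomes appear.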

Elsewhere, fellow GMU’er Russell Thomas has been debating Cliodynamics theorist Peter Turchin about agent-based modeling of human social evolutionary change. Russell’s argument centers on the need for robustness checks, counterfactuals, and sensitivity analysis concerning models:

Validation and verification are also crucial for simulations since they are situated in a broader ontological and epistemological context. The two diagrams below show some of this context. The first diagram comes from a conference paper called “On the meaning of data” and it focuses only on the bare bones of empirical research, which has some similarity to simulation-based research. It’s simplistic, of course, but it gets across the main point: many factors besides the “model” and the “data” are involved in shaping the final results, especially the crucial role of framing and interpretation. … To say that a simulated model accurately predicts the explanandum, as Dr. Turchin has done, only covers the three boxes and relations on the far left [referring to a diagram] — (from bottom to top) “Simulation Model”, “Simulation Model Data/Results”, and “System Data/Results”. It leaves out all the other elements and relations, which you can see are highly relevant to validation and verification. The paper by Sargent goes into these issues in detail.

What do both have in common? Masad and Thomas are both grappling with several dimensions of the “curse of computing.” In the linked post, Artem Kaznatcheev looks at the problem of computer simulations, using automated theorem-proving in mathematics as an example:

For me, the issue is not general surveyability, but internalization. No mathematician fully understands the computational part of the proof, at least no more than a pointy-haired boss understands the task his engineers completed. Although some AI enthusiasts might argue that the computer understands its part of the proof, most mathematicians (or people in general) would be reluctant to admit computers as full participants in the “social, informal, intuitive, organic, human process” (De Millo, Lipton, & Perlis, 1979; pg. 269) of mathematics. For De Millo, Lipton, & Perlis (1979), PYTHIAGORA’s verification or the computer-assisted part of a proof is simply meaningless; it does not contribute to the community’s understanding of mathematics. This is easiest to see in the odd Goldbach conjecture: what understanding do we gain from the $10^{30}$ odd numbers that Helfgott’s computer program checked? It teaches us no new mathematics, no new methods, and brings no extra understanding beyond a verification.

In an alternative world without computer proofs, this verification would be absent. On the one hand, this means that alternative Helfgott would only tighten but not resolve the conjecture. On the other hand, the problem would remain open and continue to keep researchers motivated to find completely analytic ways to resolve it. Of course, even in our real world, a few mathematicians will continue looking for a non-computer assisted proof of the weak Goldbach conjecture, just as they continue to do with the four color theorem. However, the social motivation will be lower and progress slower. This is the curse of computing: giving up understanding for an easy verification.

This is part of a theme that the EGT blog crew has explored in the past — the fact that computers (whether through data science or simulations based on qualitative theory) can help us verify without understanding. This is particularly pernicious when we are dealing with systems with many moving parts, systems that are difficult to understand or derive causality from. Kaznatcheev argues that we should foreground constructive analytical representations before we begin putting them into computers, first gaining purchase on the objects we are trying to theorize about.
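The Goldbach case is easy to reenact in miniature. The sketch below brute-checks the weak Goldbach conjecture — every odd number greater than 5 is a sum of three primes — over a small range. The verification succeeds, but, as Kaznatcheev argues, it produces no understanding of why the conjecture holds:

```python
# Verification without understanding: brute-check the weak Goldbach
# conjecture (every odd number > 5 is a sum of three primes) over a
# small range. The check convinces; it teaches no new mathematics.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def is_sum_of_three_primes(n, primes, prime_set):
    for p in primes:
        if p > n:
            break
        for q in primes:
            if p + q > n:
                break
            if (n - p - q) in prime_set:
                return True
    return False

primes = primes_up_to(1000)
prime_set = set(primes)
checked = list(range(7, 1000, 2))
assert all(is_sum_of_three_primes(n, primes, prime_set) for n in checked)
print(f"weak Goldbach verified for all {len(checked)} odd numbers in [7, 999]")
```

Scale the range up by thirty orders of magnitude and you have (a tiny cartoon of) the computation in Helfgott’s proof: exhaustive confirmation, zero new insight.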

Now that I’ve completed this mini lit review, I’ll give you my take on this difficult problem.

My own view is that, for a discipline that uses “computational” in its title, we seem to be very uninterested in the idea of computation itself and what it means for our research. And I say this both from the perspective of the formal ideas of computation as well as how we use computer programs and technology for our models. Computers are, to us, an instrument that helps us do our research — whether we are discovering patterns about social life with the data-centric social science Watts and Taylor talk about or the modeling that Masad and Thomas engage in. I’m going to focus more on the latter, since it is something I know more about than data science per se.

The Santa Fe-inspired school of CSS uses computer code and programs as a representational language for models of social process. Object-oriented programming, for example, is used because it is thought to be isomorphic with Herbert Simon’s idea of hierarchical complexity. Simon wrote of a “sciences of the artificial” rooted in humanity’s tendency to produce synthetic objects with both inner and outer environments that mimic the adaptation and design of organic life forms. In a classic essay towards the end of the book, Simon wrote about the idea of an epistemology that described a number of real-world systems — a nested and ranked ordering of interacting objects that could be treated as near black boxes. These objects, Simon argued, interacted together to become more than the sum of their parts. Modeling in general is about producing a simplified “map” of some real-world referent system, not the system itself. Few modelers believe that their models *are* the territory. Hence CSS as Masad, Russell, and I know it is about building hierarchically complex systems composed of these near black boxes as computer programs and using those programs as a representational language.
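A minimal sketch of what that representational claim looks like in practice: nested “near black boxes,” each level hiding its inner environment and exposing a single interface to the level above. The classes and numbers here are my own toy illustration, not anyone’s actual model:

```python
import random

class Agent:
    """Bottom level: an inner environment (mood) hidden from everything above."""
    def __init__(self, rng):
        self.mood = rng.random()
    def act(self):
        return 1 if self.mood > 0.5 else 0

class Institution:
    """Middle level: aggregates its members, exposing only a summary output."""
    def __init__(self, rng, size=10):
        self.members = [Agent(rng) for _ in range(size)]
    def output(self):
        return sum(member.act() for member in self.members)

class Society:
    """Top level: the only interface the 'observer' ever queries."""
    def __init__(self, rng, n_institutions=5):
        self.institutions = [Institution(rng) for _ in range(n_institutions)]
    def measure(self):
        return sum(inst.output() for inst in self.institutions)

rng = random.Random(42)
print("aggregate measure:", Society(rng).measure())
```

Each class can be elaborated or swapped out without touching the levels above it — the near-decomposability that, on Simon’s account, makes hierarchical systems tractable to study.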

The problem, though, is that it is difficult to understand the distorting effect of the representational language. Phil Arena once tweeted, in response to a story about plants that supposedly perform mathematical calculations, that while math is a useful way of representing things, it’s bonkers to say that plants are literally doing math. Unfortunately this isn’t really a new problem. The history of Simon’s “sciences of the artificial” is one long and sometimes creepy story of humans imputing intentionality and anthropomorphic qualities to non-human entities… and of humans imputing mechanistic and computational qualities from non-human entities or symbolic systems to humans. In particular, we’ve always been fascinated with automata, from antiquarian curiosities to modern science fiction’s HAL and WOPR.

A large part of computational social science revolves around artificial agents that we instrument for the purpose of science. The idea of “generative social science” is about constructing societies of computational agents that simulate some real-world thing of interest. In essence, we’ve taken the 18th-century chess-playing automatons and their cousins and slaved them to act out our ideas in the hope we’ll learn something about real human beings and the social aggregates they create. Simon’s extended analogy between naturally produced objects and synthetic ones — both governed by “inner” and “outer” environments — is fun but also problematic.

There is a reasonable question embedded here about what this really tells us about the real world, particularly since the goal of computation (and artificial intelligence in particular) has always been to migrate to machines the aspects of cognition least representative of human behavior. As J.C.R. Licklider wrote in the early 60s, computers are meant to tackle what is most difficult and frustrating for us so we can free ourselves up for creative thought and problem formulation. And computers struggle to capture the aspect of human cognition that we barely think about — the “frame problem”:

To most AI researchers, the frame problem is the challenge of representing the effects of action in logic without having to represent explicitly a large number of intuitively obvious non-effects. But to many philosophers, the AI researchers’ frame problem is suggestive of wider epistemological issues. Is it possible, in principle, to limit the scope of the reasoning required to derive the consequences of an action? And, more generally, how do we account for our apparent ability to make decisions on the basis only of what is relevant to an ongoing situation without having explicitly to consider all that is not relevant?

There are three responses to this, all of which have pros and cons.

First, we can double down and argue that programs and code are a suitable language for modeling (in a stylized manner) what individuals, institutions, and societies engage in every day. Our petri-dish agents are enough like the real thing that we can use them. This is a persuasive argument, but the problem is that we’re still limited in our theory development to what we can represent with machines. We have to account for the frame problem and the curse of computing. Granted, that isn’t exactly a bad thing — the cognitive science community has gotten along quite fine with simulation engines like SOAR and ACT-R that represent cognition in a way that fits the demands of computer programming and computation. But we have to always keep this in the back of our heads.

The second perspective, which I’ve toyed with, is the idea of accepting that simulated agents, no matter how cognitively realistic or data-primed we can make them, are never going to tell us more than what we can do with computer algorithms… and that our agents are simply more sophisticated versions of the 18th-19th century automata that once served as crowd attractions. This would put a premium on the idea that the theoretical elements of computation itself — not necessarily what we can represent with the models — are the real prize. Like the Platonist view of mathematics as something that exists independently of human agreement, we could say that computation itself is a neutral and objective language for deductively examining formal properties of society. This is something that Artem Kaznatcheev has done quite a bit with his idea of evolution and scientific progress as machine learning. Computation, like formal proofs in game theory, can deduce qualities about society that stand on the basis of mathematical logic. To quote Kaznatcheev again:

For over twenty-three hundred years, at least since the publication of Euclid’s Elements, the conjecture and proof of new theorems has been the sine qua non of mathematics. The method of proof is at “the heart of mathematics, the royal road to creating analytical tools and catalyzing growth” (Rav, 1999; pg 6). Proofs are not mere justifications for theorems; they are the foundations and vessels of mathematical knowledge. Contrary to popular conception, proofs are used for more than supporting new results. Proofs are the results, they are the carriers of mathematical methods, technical tricks, and cross-disciplinary connections.

Of course, at its most basic level, a proof convinces us of the validity of a given theorem. The dramatic decisiveness of proofs with respect to theorems is one of the key characteristics that set mathematics apart from other disciplines. A mathematical proof is unique in its ability to reveal invalid conclusions as faulty even to the author of that conclusion. Contrast this with Max Planck’s conception of progress in science:

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

Further, unlike in science, a mathematical conclusion is shown to be faulty by the proofs and derivations of other mathematicians, not by external observations.

I’m personally sympathetic to this idea — with one important caveat. Social aggregates are emergent, probabilistic, and interactive. Mathematics as a language has some important limitations in the way it can represent those qualities, particularly the difficulty of creating scientific tools that can be used as environments for tinkering and creative thought. Science is often cast as a process of either hypothesis-testing or deduction from first principles. Science in practice, however, is often messy, creative, and improvisational — and computing, though sometimes a “curse,” has the potential to serve as an aid to science. Finally, how research is represented also matters. Modelers and theorists are connectors and communicators, and a model that produces an appealing and interactive narrative can serve as a means of connection. This is something that the data science community understands rather intuitively, even if it can unfortunately produce “the one map that explains everything about ___” hackery. Hence no matter what we do, computers and simulation ought to be the instruments of our science.

The third response is to simply say “so what?” and argue that all of the philosophical desiderata I’ve just elucidated are beside the point. Can the model predict? Does it fit with real-world data? Is it robust? I would admit that it’s hard to argue against this — at the end of the day, that is what journal reviewers and grant-givers care about. But — perhaps due to my pre-CSS background in mostly qualitative research and theory — I think that the logic of representation and the philosophical assumptions we make about the language of science can’t easily be dismissed. Ultimately, though we are in the business of accounting for variation, issues of understanding and explanation sit at the core of what we do. If models are maps, accuracy isn’t necessarily the sole criterion — a poorly designed map will also mislead those who use it.

I agree with Masad that models must engage data more, with Kaznatcheev that formal properties and analytical correctness are crucial, and with Thomas that a model does not speak for itself. But I’d also submit that how we deal with the subject of computation — from both a technological perspective (the “curse of computing”) and a theoretical one (how we engage with the formal logic of computation) — poses the defining questions of the discipline.

Competition, Not Convergence

I have a lot of thoughts on the Nicholas Kristof piece and the stir it has caused in the poli-sci blogosphere, but I’ve expended way too much time writing on the academic-policy divide and relevance in other venues to do more than a quick hit here.

First, an excerpt from a great blog by Tom Pepinsky:

Let me propose that disengagement by academics is not the problem. Rather, standing in the way of greater public engagement is that public intellectuals like Kristof, and policymakers in positions of power, are not interested in the sort of knowledge that real social science produces. They don’t want careful and considered, they want sharp and snappy. Superficial and ill-considered “analysis” in the form of 800 word nuggets is just not what the academic disciplines are designed to produce. That’s a good thing. We should not want to produce “TED talk” style research, even if Kristof finds it interesting.

What Pepinsky fundamentally gets is this: journalists/policy-oriented writers like Kristof and political scientists are in competition. There are a finite number of eyeballs, pageviews, think-tank panels, TV interviews, TED talks, and policymakers to be had for each broadly poli-sci-related subject or application area. And there are two groups of people that claim intellectual knowledge and expert status in those areas. Yes, Kristof and political scientists aren’t competing for exactly the same audience, but there is significant overlap — enough that Kristof himself has been on the receiving end of withering attacks by academics in his own subject area.

I’m not telepathic, so I have no idea what prompted Kristof to write his column. But color me unsurprised that a journalist with no training in statistics, formal methods, or research design, and no obligation to undergo peer review, regards such things as off-putting, irrelevant, and perhaps even threatening. Color me unsurprised as well that Kristof’s idea of what academia should be like is… a souped-up version of journalism. Kristof sells narratives. And if you think this is all about math, you’re wrong. Public intellectuals like Kristof have more or less equal disregard for history, area studies, and ethnography when they contradict a narrative. I’ve certainly lived the latter reality watching natsec debates in Washington proceed with more or less total indifference to academic military and intelligence history and qualitative research in strategic theory. No one in Kristof’s lane cared about Clausewitz or offerings from the Journal of Military History during the counterinsurgency years, so why would they care about econometrics or game theory?

As someone who straddles a difficult line between the policy world, increasingly the tech world, and different corners of academic arcana, I’ve always been sympathetic to the idea that academics should be engaged with the world beyond their journals and conferences. But let’s face it: academics, policy analysts, and journalists who produce knowledge on political subjects are all competing for attention. There’s an overlap in the Venn diagram of their respective audiences, and that overlap controls two critical variables: social status/recognition and money. And if there is one thing that academics, policy analysts, and journalists all have in common, it’s that no one produces knowledge solely for the sake of it — nothing happens without money, and knowledge production without appreciative eyeballs is like coffee without caffeine.

So this is really a reason to be skeptical of calls for pluralism in political science voiced by people like Kristof. What Kristof is saying is actually anti-pluralistic. It’s not so much a call for pluralism as a plea for political scientists to be more like NYT op-ed writers. That’s not an excuse to wallow in jargon or to take an overly scholastic (as opposed to engaged) view of the discipline. But it is to say that Kristof is your competition, and always will be your competition. He won’t likely be happy if you make your work more accessible — Politico didn’t exactly welcome the popular and well-presented work of Nate Silver when it conflicted with their model of reporting.

So don’t be more like him — instead beat him, steal the more important and influential slices of his readership, and force him to work harder and be more rigorous. And with the rise of Wonkblog, Nate Silver, Monkey Cage, increased public visibility of social scientists of all methodological stripes (from formal modelers to anthropologists like Sarah Kendzior) as well as the growing public and private sector interest in data-intensive social science, maybe he *should* be worried.