Why Did Japan Attack Pearl Harbor?

This is a guest post by Dave Hackerson. A previous post in this series can be found here.

The International Dateline is truly a fascinating thing. It’s like a magic wand of time that can both give and take, depending on which way you head. Each time my family and I fly back to the Midwest, the space-time continuum is seemingly suspended. Leave Tokyo at 4:00 pm, touch down in the Midwest at 2:00 pm, and then reach our final destination by 5:00 pm of the same day. Over 15 hours of travel that appears to have been compressed within the span of one single hour. I still can’t wrap my head around it at times.

This dateline has a way of slightly altering our perspective of historical events. Most Americans are familiar with the following quote from President Franklin Delano Roosevelt: “December 7th, a date which will live in infamy.” The date to which he refers is the day on which the Combined Fleet of the Japanese Imperial Navy under the command of Admiral Isoroku Yamamoto attacked elements of the US Pacific Fleet at Pearl Harbor. However, this is the narrative from the American side of the International Dateline. The December 24th edition of Shashin Shuhou (Photographic Weekly), a morale-boosting propaganda magazine published in Japan from 1938 until mid-1945, carried the following headline for its graphic two-page artist’s depiction of the attack: “Shattering the dawn: Attack on Pearl Harbor, December 8th”. The Japanese government christened the 8th day of each month as Taisho Hotaibi (literally, “Day to Reverently Accept the Imperial Edict”) to commemorate the great victory over the United States at Pearl Harbor and the Imperial declaration of war on the US and its allies (the day also served to regularly renew the nation’s fervor and commitment to the war effort). Was Pearl Harbor a great victory for the Japanese? The answer depends on the context in which the attack is viewed. As a purely military engagement, it is safe to say it was a resounding success, but did this single engagement succeed in shaping the course of the upcoming conflict? This is the question that the Mainichi Shinbun explored in the third installment of its series “Numbers tell a tale—Looking at the Pacific War through data” (the original, in Japanese, is here). True to the narrative on this side of the Pacific, the article was released on December 8th last year. Just as with the other installments in the series, it presents a slew of data that helps put historical events into context.

“Did the attack on Pearl Harbor truly break the US? Japan’s massive gamble with only a quarter of the US’s national strength.” The title of the article does a nice job of setting up the exhaustive economic analysis it conducts in an attempt to answer this question. The first thing the article does is compare the respective GDPs of the US and Japan in 1939. At the time, Japan’s GDP stood at 201.766 billion dollars. However, this amounted to less than a fourth of the US’s GDP of 930.828 billion dollars (note that figures are not adjusted for inflation). Even the UK had a larger GDP than Japan, at 315.691 billion dollars. Against the combined GDPs of the US and UK, Japan already suffered a disadvantage of more than 6 to 1.

The next set of figures the article introduces relates to industrial capacity. The first thing it examines is iron production, and here the article references the quote by the Prussian leader Otto von Bismarck, who claimed that it was iron that made a nation. Taking Bismarck at his word, Japan’s iron production did not bode well for its position as a nation. In 1940, Japan’s national production of crude steel was 6,856,000 tons per year. In contrast, the US was producing nearly nine times that amount at 60,766,000 tons per year. Likewise, Japan lagged far behind the US in terms of electric power output and automobile ownership. Japan’s electric power output in 1940 stood at 3.47 billion kWh, but this figure was dwarfed by the US’s output of 17.99 billion kWh. The gap in automobile ownership is especially telling. The 1920s are often considered the decade in which America “hit the roads” and became enamored with the automobile, a fact backed up by the figures for automobiles owned by Americans in 1940. By that year, there were already 32,453,000 automobiles on roads in the US. Japan didn’t even come close, with only 152,000 automobiles scattered across the country.

In addition to lacking the physical resources and infrastructure to sustain a prolonged war of attrition, the makeup of Japan’s economy posed a number of difficulties. Here the article emphasizes a major difference between Japan and the other first-world nations of that time: Japan was not a “heavily industrialized nation”. This fact was clearly reflected in the country’s exports. In 1940, finished metal products accounted for only 2.8% of the nation’s exports, while raw silk, textiles, and clothing products made up more than a quarter. Likewise, only 30% of the nation’s income was generated by industry, less than the combined income of the agriculture, retail, and transport sectors. In the 1930s, Japan made every effort to expand its heavy industries. The Truman administration dispatched an investigative committee to Japan after the war to study the effects of America’s strategic bombing on Japan and its economy. The study found that in 1930 the industrial makeup of Japan was 38.2% heavy industry and 61.8% light industry. By 1937 Japan had succeeded in reversing these percentages to 57.8% and 42.2%, but the difficulty the nation had in securing the resources it needed for industry restricted its industrial capacity. The study did not mince words in its assessment of the Japanese economy: “The nation of Japan is truly a small country in every manner of speaking, and ultimately a weak nation with an industrial infrastructure dependent on imported materials and resources, utterly defenseless against every type of modern-day attack. The nation’s economy at its core was rooted in a cycle of daily subsistence, in which people only produced what they needed for that day. This left it with no extra capacity whatsoever, leaving it incapable of dealing with potential emergencies that might arise.”

To compensate for its lack of resources, Japan cast its gaze across the waters to Manchuria. Japan had steadily expanded its interests in Manchuria since its victory in the Russo-Japanese War in 1905, and placed the South Manchuria Railway Company as the primary driver of this massive undertaking. This company was founded in 1906 upon the railway Japan received from Russia after the war, and was a national policy company half-owned by the state. Japan aimed to make Manchuria the focal point of its own economic bloc that also included Korea, Taiwan, and China. While Manchuria was rich in natural resources, it was highly underdeveloped, and Japan ultimately exported far more machinery and infrastructure-building equipment to Manchuria than it imported in resources. While Japan was able to construct some of this machinery and equipment on its own, it was dependent on material and machine-related imports from the US, UK, the Netherlands, and Australia, the very nations against which it would ultimately go to war. In 1930, Japan exported nearly 96% of its raw silk thread to the US, which would send raw cotton back the other way. Japan would then process this cotton into finished cotton products for export to British India and the UK. Using the profits from these exports, Japan would then import strategic resources from the US, UK, and the Netherlands, such as oil, the bauxite used to create the aluminum in aircraft, and the brass needed for the metal casings of bullets. The problematic nature of these trade relationships was pointed out by the Japanese economist Toichi Nawa of Osaka University of Commerce (present-day Osaka City University).
In his book Research on the Japanese Spinning Industry and Raw Cotton Problem, Nawa stated that “any confrontation with the UK and US would be tragic, and must be avoided.” He further elaborated on Japan’s trade issues, saying that “the more Japan rushes along its efforts to expand heavy industry and its military industrial manufacturing capacity so it can bolster its policies on the continent (Manchuria and China), the more dependent it becomes on the international market, creating a cycle that leads to increased imports of raw materials. Herein lies the gravest of concerns for the Japanese economy.”

Nawa’s words proved to be all too prophetic. Japan’s aggressive agenda in China following the Marco Polo Bridge incident in 1937 brought heavy criticism from the global community. As the conflict in China escalated, Western nations retaliated with economic sanctions and restrictions on imports. The most devastating of these was the US’s decision to ban all oil exports to Japan in August of 1941. The US was the world’s largest producer of oil in 1940, accounting for over 60% of the world’s supply. The upper brass of the Imperial Japanese Navy had predicted that they had enough oil stockpiled to wage war for at least two and a half years, but if the UK and US shut off all oil exports, they would have no other choice but to move into Dutch territory and seize the oil fields of the Dutch East Indies within 4 to 5 months in order to augment their supply. The attack on Pearl Harbor occurred exactly four months later.

Did Japan truly have the capacity as a nation to wage a modern war against a nation such as the United States? As tensions rose in US-Japan relations, Japanese government and military officials took a hard look at the data available in an attempt to answer this question.

A joint military and civilian economic study group organized around army paymaster Lt. Colonel Jiro Akimaru was set up in February 1941 to undertake this task. Known as the “Akimaru Agency”, this group was split into four sections to study the total war capacity of Japan, the UK-US, Germany, and the Soviet Union. The report they compiled by the end of September 1941 reached the following conclusions:

1) The conflict between Japan’s military mobilization and the needs of its labor force has become fully evident. Japan has also reached its peak production capacity, and is unable to expand it any further.

2) Germany’s war capacity is now at a critical point.

3) Not a single flaw exists within the US’s war economy.

Even if Japan sacrificed the living standards of its populace to boost its war capacity, it still would not have the financial resources to compete with the US. Hiromi Arisawa, a member of the UK-US section who later served as president of Hosei University, made the following remarks when reflecting back on the report the Akimaru Agency prepared:

“Japan cut national consumption by 50%. In contrast, America only reduced its national consumption by 15 to 20%. Excluding the amount of supplies they shipped to other Allied nations at that time, the savings from this reduced consumption provided them with 35 billion dollars* for real war expenditures. That was 7.5 times greater than what Japan was capable of achieving with its cuts.”

Lt. Colonel Akimaru alluded to this fact when he presented the report at an internal staff conference meeting for the Army. General Sugiyama, Chief of Staff of the Supreme Command, acknowledged that the report was “nearly flawless” in its analysis. After praising Akimaru for the quality of the report, he then issued the following order: “The conclusion of your report goes against national policy. I want you to burn every copy of it immediately.”

Lt. Colonel Hideo Iwakuro, founder of the Nakano School and a military intelligence expert, was dispatched to the Japanese embassy in the US and took part in the planning of unofficial negotiations between the two countries. He returned to Japan in August of 1941 and met with influential figures in the political and business world, trying to persuade them of the futility of war with the US. At the Imperial General Headquarters Government Liaison Conference, Iwakuro presented the following data based on his own personal research to demonstrate the gap between the US and Japan in terms of national strength.

Iwakuro’s conclusion was straight and to the point. “The US has a 10-1 advantage in terms of total war capacity. All the Yamato-damashii (Japanese fighting spirit) we throw at them will not change anything. Japan has no prospects of victory.” Incidentally, the next day War Minister Hideki Tojo (who later became Prime Minister) immediately ordered the transfer of Iwakuro to a unit stationed in Cambodia. Iwakuro made the following remarks to the people who came to see him off at Tokyo Station. “If I should survive this ordeal and ever make it back to Tokyo, the Tokyo Station we see here will most assuredly lie in ruins.” Those words came to fruition in the spring of 1945.

 

Admiral Yamamoto salutes Japanese pilots.

So did the attack on Pearl Harbor truly break the US? The quote made by Admiral Yamamoto at the end of the movie Tora! Tora! Tora! puts it quite succinctly: “All we have done is to awaken a sleeping giant and fill him with a terrible resolve.” Though there is debate about whether he actually uttered those words, Yamamoto was no stranger to the US, having studied at Harvard and spent time as a naval attaché, and he knew full well the awesome industrial might and material resources the nation possessed. Japan played a great hand with its attack on Pearl Harbor, but as Yamamoto knew, the deck was already stacked against it. The only thing that remained to be seen was how long Japan could make its kitty last.

Why am I ignoring Nigeria?

By Seth Studer

I take a little exception to the smarminess of certain media’s response to the Charlie Hebdo murders. Last week, they inform us, we witnessed two horrific massacres: the murder of 12 satirists in Paris and the murder of roughly 2,000 civilians in Baga (that’s in Borno, Nigeria). But, they continue, judging from CNN, Fox, and your Facebook feed, only one of these terrible crimes got any coverage. To ask the question “which one: the 12 Europeans or the 2,000 Africans?” is to answer it. While the loss of 12 innocent lives and an implied assault on Free Speech (which doesn’t really exist per se in France) rallies millions across the Great White West, virtually no one is speaking for what Teju Cole calls “unmournable bodies” (an eloquent phrase, although the critical theorist’s habit of saying body when you mean person upsets his essay’s thesis). Cole’s essay in the New Yorker (linked above) is intelligent and passionately argued, and he handles his argument’s underlying ethos – the aforementioned smarminess – with more grace than others (the latter article incorrectly states that Nigeria is south of the equator, a reminder that the many truths revealed by postcolonial theory – e.g., global North vs. global South – do not always square with geographical reality). But in general, I felt scolded for paying more attention to France than Nigeria.

And I probably deserve a scolding. Did mainstream news outlets focus on France over Nigeria as the consequence of a bias toward white Europeans? Absolutely! Was the attack on Charlie Hebdo more frightening and noteworthy to Western audiences than the massacre in Baga because the former represents an attack on the imagined “center” of Western civilization rather than its “periphery”? You bet!

So should 2,000 murder victims be more “newsworthy” than 12 murder victims? I think it depends on the circumstances. 

Anyone who hasn’t been following Boko Haram over the past several months is an irresponsible consumer of world news. The mass violence last week represents the terrifying apex of an ongoing story. We spent much of 2014 preoccupied with the horrors inflicted upon the Nigerian people by this radical group (even Michelle Obama got involved, which got American conservative media involved, etc., etc.). The Charlie Hebdo massacre, meanwhile, fell out of a clear blue sky. Both discrimination against Muslims and Muslim unrest in France are ongoing, but nothing concrete or obvious precipitated this attack. These murders arrived on our screens demanding a context. Hence, the intense coverage.

And for me, intense coverage of the Charlie Hebdo massacre is essential not merely because it reinforces Western commitments to free speech (commitments that tend to get waylaid when they’re needed most). Coverage is essential because France is an important European nation in the grips of a major rightward political and cultural shift, one that could potentially turn more strident, more xenophobic, and more violent. After a half century on the fringes (and apparent defeat in the face of European unification), Europe’s right-wing parties (as opposed to its right-of-center parties) are, ahem, on the march. In the United States, extreme right-wing rhetoric has benefited from decades in the mainstream: a speaker’s racism or xenophobia can be carefully coded and embedded in speeches about tax policy. In Europe, the far right has been far wilder and wilier. They’ve retained their ugliness and wear it explicitly on the surface. (Whenever one of my liberal friends unfavorably compares America’s conservative politics with Europe’s socialist policies, I remind them, “Yes, you like their left wing, but you don’t want their right wing.”) Meanwhile, since the 2007/08 global banking crisis, nationalism in Europe – both right-wing and left-wing – has resurged to levels not seen in decades. Because of their knotted political and economic ties to Germany (or Russia), the peoples of Europe are seeking social and cultural distinction. Secession movements have gained renewed traction in the geographical and political expanse between Scotland and Crimea. Consequently, Germans and Russians are also asserting their national character in ways that, twenty years ago, would have seemed taboo.

This, for me, is the context of the Charlie Hebdo attack, far removed from the bloodshed in Nigeria (admittedly, all things connect in our post-post-colonial world, as African expats like Cole convincingly demonstrate). Note that the above paragraph doesn’t include the word “Islam.” I don’t think you need to dwell much on radical Islam to understand the socio-cultural dynamic that drives millions of French residents into the streets. From a French perspective, however, immigration from the Muslim world underscores every aspect of the current national identity crisis. Thus, when an event like the attack on Charlie Hebdo occurs, you get 3.7 million people in the streets and attacks on Muslims.

This, to me, is a very big story indeed.

Two thousand people died in Nigeria last week, it’s true, but 3.7 million people marched throughout France yesterday – roughly one million in Paris alone. What do those one million want? What do they represent? Many of them are doubtless sympathetic with France’s Muslim minorities. Few among them are likely to be extreme French nationalists (though more of them are sympathetic with French nationalism than Western liberals would like to imagine). Whatever their motives, this represents a good moment to take France’s cultural temperature. The context demands it. Your first response to Charlie Hebdo should be an unequivocal condemnation of the murders and support for free speech. But your second response, given the atmosphere in Europe, should be concern for liberalism in France. Because, contrary to what the news coverage is telling you, continental Europe is not historically an easy or natural home to liberal values. And because a march can be a mob by another name.

Nixon/Vietnam/1968: What We Know and When We Know It

By Seth Studer

The story broke in August. I was working on an academic article about Richard Nixon in post-Watergate American culture (forthcoming). When not working on the article, I actively avoided Nixon-related stories in the news media and on the Internet: not an easy feat in August 2014, the fortieth anniversary of Nixon’s resignation. Then again, it was a little easier than I would have liked. The stories, like coverage of the JFK assassination’s fiftieth anniversary the year before, were pretty scarce and uneven. Inevitable, I suppose. In two years, Hillary Clinton (nominee presumptive) will cast a ballot for herself as president of the United States. What a dim memory her husband’s impeachment seems already. Ex-senators Trent Lott and Tom Daschle recently visited South Dakota State University, where I teach, to wax poetic about the post-ideological, post-historical nineties: a time when, to hear them tell it, the two great American parties apparently worked together in constant harmony, tossing aside profligate “political differences” for the good of God and country. One recalls the many obituaries of Ronald Reagan that read as if Tip O’Neill and the Gipper played “government” each day before retiring together to a pub in the evening to share pints and sing Irish drinking songs. Our national history is like an elementary school recess period; we will not allow anything vaguely resembling a row on the playground.

And so intelligent Americans consistently misremember, or are compelled to misremember, events of the nineties and eighties: events that they witnessed firsthand. How much dimmer Watergate must seem to those who lived through it. My mother watched the hearings as a junior high student while earning babysitting money; tough times, stagflation. I recently quizzed her on the names “John Dean” and “Sam Ervin,” which elicited vague recognition but no concrete memory of the basic contours of the only scandal to prompt an American president to resign. The players themselves did little better. During the first week of August 2014, Bob Woodward and Carl Bernstein appeared on NPR, the CBC, and the BBC, describing the events of 1973-1974 – and their role in those events – less convincingly and with less interesting conclusions than they had at the time. PBS News Hour managed to assemble a nice roundtable featuring Timothy Naftali (Nixon aficionado and NYU professor), Beverly Gage (the best historian of post-1945 America working today, currently at Yale), Pat Buchanan (Pat Buchanan), and Luke Nichter, a Nixon scholar and professor at Texas A&M – Central Texas whose July 29, 2014 book The Nixon Tapes provides one of the most thorough single-volume accounts of the 37th president’s self-recordings. The main topic: who remembers Watergate? What was the big deal? Nixon wasn’t such a bad guy after all, right?

Journalist Ken Hughes’s Chasing Shadows: The Nixon Tapes, the Chennault Affair, and the Origins of Watergate was published the same day as Nichter’s volume. A few weeks later, after journalists and Nixon buffs had ruminated on and digested Hughes’s and Nichter’s work, stories began to pop up across the Internet highlighting new revelations from the transcripts. In particular, a revelation from Hughes provoked strong reactions. Salon ran a dramatic excerpt from Chasing Shadows, featuring the scene that prompted George Will to accuse Nixon of treason. These stories did not receive much fanfare, but the headlines were sensational. New Watergate bombshell! The scandal behind the scandal! Nixon guilty of treason! Even George Will admits it! For most intelligent Americans, the story had all the impact of a new Bee Gees single. But for those who, like me, think and talk and read and write about Richard Milhous Nixon with an almost neurological compulsion, something exciting had happened.

Here’s the short version. One of Hughes’s transcripts (July 15, 1971) features Nixon explicitly ordering a break-in at the Brookings Institution. This is notable for one major reason and a couple of minor ones. Major: it’s the only time on the tapes we hear Nixon directly order a break-in. Minor: breaking into Brookings is a pretty big deal, and explicitly ordering such a break-in directly is a pretty big deal, especially for Nixon, a master of suggestion and the subtle cue. But there were few obvious reasons for Nixon to give such an order in 1971; the Watergate break-in seems rational by comparison (Democratic National Committee, election year, etc.). One is forced to assume that Brookings had something that Nixon wanted very badly, although Nixon does not quite say.

Hughes argues that this transcript represents the genesis of the plumbers/Watergate/resignation. We are unlikely to find another such explicit order to break into an august enemy think tank; this seems to represent the moment when things went a little crazy for Nixon. Most coverage of Hughes’s book focused on this sensational thesis, that the Brookings affair wrought Watergate.

The evidence against the thesis is very strong: the origins of the Nixon administration’s culture of surveillance are far too numerous and diffuse to reduce to a single event. But even for Hughes (who at times seems to use the word “Watergate” simply to attract a general audience – and why not?), the fact of Nixon ordering a break-in is less compelling than the fact of Nixon ordering this break-in. The ostensible purpose was to blackmail former president Lyndon B. Johnson with documents, located at the Brookings Institution, that revealed Johnson’s plan to broker a surprise Vietnam peace settlement by October 1968. Peace in Vietnam would have significantly boosted Vice President Hubert Humphrey’s chances of succeeding Johnson. Proof of such chicanery would have given Nixon political leverage over Johnson. But, as Hughes writes, Nixon’s staff (including Henry Kissinger, who participated in the fun) doubted that Nixon’s burglars would find anything useful at Brookings. And why would Nixon want leverage over an unpopular ex-president?

From Chasing Shadows:

 At that point, Nixon just wanted the former president to hold a press conference denouncing the leak of the Pentagon Papers—not much of a motive to commit a felony. … [And the] potential downside was enormous—impeachment, conviction, prison, disgrace—and the upside was questionable at best. If Nixon were the kind of president to conduct criminal fishing expeditions for dirt on his predecessors, his tapes would be littered with break-in orders. But Brookings is the only one.

There is a rational explanation. Nixon did have reason to believe that the bombing halt file contained politically explosive information—not about his predecessor, but about himself.

The reason, Hughes argues, is that Nixon hoped to obtain documents implicating himself in the failure of the 1968 Paris Peace Talks. Allegations that Nixon had sabotaged the peace process would emerge and grow in the decades after Watergate.

George Will’s review of Chasing Shadows shifted the focus from Hughes’s thesis to his data, new data which, Will argued, implicated Nixon in “treason” (Will’s word choice received more attention than his argument). While other journalists focused on Hughes’s link between the new data and Watergate – his attempt to carve a “Rosebud” out of a few seconds of tape – Will argued that the real story had been missed. Hughes provides very strong, if very indirect, evidence of what we already almost knew about Paris 1968: that, to bolster his chances of becoming president, Nixon sabotaged the 1968 Paris Peace Talks that would have almost certainly ended the Vietnam War by early 1969. All the other “White House horrors” – Watergate, ratfucking, domestic espionage, “the Canuck letter,” even Allende – pale in comparison to this.

Johnson and Nixon

According to both Hughes and Will, Nixon gave an irrational order: break into Brookings and steal documents. Why? To blackmail Johnson? The risk was too great, and they might not even find the documents they wanted. There must be another, more rational reason. According to Hughes, Nixon must have been looking for files implicating himself in sabotage, files that he could obtain by no other means, files that his enemies at Brookings might have possessed: “Ordering the Brookings break-in wasn’t a matter of opportunism or poor presidential impulse control. As far as Nixon knew, it was a matter of survival.” This reasoning (in short, that Nixon would not have behaved irrationally) was strong enough to convince Will to charge a Republican president with treason.

I disagree not with Will’s conclusions, nor even with his reasoning (though depending on Nixon to make rational decisions is frequently a losing game), but with his confidence in this new evidence to make the case for treason.

All responsible historians and Nixon buffs know that Nixon betrayed Johnson and sabotaged the Peace Talks; we also know that Nixon ordered a break-in at the Brookings Institution. The question has always been how well we know. How much data do we possess? How much must we rely on reasonable inference? Who said what, when, where, and why? We already knew that Lyndon B. Johnson probably had direct evidence of Nixon’s involvement – but Johnson’s evidence has never been recovered, and Nixon denied any involvement in the Peace Talks to Johnson’s face. So when a Nixon scholar claims to have evidence of the 1968 sabotage, it’s a big deal. Thanks to Hughes, we have some new data. Nixon ordered the Brookings job. We now know that with 100% certainty. We always suspected, but now we know. But we can still only infer, with great confidence (approaching knowledge), that he sabotaged the 1968 Paris Peace Talks. Such great confidence that I’m willing to say “I know.”

But Hughes hasn’t found the smoking gun that Will and others have made it out to be. Will’s article and Hughes’s book are both padded with backstory and dot-connecting that aren’t derived directly from the tapes or from the public record.

Hughes would argue that the 1968 sabotage was Nixon’s greatest secret, that he built a citadel of surveillance and paranoia around himself in order to protect that secret, and that Watergate must necessarily be understood as an outcome of this secrecy. I agree that Nixon’s sabotage of the Paris Peace Talks was probably his greatest secret – but we have not heard Nixon himself admit that. That’s the nature of secrets and the nature of Nixon. And Nixon is nothing if not resistant to simple causal analyses. One simply cannot imagine a Nixon White House sans paranoia and plumbers, with or without the Peace Talks scandal, just as one cannot imagine Nixon as a consistent ideologue or as a good friend or as a convincingly honest man.

We will probably never get that piece of hard evidence – the fact in a pumpkin patch, the smoking datum – that proves Nixon intentionally sabotaged the 1968 Paris Peace Talks and deliberately extended the Vietnam War until October 1972 for political purposes. We don’t need such concrete evidence, really – the historical evidence against Nixon is about as strong as historical evidence gets. In lieu of a taped confession, we must content ourselves with reasonable inference based on hard data. And Hughes’s transcript is one more very hard datum to add to the pile, shedding a little more light on Nixon’s most heinous crime; undue focus on Watergate and the plumbers distracts from the fact that Nixon committed his most evil act before he was even president. We should be interested in the 1968 Paris Talks not because they led to Watergate and resignation. We should be interested because they represent a devastating lost opportunity to end the wickedest war in American history.

Remind Me: How is Putin Winning?

By Seth Studer


Last week, the Washington Post ran a headline that captures everything wrong about how Russian president Vladimir Putin’s political and military maneuvers in eastern Europe have been covered in the West: “Ukraine ratifies associations with E.U., grants concessions to rebels.” The newly strengthened relationship between Kiev and the E.U. is rightly emphasized, but the small concessions to Russophone rebels in eastern Ukraine are added as an apparently obligatory counterbalance – common throughout what we in America cloyingly call “the mainstream media” – to reinforce the narrative that Russia is somehow on the move. I say “cloyingly” because the sentiment reflects an American Cold War nostalgia that never quite collapsed under the Berlin Wall or the disintegration of the Soviet Union, a nostalgia for three networks, two newspapers, and one Bad Guy. “See! Concessions! This is why Obama is weak and Putin is strong! The West is in retreat and the rebels are getting concessions! Right?! Right?!”

Mr. President, build up that wall!

Here’s a different perspective:

At this time last year – September 24, 2013 – Ukraine’s president was little more than Putin’s stooge, Moscow’s man in Kiev, a corrupt thug who lived in a Eurotrash mansion (“Opulence: I has it”) and kept two bells on his nightstand: one for vodka, the other for prostitutes. Viktor Yanukovych had spent his political life advocating and advancing close ties to Russia. He became president after his predecessor, a reformer who was poisoned and disfigured in what amounts to hilarious retro-Cold War shenanigans gone terribly wrong, failed to win reelection. Ukraine was leaning toward Russia, and through Yanukovych, Putin effectively determined Ukrainian foreign policy. You might say that Putin was co-president of Ukraine.

Flash forward one year: Yanukovych is gone, ousted by his own people. Instead of enjoying considerable power over Ukrainian policy, Putin now owns Crimea (which has only been Ukrainian since 1954), exerts direct influence over some parts of eastern Ukraine (instead of the whole thing), and finances (though denies any ties to) a ragtag bunch of crypto-fascist Russophones who can’t distinguish between a Ukrainian fighter jet and a passenger plane full of innocent Europeans (they can’t even control their Twitter accounts; at least ISIS has decent PR guys).

Meanwhile, Kiev has never been closer to Europe, and its fate has never seemed more intertwined with the EU’s. As a bonus, the Baltic states just got reassurance that NATO benefits will be honored, and Russia is facing several not-insignificant economic sanctions from many of its ostensible allies.

Am I the only one who sees Putin as the net loser here? And Obama? He barely had to do a thing to achieve this outcome.

But…but…Obama has a pink backpack and Putin doesn’t wear a shirt!

Twenty-five years ago, Berlin was the primary political border of Europe, where East and West faced off. Today, the border has shifted eastward…all the way to Kiev. Putin (shirtless) is in a helluva fix, and all Obama (mom jeans) had to do was make a couple phone calls. The West is kind of kicking ass, and we’re not even trying that hard. Because while Russian hardliners project a lot of strength, they tend to exert it by beating dogs, shooting tigers, and undermining themselves.

Building a Better Middlebrow: the Case of Ken Burns’s “The Roosevelts,” Pt. 1

By Seth Studer

Ken Burns

Preface: No spoilers, please…

I am not yet finished watching Ken Burns’s fourteen-hour long saga The Roosevelts: An Intimate History. Nevertheless, I can already reflect on what Burns’s latest contribution tells us about the much-touted “Golden Age of Television.” An historical documentary on PBS spanning fourteen hours, most of it comprised of black-and-white archival footage and Baby Boomer talking heads (e.g., George Will, Doris Kearns Goodwin, and one or two real historians), is being sold to the American public as “intimate.” And the series is intimate; Burns’s focus almost never turns from Roosevelts Teddy, Franklin, or Eleanor. The Roosevelts is easily his most intimate portrayal of a Great American (or, in this case, a Great American Family), and it reflects his growth as a filmmaker over the last half-decade, beginning with The National Parks: America’s Best Idea (a hot mess, to be sure, but a beautiful hot mess) and Prohibition (a tight little policy pic – his best film). In many ways, The Roosevelts is a return to the Burns I knew and hated in The Civil War and Jazz. But he’s returned wiser, sharper. His obnoxious Great Man, Big Battles gloss on the byzantine complexities of American social and political history has never, ever looked so good and contained so much substance. We can learn a lot from Burns’s most recent hybrid success-failure. Specifically, how to build a better middlebrow within American mass culture: a middlebrow it deserves and, I think, a middlebrow it wants.

The Roosevelts’ final episode aired last Saturday, but I’m not worried about catching up. Since the middle of last week, PBS has posted the following message to my Facebook feed at least twelve times: “Remember: you can binge watch the ENTIRE series – until Sept 28th – on your local PBS station’s website or Roku.” Today, the most consistent and interesting purveyor of American middlebrow culture is AMC. Mad Men, Breaking Bad, The Walking Dead: the pretensions of HBO with half of the budget and twice the accessibility. And AMC uses the exact same language to sell me Mad Men that PBS is using to sell me The Roosevelts.

“Binge all over me,” says Betty Draper.

Much like Netflix, which has built a business model premised on its customers’ desire to “binge” on original content (we all finish House of Cards and Orange is the New Black knowing full well it will be an entire year before we get new episodes), AMC is encouraging its audiences to consume its products in the manner of a frat boy seeking to increase his blood alcohol content as quickly as possible, or in the manner of a psychologically distressed person for whom food is a dangerous psycho-physiological outlet. Given the well-established link between consumption, consumerism, and sex (“INDULGE” is the word they coupled with Christina Hendricks’s Joan Harris), no one is really surprised by AMC’s ad campaign. But when the same tactics are applied to a 14-hour documentary about Eleanor Roosevelt, the time has come to ask some interesting questions.

Part One: Ken Burns – not a Historian, but he plays one on TV!

Throughout the 1980s, Ken Burns directed small documentaries on topics ranging from the Shakers to Huey Long and the Statue of Liberty. In 1990, he earned national fame for his seventh film, The Civil War, a nearly twelve-hour documentary about the Conflagration Between the States that, amazingly, managed to say very little about the causes – social, political, and cultural – of the war itself. A viewer could watch all 690 minutes of Ken Burns’s Civil War and learn nothing about the Civil War. Besides the battles, of course. Burns spends as much time on the Battle of Chattanooga (the third most important battle fought in Tennessee, the second or third least important state in the Confederacy) as he spends on the policy battles that raged between Lincoln, his advisors, and the Congress; or the internal divisions and resentments within the Confederacy itself, which did as much to weaken its cause as the Union juggernaut. Slavery is discussed, obviously, but as a fact and not a consequence of U.S. policy; the impact of its demise on U.S. politics is minimized. Every single black character is voiced by Morgan Freeman, who gravely intones the words of Frederick Douglass and then hams it up, step ‘n’ fetch it-style, when reading the words of perfectly literate enslaved (or merely working class) black men.

If Burns’s later films would suffer from an overemphasis on personalities, his Civil War underplays them in favor of events. Lincoln’s political acumen; Grant and Sherman’s brutal tactical genius; the stubborn dignity of black leaders who, receiving emancipation, refused to prostrate themselves before Northern whites; the grace with which many Confederate leaders, Lee among them, accepted defeat; all of these Great Persons are overshadowed by Great Battles, so that viewers in every media market from Picacho Pass to Pennsylvania could look out across their amber waves of telephone wire and pavement and intone, “It happened here.”

Among the talking heads, the thickly accented Shelby Foote utterly consumes Burns’s Civil War. He appears at least ten times more frequently than any other historian or author. Foote is a documentarian’s dream: folksy, charismatic, intellectual, and a born storyteller. But Foote is also kind of an idiot. When he volunteers to name “two authentic geniuses” produced by a war that gave America seven presidents, he identifies Abraham Lincoln (one of the great statesmen of the nineteenth century, along with Benjamin Disraeli and Otto von Bismarck) and Nathan Bedford Forrest (a lieutenant general in the Confederate army and founder of the terroristic Ku Klux Klan’s first iteration). This declaration had apparently once placed Foote in hot water with a Southern relative, who grimly intoned, We never thought much of Mr. Lincoln down here. Foote chuckles in response to his own anecdote. Southerners have strange feelings about that war, he observes.

Ya think?

Foote is not a Confederate partisan. He is simply a Civil War buff. But a buff is the most dangerous kind of historian. I am a Nixon/Watergate buff, which is why I am reluctant to make major claims about the man or the event. Foote has made a career buffing up the Civil War, giving it sheen but no shine, clearing away dirt but revealing nothing. Burns is in awe of Foote, whose volumes on the Civil War constitute the kind of history most popular with “buffs”: battles, more battles, personalities on the field, more battles, blood, guts, glory. We remember the names of colonels and privates but none of the congressmen. We learn more about Forrest than we learn about William Seward, Charles Sumner, Thaddeus Stevens, Alexander Stephens, or Judah P. Benjamin.

Here’s a tip: any middlebrow history of the American Civil War that does not begin – begin – with transatlantic trade, not merely of slaves but of all goods, is lying to you. Period.

Between The Civil War and The Roosevelts, Ken Burns’s style underwent significant improvement. He produced two “event” histories of Baseball and Jazz, widely praised except by hardcore fans of baseball and jazz, alongside shorter treatments of subjects we portray on banknotes and passports: Lewis and Clark, Thomas Jefferson, Mark Twain, and westward expansion. In 2007, he attempted to catch the White Whale of all American historical narratives, World War II, but took such a circuitous route – no straightforward, consensus-minded historical narrative; firsthand accounts from veterans; a “bottom-up” approach to major events – that he confused most of his viewers, who came expecting “the Burns treatment” (letters from Eisenhower, Tom Hanks as Patton, David McCullough’s eyebrows, etc.). The War was a failure.

By the end of The War, Burns seemed finally to grasp his own unique strengths and limitations. On the one hand, he could spew middlebrow schlock about the United States of America better than anyone. On the other hand, he had a tendency to attach himself to Great White Dudes (Shelby Foote, Thomas Jefferson) and no capacity to represent the subtle movements upon which history progresses. Why not, then, spew schlock and attach himself to lesser-known, more interesting Great White Dudes? And why not cast these Dudes in a story less obvious than, say, THE CIVIL WAR or THE WEST?

What followed were the best documentaries Ken Burns has yet made.

In my next post: The National Parks, Prohibition, The Roosevelts, and Burns in TV’s “Golden Age”

The Wizard of Oz Is an Anti-Finance Manifesto

By Kindred Winecoff

Somewhat apropos of my previous post is the following anecdote, which I’ve read a number of times and have always forgotten. I’m pasting it here for posterity’s sake. It is from Daniel Little’s review of David Graeber’s Debt: The First 5,000 Years:

There are many startling facts and descriptions that Graeber produces as he tells his story of the development of the ideologies of money, credit, and debt.  One of the most interesting to me has to do with The Wonderful Wizard of Oz.

L. Frank Baum’s book The Wonderful Wizard of Oz, which appeared in 1900, is widely recognized to be a parable for the Populist campaign of William Jennings Bryan, who twice ran for president on the Free Silver platform — vowing to replace the gold standard with a bimetallic system that would allow the free creation of silver money alongside gold. … According to the Populist reading, the Wicked Witches of the East and West represent the East and West Coast bankers (promoters of and benefactors from the tight money supply), the Scarecrow represented the farmers (who didn’t have the brains to avoid the debt trap), the Tin Woodsman was the industrial proletariat (who didn’t have the heart to act in solidarity with the farmers), the Cowardly Lion represented the political class (who didn’t have the courage to intervene). … “Oz” is of course the standard abbreviation for “ounce.” (52)

The symbolism of the “yellow brick road” needs no elaboration.

UPDATE: As was pointed out by Thomas in the comments, this was discussed long ago in the Journal of Political Economy.

The Left’s Omertà

By Kindred Winecoff

You can’t be a star for what you say, only for the way you say it. Far from being driven apart by differing opinions, [Christopher] Hitchens and [Robert] Conquest were drawn together by their common love of language. The long consequence of their encounters in those years can be enjoyed in the opening pages of Hitchens’ little book Orwell’s Victory (2002) [ed.: Why Orwell Matters in the U.S.], where its author is to be found conceding that Conquest might have had a point about the Bolsheviks all along. But those who never doubted that he did can’t expect credit for having been right. What we can expect is to be dismissed for having been on the Right. To be a liberal democrat was considered reactionary then, and to have been so then is to be considered reactionary now. People who have abandoned erroneous opinions would be giving up too much if they ceased to regard people who never held them as naive. As Revel pointed out, the Left demands a monopoly of rectification.

— Clive James, criticizing one of his friends while writing on Solzhenitsyn, As of This Writing, p. 225 of the 2003 Norton hardback.

I have enjoyed and profited from much of Corey Robin’s writing, but lately he’s been tilting at windmills just a bit. Last year he famously charged Hayek, and with him the rest of the right — the definition of which seems to be those for whom Robin does not care — with pronounced übermenschy tendencies. The convoluted and conspiratorial reasoning of that essay was more reminiscent of a Dan Brown plot or a Glenn Beck chalkboard than Robin’s earlier work. I objected to that article at the time and hoped it was just a misfire, but since then he’s conjured more and more smoke from less and less fire. The Petraeus Affair was, quite simply, not worth the time and effort that was put into it. The BDS/ASA kerfuffle, in which Robin insisted that boycotts were unprincipled only if they were in response to other boycotts, was even more absurd. (If you don’t support BDS and the ASA boycott guess what! You’re a “latter-day McCarthyite“. There should be a version of Godwin’s Law for McCarthyism, which we’ll come back to in a minute.)

Now he has written this:

James Madison, Federalist 51:

The constant aim is…that the private interest of every individual may be a sentinel over the public rights.

Elia Kazan, on why he named names:

Reason 1: “I’ve got to think of my kids.”

Reason 2: “All right, I earned over $400,000 last year from theater. But Skouras [head of Twentieth-Century Fox] says I’ll never make another movie. You’ve spent your money, haven’t you? It’s easy for you. But I’ve got a stake.”

Which led to an exchange somewhat limited by the 140 character cap:

https://twitter.com/whinecough/status/436371599353069568

https://twitter.com/whinecough/status/436371764608655360

https://twitter.com/whinecough/status/436373072979832832

https://twitter.com/whinecough/status/436376123627167744

As I said, I understood Robin’s attempt at making a point. But the point is invalid, and Robin’s blind spot is disturbing. If Kazan’s testimony to the House Committee on Un-American Activities (HUAC) evidenced a public disaster then the disaster had occurred well before Kazan entered the room. Kazan’s choice was to speak truthfully to a democratically-elected legislature — at a time when the Democratic party controlled the House, Senate, and Presidency — that was investigating sabotage against the government of the United States, or to defy it. At first he defied it. Under increasing pressure he named eight names, all of which were already known by the HUAC. Of those, one was already dead, another also testified and contracted with Kazan to name each other so both would avoid blacklists (they did), and the others continued to work on the same New York stages that Robin indicates were more than good enough for Kazan. Kazan’s reputation was the most damaged of any as the result of this event. So where’s the public disaster?

All of those Kazan named were members of or fellow travelers with the same American communist party (CPUSA) that was allied with Stalin before and after the war (including before, during, and after the Molotov-Ribbentrop Pact) and that had “tried” Kazan via an internal judicial proceeding for the crime of being insufficiently activist when doing so would have cost Kazan his career at a stage when he wasn’t well off. Kazan kept his ideals but left the Party as a result. It is worth repeating: Kazan’s livelihood was threatened by American communists in the 1930s, well before Congress came calling. If the CPUSA had been successful in their longer-term revolutionary aims his livelihood — and given the Stalinist proclivities of the American Party at the time, perhaps his life — would have been jeopardized once again. Even after leaving the Party Kazan remained an ideological communist until the Hitler-Stalin pact destroyed what illusions still remained. That was his Kronstadt moment, as it was for many communists.

So what principle was at stake for Kazan, exactly, that he should have sacrificed his own interest to avert “public disaster”? To defend those who had previously bullied him and would undoubtedly do so again if given the chance? To support the members of and sympathizers with a Party that had stuck with Stalin through his murderous show trials, his cynical alliance with Hitler, and his imperial occupation of Eastern Europe? What kind of principle is that? Or, as Kazan put it to Arthur Miller,

To defend a secrecy I don’t think right and to defend people who have already been named or soon would be by someone else… I hate the Communists and have for many years, and don’t feel right about giving up my career to defend them. I will give up my film career if it is in the interests of defending something I believe in, but not this.

This is not hard to understand. There are some people in my life for whom I would sacrifice quite a lot. There are others for whom I would sacrifice a much smaller amount. And there are still others for whom I would sacrifice nothing, because they have wronged me and those that I love or because they espouse principles that I find repugnant. By all accounts, Kazan considered the question in earnest and recognized no principle worth defending. From my vantage point it is difficult to disagree: CPUSA was a repugnant institution, and members of repugnant institutions should not be guaranteed lucrative positions in glamorous industries if only they can convince everyone to hide the fact of their membership, whether it has lapsed or not. Still, rather than acting vindictively, Kazan testified in a way that would cause the least pain for himself and for those around him: he named names already named. He then used the career he saved to make numerous movies from the perspective of the non-communist and non-authoritarian left, including Viva Zapata! the year after his HUAC testimony and On the Waterfront the year after that.

I will agree with everyone who says that the HUAC over-stepped its bounds by miles, that many or most of the members of the HUAC were more concerned with political gain than principle, and that the entire scene was noxious. But the left’s valorization of all those who refused to testify before HUAC and vilification of those who did raises a different set of questions. Who today would side with Alger Hiss over Elia Kazan? Because when Hiss perjured himself concerning his own espionage — as the result of a libel trial Hiss initiated against Whittaker Chambers, it must be remembered; he brought it on himself in more than one way — he not only bamboozled the left but also catalyzed HUAC into the McCarthyite machine in the first place. (It also jump-started the previously mediocre career of Richard Nixon.) And it was a perjury. Nor was Hiss the only one. Had Harry Dexter White lived a bit longer he would have become even more famous than Hiss.

Kazan was brought before HUAC four years after White lied under oath and then died under the strain and two years after Hiss was convicted. In between those two events Richard Nixon graduated from the House to the Senate and McCarthy went on the war path. Both of those events would have been much less likely had the postwar left not unthinkingly supported Hiss. Meanwhile, Kazan did not commit espionage, falsely accuse others of libel, perjure himself, or otherwise discredit the anti-communist left for decades. He did not create a political launching pad for McCarthy and Nixon. He did not reveal any new information. It is quite possible that he did not even materially injure anyone’s life or reputation, at least beyond the extent that he would have been injured had he refused to name already-known names. And if Kazan repudiated the CPUSA — an organization that acted in secrecy with the avowed goal of demolishing the non-Soviet left and destroying the American state — by 1952 it was certainly worth repudiating. According to the International Committee of the Fourth International (in a post pillorying Kazan’s defenders, no less):

Tragically for them and the working class as a whole, the Communist Party by the time of the blacklist had been destroyed as a vehicle of progressive social change. It was a Stalinist party, with a cynical and treacherous leadership, loyal to the twists and turns of the bureaucracy in Moscow.

How is this not worth denouncing? The Trotskyists may be biased but they are not wrong. And yet it is Kazan who is scorned rather than Hiss, despite the fact that the latter did exponentially more damage to the credibility of the left than the former. Kazan contributed to the purging of Bolsheviks from the left — a necessary precursor to the social democratic gains of the succeeding decades — at the expense of making eight members of America’s upper class slightly less materially comfortable.

Why is this so objectionable? According to Robin, it is because Kazan acted in his “private interest” while being interrogated in a government proceeding that he opposed and initially resisted. Robin believes Kazan should become an object lesson for why the Right is wrong. Never mind that Kazan remained a liberal all his life. Never mind that Kazan’s testimony, in the context in which it was given, was not merely a question of private interest. Had Kazan wanted to do more damage to the left he undoubtedly could have.

Corey Robin’s post is mood affiliation in pure form. I have no idea what Robin actually thinks of Hiss. Everything he has written about this period acknowledges only reactionary suppression, never the possible reasons for it. The indices of both his books contain no mention of Hiss, Google reveals nothing written by him on the subject, and the proceedings from this conference have transcripts for every single speaker but Robin. The silence is curious for someone who has written so extensively on the issues that Robin has, especially when his indices reveal multiple entries for Chambers, communist collaborators, and the Red Scare. Robin is definitely not ignorant. The question is whether he is credible. He increasingly reads like an out-of-time 20th century apologist for anything that is not Right.

Of course, Robin is only using Kazan to discredit Madison and then, via some unclear transitivity, modern-day right reactionaries or maybe the entire structure of American governance. But why? Madison was one of nascent America’s strongest republicans, and in the snippet Robin pulls (as elsewhere) he essentially adopts the language of Rousseau. Here is what happens during the ellipses in the quote Robin provides above:

… to divide and arrange the several offices in such a manner as that each may be a check on the other…

That is, Madison wishes for a broad distribution of power, and constant competition among those who would seek it, so that none of them may ever fully obtain it. Robin finds this objectionable because private power is one part of that equation. This is expressing too much and too little all at once. Can private interests cause public disaster? Of course. Does this imply in any way that private interests ought to be abolished? There is not a single data point in history that recommends this conclusion. The irony in all of this, of course, is that Robin finds Kazan’s collaboration with the government objectionable. If a democratically-elected government in which all branches are controlled by the only left party with substantial popular support does not meet his criteria for “public interest” then what would? It was 1952… the other options were not appealing.

Christopher Hitchens once displayed the attitude Clive James criticized by writing about the “loyalty oath”: “If Hiss was wrong, then Nixon and McCarthy were right. And that could not be.” But it was, in this case even if in no others, and it remains so, and Kazan either knew it or sensed it. The movement that coalesced in defense of Hiss’ fabrications is not worth defending now. It galvanized all the worst reactionaries in the postwar era. It contributed nothing to the improvement of the lives of the working class. None of the names Kazan gave were even a part of the working class, nor did they represent it. Meanwhile, the language is important: only the credulous take loyalty oaths. Kazan broke omertà 62 years ago, and Robin isn’t finished with him yet. 

The nice thing about history is that we get to see how it ran. It turns out that the greatest period for the working class occurred in the United States in the twenty years after Kazan testified. This flowering was not a product of CPUSA agitation but of the incrementalist liberals like Kazan that they opposed. Meanwhile, a short four years later, another Kronstadt moment would occur. At that moment who was overdue for reflection: Kazan or his former friend Arthur Miller, who attacked Kazan by writing The Crucible? They later reconciled, and once Miller finally got around to protesting the suppression of expression in the USSR his works were subsequently banned. He at least learned the lesson. (Sort of. He refused to put his name to an open letter protesting Khomeini’s fatwa against Rushdie.)

The journalist Elmer Davis once wondered “How long will these ex-Communists and ex-Sympathizers abuse the patience of the vast majority which had enough sense to never be Communists or Sympathizers at all?” Quite a long time, apparently. Robin’s ability to castigate the usual suspects — Burke, Buckley, and Bush — has always been impressive, but by this stage one wonders if he’s run out of turf. When he has moved into new areas he has displayed reactionary tendencies of his own: if it ever was Right it can’t ever be right. This is demoralizing for those of us who identify with the left but have no interest in genuflecting to “radical” absolutists of yesteryear or today. In the end such demands will only produce ambivalence in many, as they did in Kazan:

I don’t think there’s anything in my life toward which I have more ambivalence, because, obviously, there’s something disgusting about giving other people’s names. On the other hand . . . at that time I was convinced that the Soviet empire was monolithic…. I also felt that their behavior over Korea was aggressive and essentially imperialistic…. Since then, I’ve had two feelings. One feeling is that what I did was repulsive, and the opposite feeling, when I see what the Soviet Union has done to its writers, and their death camps, and the Nazi pact and the Polish and Czech repression…. It revived in me the feeling I had at that time, that it was essentially a symbolic act, not a personal act. I also have to admit and I’ve never denied, that there was a personal element in it, which is that I was angry, humiliated, and disturbed–furious, I guess–at the way they booted me out of the Party…. There was no doubt that there was a vast organization which was making fools of the liberals in Hollywood…. It was disgusting to me what many of them did, crawling in front of the Party. Albert Maltz published something in New Masses, I think, that revolted me: he was made to get on his hands and knees and beg forgiveness for things he’d written and things he’d felt. I felt that essentially I had a choice between two evils, but one thing I could not see was (by not saying anything) to continue to be a part of the secret maneuvering and behind the scenes planning that was the Communist Party as I knew it. I’ve often, since then, felt on a personal level that it’s a shame that I named people, although they were all known, it’s not as if I were turning them over to the police; everybody knew who they were, it was obvious and clear. It was a token act to me, and expressed what I thought at the time….
I don’t say that what I did was entirely a good thing.

What’s called “a difficult decision” is a difficult decision because either way you go there are penalties, right? What makes some things difficult in life is if you’re marrying one woman you’re not marrying another woman. If you go one course you’re not going another course. But I would rather do what I did than crawl in front of a ritualistic Left and lie the way those other comrades did, and betray my own soul. I didn’t betray it.

A vibrant 21st century left does not need to assume every position of its 20th century forebears. It can, and should, be reflective. It can, and should, be willing to acknowledge the gains made by the liberal capitalist compromise. And it can, and should, acknowledge that loyalty oaths and secrecy pacts were mistakes of the past, while openness and transparency — even in the face of persecution — is self-recommending. Rather than excoriate a potential liberal ally for making a reasonable choice under duress sixty years ago we can, and should, try to build broader coalitions rather than narrower. Any left that seeks to sublimate all private interests into knee-jerk collectivism in the 21st century, or any other, is doomed.

The Worst President of the 20th Century: Part Five

By Seth Studer

In yesterday’s post, I wrote that presidents should not be judged as individuals but as metonyms for a complex of policies, persons, and decisions. The problem with this sensible approach to presidential history is that you can’t really make lists comparing and ranking presidents across decades.

And on President’s Day, that’s no fun.

Given that my ability to talk about the presidency with any credibility is more or less limited to the 20th/21st century, I will confine the scope of my list to the eighteen presidents whose entire tenure took place within those two centuries (i.e., I’m excluding William McKinley, who was assassinated in September 1901, and including George W. Bush). I will also write two lists: one for the greatest presidents since 1902 and one for the worst.

And here’s the real catch/compromise: both lists will include all eighteen presidents.

I will write one list with the disposition of Arthur Schlesinger Jr. and one with the disposition of Christopher Hitchens. I will write one list judging the presidents by the mean of their greatest accomplishments and the other by the mean of their failures. The results will be two very different lists. Great presidents will also be terrible. Presidents ranked in the middle of one list will rank high, or low, on the other.

In the interest of brevity, I have attempted (attempted) to summarize their accomplishments and failures in one or two or five sentences (although I’ve allowed myself the option/luxury of separate “foreign” and “domestic” categories). The conversation can continue in the comments, if you like (I’ve written plenty that doesn’t appear here, so fire away!).

I’m posting List #1 today. List #2 is coming soon.

The Greatest Presidents since 1902

1. Lyndon B. Johnson

Domestic: Upon assuming office, Johnson called Martin Luther King Jr. and said, “I’m going to try to be all of your hopes.” In the subsequent two years, Johnson did more for black civil rights than any other U.S. president before or since, including Lincoln. The 1964 Civil Rights Act alone earns him first place.

LBJ & MLK

Foreign: The best we can say about LBJ’s disastrous, oscillating Vietnam policies is that he inherited them from a reckless president and, by 1968, was closing a deal on a genuine ceasefire (a deal that was sabotaged by #4).

2. Dwight D. Eisenhower

Foreign: He led the United States through the most volatile and sensitive years of the Cold War – the most dangerous period of the 20th century – with nary a misstep.

Domestic: Eisenhower consolidated and retained the best elements of the New Deal while encouraging economic growth on an unprecedented scale. If there was any doubt before, Eisenhower made clear that FDR’s reforms were a permanent part of American life. He was arguably our greatest “peacetime” president, if you consider the Cold War “peacetime.”

3. Franklin D. Roosevelt

Domestic: What we call “the New Deal” was a complex of not-always-interrelated policies – some good, some bad – the net impact of which mitigated the worst effects of the Great Depression. Easy to forget: FDR essentially governed as a centrist during a period of social unrest and dangerous extremes, when socialists and fascists alike had loud voices in American streets.

Eisenhower by Rockwell

Foreign: He deliberately kept the United States out of World War II until the last possible moment, and then fully committed all the resources of the U.S. to the war.

4. Richard Nixon

Foreign: Who made your cell phone?

Nixon governed at the precise moment the rest of the world recovered from World War II, when America’s economic standing was most vulnerable. Nixon understood this. He envisioned a “post-American” world when Fareed Zakaria was learning to read, and – 1970s oil crises aside – he succeeded. When Bob Dole eulogized his mentor by declaring that “the second half of the 20th century will be known as the age of Nixon,” he was only wrong in saying “will,” rather than “should.” We’re living in Nixon’s world, a 21st century where the United States – no longer the world’s lone superpower (that lasted like five years) – is nevertheless positioned very, very nicely.

Meanwhile, Nixon and Kissinger made pragmatism cool, shaving the ideological edge off anti-Communist rhetoric in the U.S. and paving the way for critical compromises, important treaties, and the much beloved Reagan-Gorby relationship.

Nixon essentially won the Cold War.

Domestic: Most of his economic reforms reflected his foreign policy (i.e., to ensure the U.S. is economically well-positioned for the next fifty years). His administration accomplished many liberal goals that Democrats had struggled to accomplish (e.g., creation of the EPA), but on more moderate terms.

FDR, on the left.

5. Theodore Roosevelt

Domestic: The iconoclast in me wants to take the cult of TR down a peg (and down several notches on this list). He feels like an overrated president to me. But his progressivism and trust-busting were so impressive and critical given the times, and the precedents they set were so far reaching (for good and ill, but mostly for good), that I can’t place him any lower.

Plus: people who gush over National Parks today are irritating, but people who opposed National Parks back then were more irritating.

Foreign: Whenever a U.S. president brokers the end of a war, it’s a good thing. I confess I find TR’s foreign policy a little confusing. He’s often wrongly blamed/given credit for “American imperialism” or U.S. interventionism, but American adventures overseas date back to Thomas Jefferson at the latest. We forced Japan out of isolation when a guy named Millard Fillmore was president. But TR amplified and clarified the foreign policy of his predecessors, to be sure, and is probably best imagined as a conduit between the quiet, sneakier, “we have half the globe to ourselves” foreign policies of the 19th century, a period when much of the world could be ignored, and the fully-engaged, globalized foreign policies of 20th century administrations.

Also, he had a personality.

Teddyball

6. Woodrow Wilson

Domestic: People forget that the income tax was levied to help offset tariff reform, which would have drastically improved the United States’ position in the global economy, had the nations of Europe not decided that a devastating world war, an unprecedented socialist revolution, and the collapse of four empires was a quicker way to improve everyone’s position in the global economy (or at least level the playing field).

Either way, the United States came out on top. And Wilson’s model of governance endured. If Teddy Roosevelt transferred prestige and power back from the legislative to the executive branch, Wilson molded that power into its current form. With the Federal Reserve, et al, Wilson consolidated the Progressive reforms of the previous two administrations into a new, permanent system of government that, like it or not, produced the most prosperous nation and the most prosperous century in the history of civilization. (One major depression < four wildly unprecedented economic booms.)

Foreign: He tried his hardest.

7 (tie). Ronald Reagan

Foreign: As more documents are declassified, we see how cautious, sensitive, and covertly pragmatic was Reagan’s approach to the Soviet Union…even in his first years as president, when the collapsing Soviet Union’s internal politics were extremely dangerous and volatile. Reagan’s anti-Communism extended only to small, inconsequential nations. When the time came to end the Cold War, he did virtually everything right. Even his intransigence served a purpose, providing Gorbachev leverage against the Soviet equivalents of, well, Ronald Reagan.

Domestic: People made money.

Wilson, pleased after winning the H.P. Lovecraft Lookalike contest.

7 (tie). Bill Clinton

Domestic: Like Wilson after TR/Taft or Eisenhower after FDR/Truman, Clinton consolidated and blunted major reforms implemented by the opposite party. By the end of his two terms, any serious opposition to neoliberalism in his own party was long dead. He raised taxes during a period of unprecedented economic growth, eventually generating the surplus that Republicans were always talking about (like Nixon: “what they put forward, I put through”). On almost everything but income taxes, Clinton was more Reagan than Reagan. The so-called “Reagan Revolution” ought to be renamed the “Reagan-Clinton Revolution.”

Foreign: By the time he took office, it was too late to reverse the political collapse of post-Soviet states/satellites. He responded to terrorist attacks as violations of international law, worthy of a military response but not full-on all-in total warfare. Y’know, the good old days.

9. Harry Truman

Domestic: Truman put the civil rights of black Americans on the Democratic agenda, risking his re-nomination over an unpopular plank that would lead to the 1964 Civil Rights bill. People give JFK credit on Civil Rights, when he was essentially the weakest link in a chain between Truman, Eisenhower, and LBJ.

Truman successfully transitioned the U.S. economy from a wartime to a peacetime footing, leading the nation through the 1946 recession and fears that the United States would share the British postwar experience (where recession persisted well into the 1950s).

Do you have to ask? Have you seen that face?

Foreign: Truman’s is the hardest foreign policy to assess. He ended the war with Japan and helped rebuild what would become one of the world’s most successful democracies. He fired a psychotically dangerous and dangerously popular general, at great political risk.

The atom bomb was an extension of Roosevelt’s Japanese war policies; Roosevelt didn’t build the bombs not to be used. In my mind, there’s no exchange rate on human life. Death by an atom bomb is no more evil than death by firebombing (and firebombing was far more destructive), so I don’t hold Truman in special contempt merely because he used atomic weapons. (Hindsight helps: he apparently set no precedent for U.S. or Soviet leaders, and we survived the Cold War without a nuclear exchange. If Truman had pulled the trigger on an apocalyptic war, perhaps I’d assess Hiroshima differently.)

Did Truman avert war with the Soviet Union or a Soviet invasion of western Europe? Probably not. The U.S.S.R. was too weak to do anything but heave threateningly at its borders. Were the policies that framed what we’d eventually call “the Cold War” prudent and successful? Today I wish we’d favored engagement over containment; but at the time, containment was viewed as an alternative to direct conflict. Is it a victory when you save Berlin but lose half of Germany? I don’t know. Truman essentially created Israel, a fact that should inspire pride but inspires ambivalence in many. Even his greatest achievements in foreign policy make you wince from time to time.

10. William Howard Taft

Domestic: Scaled back TR’s reforms without abandoning the Progressive project. He made Progressive reforms more palatable to the business community, who by 1909 felt alienated and antagonized, and would have mobilized against further executive interference and reversed TR’s best reforms had Taft not essentially held out an olive branch (a branch that cost him a second term).

Remember when conservatives impeached this guy?

11. Warren G. Harding

Domestic: An underrated president. Elected to reform the excesses of the Progressive era, which he began to do…and his reforms would have been more moderate than Coolidge’s excessive inaction. If Secretary of Commerce Herbert Hoover had gone to President Harding, rather than to President Coolidge, to warn of recklessness on Wall Street, Harding might have listened. Harding’s famous scandals are overplayed and meaningless.

12. Herbert Hoover

Domestic: He did what he could. He intervened in the economy as the Depression worsened. Most of his stimulus policies were adopted by FDR and incorporated into New Deal programs. If FDR had been elected president in 1928, he would have lost in 1932, too.

13. George H.W. Bush

Domestic: Americans with Disabilities Act. Try getting around New York City in a wheelchair in the 1970s.

Foreign: Asserted strong civilian leadership over the military (military historian Thomas E. Ricks calls Dick Cheney the greatest secretary of defense in the modern era for this reason). Demonstrated how to fight a short, quick, effective war after the Cold War.

Coolidge

14. John F. Kennedy

Foreign: He taunted Khrushchev over and over, and when Khrushchev responded by installing nuclear missiles in Cuba, he didn’t overreact. He got scared, he calmed down, and he behaved soberly at a very, very critical moment. I give him credit for that.

15. George W. Bush

Domestic: Made earnest attempts to reform Social Security, education, and a few other things.

Foreign: Excellent work in Africa. Convinced recalcitrant extremists in the Republican party to chill out about AIDS, other social issues.

16. Calvin Coolidge

Domestic: People made money.

Time’s headline does its best to keep Vietnam from unifying

17. Gerald Ford

Domestic: Inspired John Updike to write, “What was unthinkable under Eisenhower and racy under Kennedy had become, under Ford, almost compulsory. … [W]as there ever a Ford Administration? Evidence for its existence seems to be scanty.”

18. Jimmy Carter

Foreign: One rarely wants to praise a president for instigating a war, but rumors of the Carter administration’s hand in the Soviet invasion of Afghanistan – that they exacerbated conditions in Afghanistan and prompted the flailing Soviet government to make a rash and ultimately fatal decision, to “give Russia their Vietnam” – if true, well…it’s the sort of thing that, if Reagan had done it, Republicans would never stop celebrating.

And although Carter deserved some credit for the Camp David Accords, most of the credit and all of the risk belonged to Sadat.

Next time: the Worst Presidents since 1902.

The Worst President of the 20th Century: Part Four

By Seth Studer

This is part of an ongoing series about 20th century American presidents, what they did, and how we think and talk about them. Each essay can be read on its own, but if you wish to see the others, click here.

Hegel, busily inventing the 20th century while students patiently await their own intellectual germination.

 

1. Presidential History: from the Academy to the Public

On December 15, 1996, JFK hagiographer Arthur Schlesinger Jr. used the pages of the New York Times Magazine to work through some daddy issues.

Some background:

Arthur’s father, Harvard professor Arthur Schlesinger Sr., was one of several historians who in the 1920s and ‘30s helped develop, defend, and establish what would become the default methodology in U.S. history departments. These historians – influenced by post-Marxian social theory in Europe and the rapid development of sociology over the past half-century – argued that “bottom-up” analyses of history yielded better knowledge than the then-still-traditional “top-down” approach.

Further background:

Imagine you are a student of history at the University of Iowa in 1910. You could expect an education with two foci: verifiable facts and historical narratives. These foci generated two activities: the verification of facts using primary sources and the construction of narratives using those facts.

This method of study was executed with one underlying assumption: history does not repeat itself. And because the past will not return, intensive research into the diets of 17th century French peasants is less useful than a study of King Louis XIV’s domestic policies. Louis was the main actor, after all, the one who made things happen. Seventeenth-century French peasants were minor actors by comparison; their impact (even if you consolidate them) was minimal. If you want to understand the past, start at the source.

Much like the model of the American university itself, this traditional historical methodology originated in 19th century Germany. This method had been practiced to varying degrees by amateur historians since the Renaissance. But it was formalized by professional historians in Germany (esp. Leopold von Ranke and Friedrich Karl von Savigny) in reaction against Hegelian philosophy, which explained, totalized, and subordinated history to transhistorical systems. Hegel’s work successfully pollinated a thousand significant ideas and academic disciplines. And while every university in Europe and America had a few token Hegelian historians, the fact-and-narrative method dominated. In Anglo-American historical scholarship, “facts” were generally details surrounding big events caused by big actors (important men, but occasionally large populations if they could be viewed as “top-down” actors).

By the time Schlesinger the Elder arrived at Harvard (in 1924), new approaches to historical knowledge were percolating. They had already developed in other disciplines over the past half-century. These new approaches would use economic pressures, material conditions, and theories of society to write history. Although this shift in focus seemed radical to older historians, these new methods shared with their predecessors an aversion to abstract models and systems (Hegelian or otherwise), a rejection of the notion that “history repeats itself” (or even that it can rhyme without awkwardly forcing it), a commitment to facts, and a deference to empiricism as an epistemological base.

These new methods did not quell interest in the great persons of history. Historians tempered their emphasis on Sun Kings and Great Events. Meanwhile, the general public remained entranced by the glow of George Washington and the Civil War. Both remained invested in narratives. And narratives tend to reveal patterns; it is difficult to inoculate yourself against this. On the eve of World War II, Schlesinger Sr. published an article that proposed a cyclical interpretation of U.S. political history. He argued that trends in U.S. federal polices followed a pendulum-like pattern. Although this hardly amounted to a large Hegelian system, Schlesinger Sr. had stepped outside the mainstream.

Except that he didn’t. By 1948, Schlesinger Sr. had established a reputation outside academe, and was asked by Life magazine to poll his colleagues and rank the presidents of the United States. I do not know whether Schlesinger Sr. paused to reflect on the inadvisability of such a endeavor, on how many of his own best practices he’d be violating. A list ranking the presidents would only tell us what a handful of historians in 1948 thought, but it would tell us nothing about the presidents. Further, such a list would only encourage the public’s inflated view of the presidency.

Nevertheless, Schlesinger agreed. He asked his participants to assign each president a degree of greatness, ranging from “great” and “near great” to “below average” and “failure.” Each category was assigned a value, and the number of votes each president received in each category determined their place on the list. The results weren’t surprising: Washington and Lincoln at the top, Buchanan at the bottom. The list and its accompanying article in Life were so popular that Schlesinger Sr. was invited to repeat the experiment in 1962. On both occasions, Schlesinger Sr. surveyed nearly 100 historians.

Schlesinger Sr.’s progeny, Arthur Jr., began his foray into the family business at the family’s company in 1940. In 1954, Harvard promoted Arthuer Schlesinger Jr. to full professor sans PhD, largely on the merits of his popular, Pulitzer Prize-winning study of Jacksonian democracy (also: he was Art Sr.’s boy).

"The Historian as Participant": Arthur Schlesinger Jr.
“The Historian as Participant”: Arthur Schlesinger Jr.

Schlesinger Jr. behaved like anyone who’d just received the world’s biggest Free Meals for Life ticket at the world’s greatest university: he resigned, trading Harvard for the volatile world of politics and a chance to fill an elusive, ephemeral, and newly emerging role in American society: the public intellectual.

Long active in Democratic politics, Jr. hit the mother lode in 1960 when he joined the campaign and administration of John Fitzgerald Kennedy. He served as one of several “house intellectuals”: young men hired by Jack and Bobby to lounge around the White House and write memoranda to be set aside as doodling paper for the president (an incurable doodler). These men also provided a requisite level of eggheadedness – they lent an intellectual veneer – to offset the Kennedy glamour (Jack) and thuggishness (Bob). They, like the White House furniture, had a good and honorable purpose.

After Kennedy’s assassination, Schlesinger Jr. wrote A Thousand Days, an instantly popular, hagiographic “insider’s view” of the JFK administration. From then on, Schlesinger Jr.’s reputation as a scholarly but thoroughly public historian-intellectual was impeachable. He confidently spoke about administrations into which he had not enjoyed an “insider’s view” (e.g., Nixon’s). He taught at the City University of New York, but he no longer moved with the currents of historical scholarship. And frankly, between celebrity and serious scholarship, which would you choose?

But Schlesinger Jr. did little to popularize the best practices of his ostensible métier. And so in 1996, when the New York Times Magazine solicited a list ranking the presidents, Schlesinger Jr. must have thought about his father. Although Sr.’s scholarship was far more rigorous than Jr.’s, both men had turned away from hard scholarship to satisfy a very basic desire, one that the overwhelming majority of their countrymen felt, one that (in 1996) a new, radical brand of hardcore historicists had spent nearly two decades combating: the desire for easy access to history, to one’s own national history.

Of course Schlesinger Jr. would oblige.

The 1996 survey resembled the ’48 and ’62 surveys. Identical format. The pool of participants was decidedly smaller (Schlesinger Jr. surveyed twenty-nine professional historians, two politicians, and Doris Kearns Goodwin). Confirming a thesis developed later by Meena Bose et al (which examined hundreds of similar surveys), long-dead presidents fared better than more recent presidents, who tended to fall in the “average” category or lower.

You may see the full list here. If you remove all but the 20th century presidents from Schlesinger Jr.’s list, here are the rankings (accompanied by their presidential GPA):

  1. Franklin D. Roosevelt (“Great,” 3.97)
  2. Teddy Roosevelt (“Near Great” 3.31)
  3. Woodrow Wilson (“Near Great” 3.21)
  4. Harry Truman (“Near Great” 3.10)
  5. Dwight Eisenhower (“High Avg.” 2.34)
  6. John F. Kennedy (“High Avg.” 2.29)
  7. Lyndon B. Johnson (“High Avg.” 2.21)
  8. Bill Clinton (“Avg.” 1.58)
  9. William Howard Taft (“Avg.” 1.52)
  10. George H.W. Bush (“Avg.” 1.45)
  11. Ronald Reagan (“Avg.” 1.42)
  12. Jimmy Carter (“Avg.” 1.37)
  13. Gerald Ford (“Avg.” 1)
  14. Calvin Coolidge (“Below Avg.” 0.88)
  15. Herbert Hoover (“Failure” -9)
  16. Richard Nixon (“Failure” -21)
  17. Warren G. Harding (“Failure” -48)

Schlesinger’s list was widely publicized (a Schlesinger-authored analysis of the list appeared in Political Science Quarterly). Political commentators in the mass media, which had contained less mass backed when Schlesinger Sr. was publishing his lists, used it to render judgments on still-active presidents, politicians, and legislation. One of these judgments had very interesting consequences, something Schlesinger Jr. could not have anticipated or desired.

2. Presidential History: from the Public to Mythology

Will Bunch, a left-wing journalist, has exhaustively and credibly documented the cottage industry of Reagan hagiography that emerged in the mid-1990s. At that time, conservative Republicans were confounded that Bill Clinton’s popularity persisted despite the fact that he wasn’t a conservative Republican. In 1992, the year of Clinton’s election, a Gallup poll fixed Ronald Reagan’s favorability ratings around 44%. Jimmy Carter’s, meanwhile, were around 63%. In Bunch’s account, 1996 was the tipping point for Republican panic over Reagan’s legacy and, writes Bunch, the Schlesinger Jr. rankings were a major factor in this tipping point. Reagan was deemed “Average.” He scored 1.42 out of 4, below both H.W. Bush and Clinton (who hadn’t even finished his first term when the survey was published). This revelation – that history professors at prestigious universities don’t much care for Ronald Reagan – should have elicited a reaction comparable to news of the pope’s sectarian affiliation. But tax guy Grover Norquist, writes Bunch, was sufficiently alarmed and motivated by the rankings to take action. The following spring, Norquist founded the Reagan Legacy Project: a division of his influential Americans for Tax Reform group that would be devoted solely to hyping Reagan.

A summary of the near deification of Reagan among conservatives, and the increased admiration for Reagan among everyone else, during the late 1990s and 2000s is unnecessary. This mood peaked somewhere between Reagan’s 2004 funeral and the 2008 primaries, when the Republican candidates fought desperately to out-Reagan each other. Reagan was a certifiably mythic figure.

But myth-making can be an ugly business. Because in mythology, there’s a frustrating tendency to be killed by one’s offspring.

Now that Reagan has an iPhone, they can follow each other on Vine.

By 2012, something had changed. Republicans, conservative pundits, and your father-in-law were suddenly hearing Reagan’s policies and statements turned against them. Reagan was being quoted by people who looked like Rachel Maddow and argued over minutiae on Daily Kos forums. In February 2011, the centennial of Reagan’s birth, countless media outlets ran profiles of “the real Reagan” or “Reagan the liberal.” Time featured a cover that showed Obama and the Gipper chumming it up against a white background (like two guys in a Mac ad). Among the Slate and Salon class, Reagan’s liberal streak was already an article of faith.

Reagan had backfired.

Like Barry Goldwater before him (and Robert Taft before Goldwater), Reagan no longer seemed so conservative…not because American conservatives had shifted so far to the right but because “conservative” means different things in different eras.

Something else had changed: the distance between the 2012 primaries and Ronald Reagan’s last day in office was roughly the distance between Hitler’s suicide and the assassination of Martin Luther King Jr. The distance between 2012 and the day Reagan became president was the distance between Hitler’s suicide and Reagan’s first presidential campaign against Gerald Ford. Enough time had passed – and enough documents were being declassified – for serious historians to begin assessing Reagan’s presidency with sobriety and distance. Two Princeton historians, Sean Wilentz and Daniel T. Rodgers, published books on the Reagan era in 2009 and 2011. Wilentz’s accessible Age of Reagan was favorably reviewed by George Will in Time magazine. Rodgers’ Age of Fracture was an ambitious attempt to synthesize American culture during the end of the Cold War. Age of Fracture was published 30 years after Reagan’s first inauguration.

The 1980s felt remote.

In 1999, three years after Schlesinger Jr.’s list, Time magazine published a list ranking the presidents of the 20th century as part of their “End of the Century” coverage (they had polled nine journalists and historians, including Schlesinger Jr.). Time published its list with anonymous comments from the participants:

  1. Franklin D. Roosevelt: “Indisputably the century’s greatest”
  2. Theodore Roosevelt: “The great prophet of affirmative government”
  3. Woodrow Wilson: “A great visionary who presided over major domestic advances”
  4. Harry S Truman: “A decent human being with homespun virtues”
  5. Dwight D. Eisenhower: “No hint of scandal either. The good old days”
  6. Ronald Reagan: “Jury still out”
  7. Lyndon B. Johnson: “America would have found a way to give blacks the vote without him, but don’t ask me how”
  8. John F. Kennedy: “Might be first-tier if he had lived longer”
  9. George Bush: “A skilled and decent administrator”
  10. Bill Clinton: “Jury out here too–maybe literally!”
  11. William Howard Taft: “Achieved nothing good with excellent situation left him by T.R.”
  12. (tie) Calvin Coolidge: “Left little historical legacy”; “Could have been greater if faced with challenges”
  13. (tie) Gerald Ford: “Returned nation to normality”
  14. Jimmy Carter: “Should have been a preacher”
  15. Richard Nixon: “The most difficult President to assess”; “Uniquely a failure among American Presidents”
  16. Warren G. Harding: “Whatever personal shortcomings, presided over a period of economic growth”
  17. Herbert Hoover: “Victim of bad luck”

Presidential rankings are interesting because, when you compare them across time, they reveal the fluctuations of American cultural identity and how history is incorporated into that cultural identity. When Schlesinger Sr.’s second poll was published in 1962, half the interest it generated was the addition of a few new presidents, and the other half was the (relatively few) changes between the ’48 and ’62 lists. People wanted to know how the 19th century had changed between 1948 and 1962.

One might think that public rankings (of which there are many) would reveal more about cultural attitudes toward former presidents, because…well, they’re the public. But although the public has a monopoly on the culture, the public alone is not necessarily the best gauge of the culture’s self-conception. Rankings generated by historians arguably tell us more about cultural changes, because historians (a) possess more comprehensive knowledge of U.S. history and (b) try (or imagine themselves trying) to be thoughtful and rigorous in their assessments. For this reason, the changes between their rankings – though smaller and less dramatic than changes on public rankings – are arguably more charged with cultural meaning: these are the bits of culture that filtered through even the historians’ ostensible sense of fairness vis-à-vis the past. What appears to be the thinnest vein proves to be the richest mine.

Imagine the Time list today. Lyndon B. Johnson’s ranking is slightly higher than in the 1996 Schlesinger Jr. list. But in 2014, two Robert Caro volumes and one Affordable Care Act later, I believe he might rank higher. In Lee Daniels’ recent and self-consciously ABOUT AMERICAN IDENTITY!!! film The Butler, Johnson is portrayed more favorably than Kennedy. The film ends with a succession of voices speaking hope, civil rights, and the black experience. Johnson’s is the last white voice we hear. Steven Spielberg and Tony Kushner’s Lincoln reflects admiringly on the qualities that Lincoln and Johnson shared.

Similarly, Wilson’s visionary internationalism might have seemed appealing into the heady, post-historical days of the 1990s. But after nearly eight years of disastrous neo-conservative internationalism (just as visionary as Wilson’s), when Samuel P. Huntington joined Francis Fukuyama on State Department shelves, Wilson’s foreign policy idealism is less attractive. Eisenhower, whose stock has been rising, might take Wilson’s place on the list.

In 2014, Truman and his accomplishments feel distant, while Nixon’s appear towering (the towers that link our iPhones, made in China). Nixon was already undergoing massive rehabilitation in 1999, but many of the historians/journalists on the Time panel lived through the ‘60s and ‘70s. I’ve found it’s difficult to talk reasonably about Nixon with a person whose political life began in the 1960s. Today, though, Nixon would surely be among the top ten. Truman would, too, but lower down.

No serious Carter rehabilitation has taken place. Bill Clinton, meanwhile, received almost instant rehabilitation, and would rank higher today. The narrative of the 1990s – a period of massive economic growth and peace, all of it ruined by the next guy – would prove irresistible to Time‘s panel.

Reagan? Reagan might stay in place, just outside the Top Five. Presidents would move around him, up and down, but he would stay fixed, smiling and content. Since 1999, as documents are declassified and trickle down, historians like Sean Wilentz have confirmed half of what Reagan’s worshippers believe, and dispelled quite a lot of what Reagan’s despisers believe. So it’s a wash.

Herbert Hoover is still a victim of bad luck.

3. Presidential History: from Mythology to Fable; or, Slouching toward Rushmore

Responsible historians would balk at these lists (even if we’re analyzing the historians rather than the presidents – the samples are too small). And rightly so. They encourage the wrong kind of thinking. When I speak to students about presidents, I encourage them to ignore any success/failure paradigm and treat the presidential name – Lincoln, Roosevelt, Clinton – not as a person’s name but as a metonym for a collective: a complex of policies and individuals that must be judged one by one. That approach is more productive, and that collective more interesting, than one man’s career and biography.

And yet we want our Great Men. We want them not merely as they existed in the past, but projected forward into our present: speaking to our problems, condemning our enemies, confirming our prejudices, blessing our decisions.

We want to know the presidents.

But if you want to actually engage with the past, you’ll first find only dead silence. The anti-Hegelians resisted abstracted or cyclical history because they believed that the past speaks only to and about itself. At its most extreme, this view reduces history to delicate facts that crumble with the slightest extrapolation. The practice of history, the ability to make claims about the past, becomes practically impossible. The past is a sealed tomb. Most historians today are more pragmatic, borrowing methods and principles from the social sciences. They borrow these methods because the tools are strong and the excavation of historical knowledge is incredibly hard. You cannot project even the recent past forward without great rigor and painstaking precision. To utilize historical knowledge properly, you must rely on slivers of specificity or sturdily engineered abstractions (usually constructed with the help of others). And you cannot allow specificity and abstraction to cross-contaminate. Everything you claim must be qualified and controlled.

This does not make for engaging or accessible presidential biographies.

Driving home last week, I endured a Minnesota Public Radio host lapping up the latest historical musings of journalist Simon Winchester. His new book purports to introduce “the men who united the states,” men who are – fortunately for Winchester – “explorers, inventors, eccentrics, and mavericks.” The book is actually about oft-overlooked figures who show up at critical moments in U.S. history. A noble enough subject. But Winchester gives us fables. When asked, regarding Minnesota (paraphrase), “Who settled this area? Who made Minnesota a place? And why did they come here?”, he does not mention the fur trade and the decline of French colonialism in North America and the War of 1812 and federal incentive programs and politico-economic refugees from central Europe. No, Winchester driveled on about people “seeking adventure,” stir-crazy Easterners who wanted to live on the outskirts of civilization. They had an itch, and building a nation on prairie wilderness was the cure! He actually quoted Willa Cather’s famous musings on the subject: the plains are “not a country at all, but the material out of which countries are made.” This passage beautifully represented a Virginian’s first impressions of Nebraska, written from Cather’s 20th century vantage, not the actual motivations of the actual miserable masses who traded New York and cholera for the Great Plains and scarlet fever. But for Winchester, the retrospect (Cather) preceded the event (the pioneers).

The contours of the present are determined by the material past. The fact of the material past is undeniable. We know that it exists, but it is hard to see. History shows us the shadows of the past, sometimes with surprising clarity, but it is easily corrupted. Fabulism is inherent in practically every publicly accessible account of American history. Perhaps this fabulism cannot be eradicated; it can only be pruned and minimized.

Regrettably, the overwhelming majority of those Americans who actually bother to think about history prefer that fabulism flourish. They want to learn from the past. They want a greatest president. They want a worst president. They want to make the past present. But the past belongs to the dead, who are mute and can be understood only by the conditions and corpses they leave behind. We take their words out of context the moment we speak them. We construct fables. We want the Angel of History to fly facing forward, like the bald eagle.

Next time: I violate everything I’ve written here and rank the 20th century presidents!