Why Did Japan Attack Pearl Harbor?

This is a guest post by Dave Hackerson. A previous post in this series can be found here.

The International Date Line is truly a fascinating thing. It’s like a magic wand of time that can both give and take, depending on which way you head. Each time my family and I fly back to the Midwest, the space-time continuum is seemingly suspended. Leave Tokyo at 4:00 pm, touch down in the Midwest at 2:00 pm, and then reach our final destination by 5:00 pm of the same day. Over 15 hours of travel that appears to have been compressed within the span of a single hour. I still can’t wrap my head around it at times.

This date line has a way of slightly altering our perspective of historical events. Most Americans are familiar with the following quote from President Franklin Delano Roosevelt: “December 7th, 1941, a date which will live in infamy.” The date to which he refers is the day on which the Combined Fleet of the Imperial Japanese Navy, under the command of Admiral Isoroku Yamamoto, attacked elements of the US Pacific Fleet at Pearl Harbor. However, this is the narrative from the American side of the International Date Line. The December 24th edition of Shashin Shuhou (Photographic Weekly), a morale-boosting propaganda magazine published in Japan from 1938 until mid-1945, carried the following headline for its graphic two-page artist’s depiction of the attack: “Shattering the dawn: Attack on Pearl Harbor, December 8th”. The Japanese government christened the 8th day of each month as Taisho Hotaibi (literally, “Day to Reverently Accept the Imperial Edict”) to commemorate the great victory over the United States at Pearl Harbor and the Imperial declaration of war on the US and its allies (the day also served to regularly renew the nation’s fervor and commitment to the war effort). Was Pearl Harbor a great victory for the Japanese? The answer depends on the context in which the attack is viewed. As a purely military engagement, it is safe to say that it was a resounding success, but did this single engagement succeed in shaping the course of the coming conflict? This is the question that the Mainichi Shinbun explored in the third installment of its series “Numbers tell a tale—Looking at the Pacific War through data” (the original, in Japanese, is here). True to the narrative on this side of the Pacific, the article was released on December 8th last year. Just as with the other installments in the series, it presents a slew of data that helps put historical events into context.

“Did the attack on Pearl Harbor truly break the US? Japan’s massive gamble with only a quarter of the US’s national strength.” The title of the article does a nice job of setting up the exhaustive economic analysis it conducts in an attempt to answer this question. The very first thing the article does is compare the respective GDPs of the US and Japan in 1939. At this time, Japan’s GDP stood at 201.766 billion dollars, less than a quarter of the US’s GDP of 930.828 billion dollars (note that the figures are not adjusted for inflation). Even the UK had a larger GDP than Japan, at 315.691 billion dollars. Combined, the GDPs of the US and UK gave Japan a disadvantage of more than 6 to 1.

The next set of figures the article introduces relates to industrial capacity. The first thing it examines is iron production, and here the article references the Prussian leader Otto von Bismarck, who claimed that it was iron that made a nation. Taking Bismarck at his word, Japan’s iron production did not bode well for its position as a nation. In 1940, Japan’s national production of crude steel was 6,856,000 tons per year. In contrast, the US was producing nearly nine times that amount at 60,766,000 tons per year. Likewise, Japan lagged far behind the US in terms of electric power output and automobile ownership. Japan’s electric power output in 1940 stood at 3.47 billion kWh, but this figure was dwarfed by the US’s output of 17.99 billion kWh. The gap in automobile ownership is especially telling. The 1920s are often considered the decade in which America “hit the roads” and became enamored with the automobile, and this is backed up by the figures for automobiles owned by Americans in 1940. By that year, there were already 32,453,000 automobiles on roads in the US. Japan didn’t even come close, with only 152,000 automobiles scattered across the country.

In addition to lacking the physical resources and infrastructure to sustain a prolonged war of attrition, the makeup of Japan’s economy posed a number of difficulties. Here the article emphasizes a major difference between Japan and the other first-world nations of that time: Japan was not a “heavily industrialized nation”. This fact was clearly reflected in the country’s exports. In 1940, finished metal products accounted for only 2.8% of the nation’s exports, while raw silk, textiles, and clothing products made up more than a quarter. Likewise, only 30% of the nation’s income was generated by industry, less than the combined income of the agriculture, retail, and transport sectors. In the 1930s, Japan made every effort to expand its heavy industries. The Truman administration dispatched an investigative committee to Japan after the war to study the effects of America’s strategic bombing on Japan and its economy. The study found that in 1930 the industrial makeup of Japan was 38.2% heavy industry and 61.8% light industry. By 1937 Japan had succeeded in reversing these percentages to 57.8% and 42.2%, but the difficulty the nation had in securing the resources it needed for industry restricted its industrial capacity. The study did not mince words in its assessment of the Japanese economy. “The nation of Japan is truly a small country in every manner of speaking, and ultimately a weak nation with an industrial infrastructure dependent on imported materials and resources, utterly defenseless against every type of modern-day attack. The nation’s economy at its core was rooted in a cycle of daily subsistence, in which people only produced what they needed for that day. This left it with no extra capacity whatsoever, leaving it incapable of dealing with potential emergencies that might arise.”

To compensate for its lack of resources, Japan cast its gaze across the waters to Manchuria. Japan had steadily expanded its interests in Manchuria since its victory in the Russo-Japanese War in 1905, and positioned the South Manchuria Railway Company as the primary driver of this massive undertaking. This company was founded in 1906 upon the railway Japan received from Russia after the war, and was a national policy concern half-owned by the state. Japan aimed to make Manchuria the focal point of its own economic bloc that also included Korea, Taiwan, and China. While Manchuria was rich in natural resources, it was highly underdeveloped, and Japan ultimately exported far more machinery and infrastructure-building equipment than the resources it imported. While Japan was able to construct some of this machinery and equipment on its own, it was dependent on material and machine-related imports from the US, UK, the Netherlands, and Australia, the very nations against which it would ultimately go to war. In 1930, Japan exported nearly 96% of its raw silk thread to the US, which would send raw cotton back the other way. Japan would then process this cotton into finished cotton products for export to British India and the UK. Using the profits from these exports, Japan would then import strategic resources from the US, UK, and the Netherlands, such as oil, the bauxite used to create the aluminum in aircraft, and the brass needed for the metal casings of bullets. The problematic nature of these trade relationships was pointed out by the Japanese economist Toichi Nawa of Osaka University of Commerce (present-day Osaka City University).
In his book Research on the Japanese Spinning Industry and Raw Cotton Problem, Nawa stated that “any confrontation with the UK and US would be tragic, and must be avoided.” He further elaborated on Japan’s trade issues, saying that “the more Japan rushes along its efforts to expand heavy industry and its military industrial manufacturing capacity so it can bolster its policies on the continent (Manchuria and China), the more dependent it becomes on the international market, creating a cycle that leads to increased imports of raw materials. Herein lies the gravest of concerns for the Japanese economy.”

Nawa’s words proved all too prophetic. Japan’s aggressive agenda in China following the Marco Polo Bridge Incident in 1937 brought heavy criticism from the global community. As the conflict in China escalated, Western nations retaliated with economic sanctions and restrictions on imports. The most devastating of these was the US’s decision to ban all oil exports to Japan in August of 1941. The US was the world’s largest producer of oil in 1940, accounting for over 60% of the world’s supply. The upper brass of the Imperial Japanese Navy had predicted that they had enough oil stockpiled to wage war for at least two and a half years, but that if the UK and US shut off all oil exports, they would have no choice but to move into Dutch territory and seize the oil fields of the Dutch East Indies within four to five months in order to augment their supply. The attack on Pearl Harbor occurred exactly four months later.

Did Japan truly have the capacity as a nation to wage a modern war against a nation such as the United States? As tensions rose in US-Japan relations, Japanese government and military officials took a hard look at the data available in an attempt to answer this question.

A joint military and civilian economic study group organized around army paymaster Lt. Colonel Jiro Akimaru was set up in February 1941 to undertake this task. Known as the “Akimaru Agency”, this group was split into four sections to study the total war capacity of Japan, the UK-US, Germany, and the Soviet Union. The report they compiled by the end of September 1941 made the following conclusions:

1) The conflict between Japan’s military mobilization and the needs of its labor force has become fully evident. Japan has also reached its peak production capacity, and is unable to expand it any further.

2) Germany’s war capacity is now at a critical point.

3) Not a single flaw exists within the US’s war economy.

Even if Japan sacrificed the living standards of its populace to boost its war capacity, it still would not have the financial resources to compete with the US. Hiromi Arisawa, a member of the UK-US section who later served as president of Hosei University, made the following remarks when reflecting back on the report the Akimaru Agency prepared:

“Japan cut national consumption by 50%. In contrast, America only reduced its national consumption by 15 to 20%. Excluding the amount of supplies they shipped to other Allied nations at that time, the savings from this reduced consumption provided them with 35 billion dollars for real war expenditures. That was 7.5 times greater than what Japan was capable of achieving with its cuts.”

Lt. Colonel Akimaru alluded to this fact when he presented the report at an internal staff conference for the Army. Gen Sugiyama, Chief of Staff of the Supreme Command, acknowledged that the report was “nearly flawless” in its analysis. After praising Akimaru for the quality of the report, he then issued the following order: “The conclusion of your report goes against national policy. I want you to burn every copy of it immediately.”

Lt. Colonel Hideo Iwakuro, founder of the Nakano School and a military intelligence expert, was dispatched to the Japanese embassy in the US and took part in the planning of unofficial negotiations between the two countries. He returned to Japan in August of 1941 and met with influential figures in the political and business worlds, trying to persuade them of the futility of war with the US. At the Imperial General Headquarters Government Liaison Conference, Iwakuro presented the following data, based on his own personal research, to demonstrate the gap between the US and Japan in terms of national strength.

Iwakuro’s conclusion was direct and to the point. “The US has a 10-1 advantage in terms of total war capacity. All the Yamato-damashii (Japanese fighting spirit) we throw at them will not change anything. Japan has no prospects of victory.” The very next day, War Minister Hideki Tojo (who later became Prime Minister) ordered Iwakuro’s transfer to a unit stationed in Cambodia. Iwakuro made the following remarks to the people who came to see him off at Tokyo Station: “If I should survive this ordeal and ever make it back to Tokyo, the Tokyo Station we see here will most assuredly lie in ruins.” Those words proved true in the spring of 1945.

 

Admiral Yamamoto salutes Japanese pilots.

 

So did the attack on Pearl Harbor truly break the US? The quote attributed to Admiral Yamamoto at the end of the movie Tora! Tora! Tora! puts it quite succinctly: “All we have done is to awaken a sleeping giant and fill him with a terrible resolve.” Though there is debate about whether he actually uttered those words, Yamamoto was no stranger to the US, having studied at Harvard and spent time as a naval attaché, and he knew full well the awesome industrial might and material resources the nation possessed. Japan played a great hand with its attack on Pearl Harbor, but as Yamamoto knew, the deck was already stacked against it. The only thing that remained to be seen was how long Japan could make its kitty last.

Patricia Arquette’s “badass” Oscar Speech or: Intersectionality 101 (Again)

By Amanda Grigg

Patricia Arquette took on the pay gap and equal rights for women in her best supporting actress acceptance speech last night and the internet went wild.

“To every woman who gave birth, to every taxpayer and citizen of this nation, we have fought for everybody else’s equal rights. It’s our time to have wage equality once and for all and equal rights for women in the United States of America!”

Twitter was all abuzz (atwitter?). Vulture called it a “Badass, feminist” speech and The Daily Beast praised it similarly as a “Badass” call for “equality for women.” It was also responsible for a Meryl Streep/Jennifer Lopez reaction that launched a thousand gifs. Some people (myself included) were a bit put off by a white woman saying “we fought for everybody else/it’s our time” but overall the speech was a hit. The Washington Post was particularly impressed by Arquette’s emphasis on mothers, for whom the wage gap is especially prominent (more on this later). Unfortunately, when Arquette elaborated on her thoughts in the press room, things went downhill. She went on to say “It’s time for all the women in America, and all the men that love women and all the gay people and all the people of color that we’ve all fought for to fight for us now.” Bitch Magazine lamented that Arquette went on to undermine her earlier statements, and RH Reality Check called the press room comments “a spectacular intersectionality fail.” She has since clarified via Twitter, noting that women of color are most harmed by the pay gap and that she advocates for the equal rights of all women and LGBT people. Of course, she had already ignited yet another in a series of debates about the failures of mainstream white/celebrity feminism. Which is as good an excuse as any for a stroll through feminist history and intersectionality theory.

When we talk about intersectionality we’re talking about how oppression works and specifically, people whose identities place them at the intersection of multiple forms of oppression. So, for example, a white woman might be discriminated against because she’s a woman, and a Black man might be discriminated against because he’s Black. A Black woman, on the other hand, will experience discrimination as a result of her race AND gender. In her work coining the term intersectionality Kimberlé Crenshaw uses the imagery of an actual traffic intersection. As she explains, some accidents (oppression) might be the result of cars coming from one direction (i.e. discrimination against white women based on sexism) but some might result from cars coming from multiple directions and colliding at the intersection (i.e. discrimination against Black women as a result of their race and gender).

Equally important, the oppression that Black women experience does not just look like a combination of the sexism faced by white women and the racism faced by Black men. Here we can think back to campaigns for reproductive rights in the 1960s and 70s. White women were largely fighting for their right to choose not to have children, in the form of access to safe birth control methods and abortion. At the same time, women of color were experiencing forced and coerced sterilization at alarming rates, though this was largely ignored by mainstream feminist groups fighting for reproductive rights. Women of color were thus calling for greater attention to their right to choose to have children, including freedom from forced sterilization and the material conditions necessary to reasonably choose motherhood (including calls for access to child care).[1] So while both white women and women of color experienced oppression targeting their ability to reproduce, the form that oppression took looked very different depending on a woman’s race.

As evidence of the extent to which the experiences of white women (and demands of white feminists) obscure those of women of color, consider this. You’re probably familiar with Roe v. Wade, the Supreme Court case that legalized abortion. You might even remember learning about it in high school history. What about the establishment of federal regulations for sterilization? They came just a few years after Roe v. Wade, represented an equally important victory for reproductive justice, and resulted from an equally impressive grassroots campaign.[2] But you’re far less likely to have heard about them, let alone seen them in a high school history book.

And now, a history lesson (bear with me).

Consider the ideal of the female homemaker popularized around the time of the industrial revolution. Scholars might refer to this set of ideals as the “cult of domesticity” or “true womanhood” or as part of the ideology of “public and private spheres,” but put simply, women were seen as too pure and delicate for the harsh working world, and as naturally suited to creating a warm, welcoming home to which working men could retreat. Though women’s work was understood to be necessary, it was not valued socially or economically to the extent that remunerative work in the public sphere was valued. Today women are still far more likely than men to remain at home with children, and to perform a larger portion of housework and child care duties even if both partners are employed [3].

So of course feminists have critiqued this model and worked to both open up opportunities for women outside of the home and to gain recognition for the immense value of work historically performed by women in the home.

So far so good? Well, no. Because this ideal applied (and to the extent that it still exists, applies) to a very select group of upper- and middle-class white women. Black women have never been seen as too frail for work and since slavery have been forced, expected, or relegated to performing manual labor. Black women haven’t been expected to act as full-time homemakers, nor have economic conditions – including formal and informal barriers and inequalities that made earning a stable family wage nearly impossible for Black men – historically permitted it. Further, where white women might be pressured to devote more time to mothering, the mothering of Black women has consistently been devalued in the United States.

The reproductive labor of white women was so valued that in 1935 the federal government created a federal assistance program to allow single mothers to provide for their children while remaining in the home [4]. The program, called Aid to Families with Dependent Children, included a number of measures that had the effect of excluding or discouraging the participation of poor women of color. The wide discretion of caseworkers to grant, deny, and revoke benefits; the race-based standards of “suitability” for aid, including living arrangements and cooking styles; the frequent absence of caseworkers who spoke Spanish; and the regular denial of Black women because they were deemed to be employable and therefore not deserving resulted in the systematic mistreatment and exclusion of Black and Latina women.[5]

In the 1960s, civil rights and welfare rights organizers were successful in their efforts to end features of AFDC policy that denied assistance to poor women of color.[6] With the end of race-based access to benefits, the generosity of benefits began to be determined by race, resulting in uneven levels of support for white and Black mothers.[7] Quantitative research on AFDC benefits following the expansion of access has found that states with higher proportions of Black single mothers systematically provided less generous benefits than states with higher proportions of white single mothers.[8] As the racial makeup of the AFDC program changed, critics began arguing that the program encouraged unwed motherhood, and that it did so among the least productive members of society.[9] By the 1990s, AFDC was much-maligned, and notably associated with Black single mothers who were often assumed to be taking advantage of welfare. In response to these critiques, the 1996 welfare reform replaced AFDC with Temporary Assistance for Needy Families (TANF). Unlike AFDC, the TANF program required single mothers receiving aid to work outside of the home. Critics of TANF have suggested that it clearly demonstrates the devaluation of Black women’s reproductive labor. In her criticism of TANF’s work requirements, Gwendolyn Mink argues that social approval and value of reproductive labor “does not exist for the care-giving work of poor single mothers, in part because they are poor and single, and in part because the poor single mother of popular imagination is Black.”[10] Similarly, Dorothy Roberts has argued that “Forcing low-skilled mothers into the workforce regardless of the type or conditions of employment available to them assumes that any job is more beneficial to their families than the care they provide at home.”[11]

In light of all of this, my initial description of sexist work/family ideals appears woefully incomplete. More importantly, it generalizes about the experiences of “women” in a way that reinforces the notion that white women set the standard for womanhood even as it rejects that standard. In so doing, it makes it far less likely that we will work to combat all forms of oppression as part of our fight against our own specific experience of oppression. White feminists might, for example, fight to get women out of the home, but do so without challenging the division of labor in the home or the structure of the modern workplace, thus reinforcing a system in which domestic work is still undervalued, but one in which poor women/women of color/migrant women perform the domestic work abdicated by privileged women for little pay, with few benefits, and in positions of immense vulnerability. Just as an example.

The same thing happens when white feminists speak about the problems facing “women” when the content of their argument is specific to middle- and upper-income white women. Or when someone speaks about the problems facing women in opposition to or as separate from the problems facing the poor, women of color, the disabled, immigrants, or any other group to which a woman could belong (enter Patricia Arquette). This doesn’t mean that women can’t work together to address a problem like the wage gap – just that we can’t assume that all women experience that problem in the same way, or that a solution that combats the problem for women positioned in one way will address it for all women and won’t actually make it worse for some women. It also doesn’t mean that women, and men, and LGBT activists, and civil rights advocates can’t work together. In fact, an intersectional understanding of oppressions as fundamentally linked would suggest that it’s vital that we do so. But it does mean that instead of telling groups who also continue to experience and combat serious discrimination on multiple fronts (and which include women) that it’s “our time” and presuming that this isn’t “their issue,” we should ask: how does this issue affect you? What do you think needs to be done? How can we work together on this? And then – this part is key – we listen.

 

 

[1] See Jennifer Nelson, Women of Color and the Reproductive Rights Movement

[2] http://isreview.org/issue/91/black-feminism-and-intersectionality

[3] See: google.

[4] Gwendolyn Mink goes so far as to say “Economic provision for mothers’ care of children was once the primary purpose of welfare.” (Mink, Welfare’s End, 105)

[5] Premilla Nadasen, Welfare Warriors; Mink, Welfare’s End; Mimi Abramovitz, Regulating the Lives of Women

[6] Jill Quadagno, The Color of Welfare 

[7] Stephanie Moller, 2002, “Gender and Race in the U.S. Welfare State”

[8] Moller 2002

[9] See: William Shockley; The Moynihan Report

[10] Mink, Welfare’s End, p. 120

[11] Dorothy Roberts, Boston Review http://new.bostonreview.net/BR29.2/roberts.html

Why am I ignoring Nigeria?

By Seth Studer

I take a little exception to the smarminess of certain media responses to the Charlie Hebdo murders. Last week, they inform us, we witnessed two horrific massacres: the murder of 12 satirists in Paris and the murder of roughly 2,000 civilians in Baga (that’s in Borno, Nigeria). But, they continue, judging from CNN, Fox, and your Facebook feed, only one of these terrible crimes got any coverage. To ask the question “which one: the 12 Europeans or the 2,000 Africans?” is to answer it. While the loss of 12 innocent lives and an implied assault on Free Speech (which doesn’t really exist per se in France) rallies millions across the Great White West, virtually no one is speaking for what Teju Cole calls “unmournable bodies” (an eloquent phrase, although the critical theorist’s habit of saying body when you mean person undercuts his essay’s thesis). Cole’s essay in the New Yorker (linked above) is intelligent and passionately argued, and he handles his argument’s underlying ethos – the aforementioned smarminess – with more grace than others (one such article incorrectly states that Nigeria is south of the equator, a reminder that the many truths revealed by postcolonial theory – e.g., global North vs. global South – do not always square with geographical reality). But in general, I felt scolded for paying more attention to France than Nigeria.

And I probably deserve a scolding. Did mainstream news outlets focus on France over Nigeria as the consequence of a bias toward white Europeans? Absolutely! Was the attack on Charlie Hebdo more frightening and noteworthy to Western audiences than the massacre in Baga because the former represents an attack on the imagined “center” of Western civilization rather than its “periphery”? You bet!

So should 2,000 murder victims be more “newsworthy” than 12 murder victims? I think it depends on the circumstances. 

Anyone who hasn’t been following Boko Haram over the past many months is an irresponsible consumer of world news. The mass violence last week represents the terrifying apex of an ongoing story. We spent much of 2014 preoccupied with the horrors inflicted upon the Nigerian people by this radical group (even Michelle Obama got involved, which got American conservative media involved, etc., etc.). The Charlie Hebdo massacre, meanwhile, fell out of a clear blue sky. Both discrimination against Muslims and Muslim unrest in France are ongoing, but nothing concrete or obvious precipitated this attack. These murders arrived on our screens demanding a context. Hence the intense coverage.

And for me, intense coverage of the Charlie Hebdo massacre is essential not merely because it reinforces Western commitments to free speech (commitments that tend to get waylaid when they’re needed most). Coverage is essential because France is an important European nation in the grips of a major rightward political and cultural shift, one that could potentially turn more strident, more xenophobic, and more violent. After a half century on the fringes (and apparent defeat in the face of European unification), Europe’s right-wing parties (as opposed to its right-of-center parties) are, ahem, on the march. In the United States, extreme right-wing rhetoric has benefited from decades in the mainstream: a speaker’s racism or xenophobia can be carefully coded and embedded in speeches about tax policy. In Europe, the far right has been far wilder and wilier. They’ve retained their ugliness and wear it explicitly on the surface. (Whenever one of my liberal friends unfavorably compares America’s conservative politics with Europe’s socialist policies, I remind them, “Yes, you like their left wing, but you don’t want their right wing.”) Meanwhile, since the 2007/08 global banking crisis, nationalism in Europe – both right-wing and left-wing – has resurged to levels not seen in decades. Because of their knotted political and economic ties to Germany (or Russia), the peoples of Europe are seeking social and cultural distinction. Secession movements have gained renewed traction in the geographical and political expanse between Scotland and Crimea. Consequently, Germans and Russians are also asserting their national character in ways that, twenty years ago, would have seemed taboo.

This, for me, is the context of the Charlie Hebdo attack, far removed from the bloodshed in Nigeria (admittedly, all things connect in our post-post-colonial world, as African expats like Cole convincingly demonstrate). Note that the above paragraph doesn’t include the word “Islam.” I don’t think you need to dwell much on radical Islam to understand the socio-cultural dynamic that drives millions of French residents into the streets. From a French perspective, however, immigration from the Muslim world underscores every aspect of the current national identity crisis. Thus, when an event like the attack on Charlie Hebdo occurs, you get 3.7 million people in the streets and attacks on Muslims.

This, to me, is a very big story indeed.

Two thousand people died in Nigeria last week, it’s true, but 3.7 million people marched throughout France yesterday – roughly one million in Paris alone. What do those one million want? What do they represent? Many of them are doubtless sympathetic with France’s Muslim minorities. Few among them are likely to be extreme French nationalists (though more of them are sympathetic with French nationalism than Western liberals would like to imagine). Whatever their motives, this represents a good moment to take France’s cultural temperature. The context demands it. Your first response to Charlie Hebdo should be an unequivocal condemnation of the murders and support for free speech. But your second response, given the atmosphere in Europe, should be concern for liberalism in France. Because, contrary to what the news coverage is telling you, continental Europe is not historically an easy or natural home to liberal values. And because a march can be a mob by another name.

How Do We Know the US’ Cuba Policy Failed?

By Kindred Winecoff

Dan Drezner has a good post on the US-Cuba détente and how it is consistent with Obama’s foreign policy pattern of seeking to alter undesirable status quo situations. I agree with all of it but the ending:

…it’s hard to deny that America’s Cuba policy had failed.

It’s worth asking what objective motivated America’s Cuba policy before concluding that it failed. Several possibilities:

1. Limiting the expansion of global communism into the Western Hemisphere (c. 1960-1990).

It’s easy to forget that this actually was a thing once upon a time. Castro’s early Cuban government was not only brutal on the island but also actively sought to export revolution elsewhere, and provided material support to rebels pursuing that end. Castro encouraged Khrushchev to deploy nuclear weapons during the Missile Crisis and, at least for a time, sought those weapons for himself. The embargo did limit Castro’s material influence during the Cold War, and thereby cut off one of the main potential routes of activity for the USSR in the West. It meant that Castro would no longer be able to credibly promise to assist those seeking to overthrow US-friendly governments. And, among other things, this ensured that on the occasions when the Cold War heated up, it would not be near US territory.

2. Limiting the influence of left populists in the Western Hemisphere (c. 1990-2010).

The post-Cold War era was greeted triumphantly in many parts of the West, but not so much in Latin America. The devastating effects of the debt crises of the 1980s and 1990s, along with IMF-mandated structural reforms, reinforced anti-American sentiment in the region. There remained a pervasive idea that Latin America was stuck in a dependent relationship with the US that would forever forestall development. Faced with this and rapid development elsewhere in the world, new leaders like Chavez, Morales, and Correa looked to the Cuban regime as a model of resistance and pushed for solidarity in opposition to the US-led international order. Discrediting this idea — using both carrots and sticks — has been a key objective of the US in the years since, and as regional alternatives to the US stagnate or collapse that goal looks closer to being achieved than it possibly ever has.

3. Winning elections in Florida (c. 1990-present).

Who says the embargo was primarily about foreign policy objectives in the recent past? Successive presidential elections more or less came down to several thousand votes in Florida (or were expected to do so), and until quite recently the Cuban expat community has vociferously opposed normalization with Castro’s regime. There’s a pretty simple electoral math here: keep the anti-Castro Cuban-Americans happy, or you could lose to the person who does.

4. The end of the Castro regime.

Was this a true foreign policy goal of the US after the Kennedy Administration? Maybe they would have liked to see it happen, but Castro was very much contained and the US foreign policy apparatus has traditionally been comfortable containing regimes it doesn’t like. There doesn’t seem to be much evidence that the US was pursuing regime change per se at any point since the 1960s, and it certainly isn’t doing so today. Regime change is risky, and the US has had no compunction about isolating, but otherwise tolerating, distasteful governments.

So did the US’ Cuba policy fail? The answer depends on what is meant by the question, but it seems to have achieved much of what it wanted to achieve at very little cost. I’d call that a limited win or, at the very worst, a slightly aggravating stalemate. Given that it had achieved limited success, and that the course of history rendered other objectives moot, the Obama administration was quite right to change the policy. But that does not constitute an admission of failure.

 

Would You Rather Be Rich in the Past or ‘Comfortable’ Today?

By Kindred Winecoff

Scott Sumner:

In a recent post I suggested that one could argue that the entire increase in per capita income over the past 50 years was pure inflation (and hence that real GDP per capita didn’t rise at all.) But also that one could equally well argue that there has been no inflation over the past 50 years. The official government figures show real GDP/person rising slightly more than 150% since 1964, whereas the PCE deflator is up about 6-fold. …

Here’s one thought experiment. Get a department store catalog from today, and compare it to a catalog from 1964. (I recently saw Don Boudreaux do something similar at a conference.) Almost any millennial would rather shop out of the modern catalog, even with the same nominal amount of money to spend. Of course that’s just goods; there is also services, which have risen much faster in price. OK, so ask a millennial whether they’d rather live today on $100,000/year, or back in 1964 with the same nominal income. Recall the rotary phones and bulky cameras. The cars that rusted out frequently. Cars that you couldn’t count on to start on a cold morning. I recall getting cavities filled in 1964, without Novocaine. Not fun. No internet. Crappy TVs, where you have to constantly move the rabbit ears on top to get a decent picture. Lame black and white sitcoms, with 3 channels to choose from. Shorter life expectancy, even for the affluent. No Thai restaurants, sushi places or Starbucks. It’s steak and potatoes. Now against all that is the fact that someone making $100,000/year in 1964 was pretty rich, so your social standing was much higher than that income today. So it’s a close call, maybe living standards have risen for people making $100,000/year, maybe not. Zero inflation in the past 50 years may not be right, but it’s a reasonable estimate for a millennial, grounded in utility theory. In which period does $100,000 buy more happiness? We don’t know.

I think if we really don’t know the answer to this question, it’s only because happiness is subjective. To me it’s obvious that a $100,000/year salary is worth more today than it used to be. For one thing, in 1964 tax rates in basically every Western economy were absurdly high, so that $100,000 pre-tax would really amount to somewhere between $10,000 and $30,000. George Harrison wasn’t exaggerating; how would you like to live in a country where your best artists and creators were forced into (or simply chose) tax exile?

But let’s leave that aside for now. In 1964 a $100,000 salary would make you an elite, but your real income would actually be much smaller than that because of all of the 2014 goods you could not purchase at any price. Sumner runs many of them down, but the point is that $100,000 is still enough to live quite well in this country — even in the expensive cities — but the range of choice has exploded, and many of the modern choices now come at very low cost.

Let’s not forget that politics was quite different in 1964 as well: segregation persisted, the Cold War was raging, and even in the U.S. the “elite” were defined as much by their pedigree as income. We weren’t far removed from McCarthy, and were in the midst of a succession of assassinations of American political leaders and overt revolutionary threats in many Western societies. No birth control, no abortion, few rights for women and homosexuals in general. Being an elite in that world would likely feel very uncomfortable, and of course this blog (and essentially all media I consume) wouldn’t exist. So for me 2014 is the obvious choice.

Tyler Cowen has a more interesting question:

But here’s the catch: would you rather have net nominal 20k today or in 1964? I would opt for 1964, where you would be quite prosperous and could track the career of Miles Davis and hear the Horowitz comeback concert at Carnegie Hall. (To push along the scale a bit, $5 nominal in 1964 is clearly worth much more than $5 today nominal. Back then you might eat the world’s best piece of fish for that much.)

I’m still not sure. $20k/year back then wouldn’t be enough to make you very well off, and the marginal cost of culture consumption today has sunk almost to zero. Was Miles Davis really so much better than anyone working today? For everyone in the world who does not live in NYC, is it better to be able to watch his concerts on YouTube now, and on demand, than not to have seen them at all? Lenny Bruce was still active in 1964 but almost no one ever saw him (for both technological and political reasons). I might still take the $20k today, and I’ve lived on less than that for my entire adult life until last year, so this is an informed choice. But I agree that it’s a much more difficult decision.

It is an interesting question, mostly because it reveals what people value most. It’s a mutation of the “veil of ignorance”. So what would you choose?

Aunt Flo Meets Uncle Sam: Menstruating While Incarcerated

By Amanda Grigg

The ACLU of Michigan filed a federal lawsuit today on behalf of eight female inmates from the Muskegon County Jail who assert that “inhuman and degrading policies at the filthy, overcrowded lockup violate their constitutional rights.” Among the (many) degrading policies is the jail’s refusal to provide adequate feminine hygiene products to inmates.

Unfortunately, this is a common problem facing female inmates. According to Maya Schenwar, who has worked regularly with incarcerated women, one complaint recurs among female prisoners: “There are never enough feminine hygiene products to go around.” Many facilities don’t provide feminine hygiene products at all, requiring women to buy pads or tampons from the prison commissary. In these facilities women can wait weeks for their commissary orders to come in. Others have no external source of funds and are forced to go without or to use makeshift hygiene products made of toilet paper.

This might seem like a minor problem, but I promise it’s not. As Schenwar explains, “The hygiene-product shortage amounts to far more than an annoying inconvenience. Women described to me the discomfort and smell, especially in the summer, of living in close quarters with other women who are often menstruating simultaneously.”

So let’s talk periods. According to the American Congress of Obstetricians and Gynecologists the average woman’s period lasts from three to five days, though menstruating for as few as two days and up to seven days is also considered normal. The amount of menstrual fluid (of which about 50% is actually blood) also varies, from 1-6 tablespoons over the course of a single menstrual cycle. [1]

A normal period might require a woman to change her pad or tampon as often as every hour or two during heavy bleeding. The Department of Health and Human Services’ Office on Women’s Health advises women to change their pads before they are soaked with blood and to always use the lowest absorbency tampon necessary (using different kinds of tampons on light and heavy days), changing every 4-8 hours to avoid developing the bacteria associated with potentially deadly toxic shock syndrome.

Bleeding that lasts more than 7 days or is very heavy, requiring pad/tampon changes more often than every two hours or containing clots larger than a quarter (yes, clots smaller than that are normal, stay with me guys), is a recognized medical condition known as “menorrhagia.” For some women menorrhagia is accompanied by debilitating cramps and/or fatigue resulting from iron deficiency. Though it’s a diagnosable medical condition, heavy bleeding is common, affecting more than ten million women (one in five) in the United States each year. [2]

According to several of the ACLU’s plaintiffs, the Muskegon County Jail does not provide adequate hygiene products; women are often left to “bleed through their clothes” and are not given new clothes until the next laundry day. Women reported being forced to wait hours “or even days” to receive requested feminine hygiene products.

Studies suggest that incarcerated women are more likely than their peers to have irregular periods.[3] Female prisoners in England have described a range of menstrual symptoms including “increased premenstrual tension (PMT), excessive menstruation, painful menstruation and menstrual cessation.”[4] So in addition to women with “normal” periods, forced to bleed for 2-7 days without adequate protection and sometimes without any at all, it’s a safe bet that there are women among the prison population whose heavy bleeding leaves them even worse off when provided with what would, under normal conditions, already be considered inadequate supplies. Others might experience unexpected bleeding when their periods become erratic, making it difficult to plan ahead and request feminine hygiene products several days in advance. In any case, wearing the same pad for hours or days (a likely result of insufficient supplies), on top of being extremely uncomfortable and unsanitary, increases the likelihood of contracting a bacterial or fungal infection.

Those with external funds can turn to the jail commissary, which is run by a private company called Canteen Services and features wildly overpriced feminine hygiene products. According to the Muskegon County Jail Commissary Menu, a package of 8 tampons costs $4.23. Let’s assume that these aren’t the highest-end tampons on the market. For ten cents more you could buy five times as many off-brand tampons at Walmart. If we estimate 8 hours per tampon, as recommended by HHS, that’s 13 days (or 2-3 menstrual cycles’) worth of protection from the store brand vs. just shy of 3 days’ worth for roughly the same price from the commissary. A package of 10 pads runs $4.55 at the Muskegon County Jail Commissary. For 80 cents less, you could purchase a 28-count box of overnight maxi pads with wings or a 20-count box of overnight extra-heavy-flow maxi pads. Can you say prison industrial complex?

An inmate at a facility in Washington described menstruating in prison as “an experience that either intentionally works to degrade inmates, or degrades us as a result of cost-saving measures: either way, the results are the same. Prison makes us hate part of our selves; it turns us against our own bodies.”

Sadly, degradation in the Muskegon County Jail isn’t limited to the lack of feminine hygiene products. The accounts of the plaintiffs in the ACLU’s lawsuit against the jail offer harrowing examples of this kind of gendered (and often racialized) degradation and dehumanization.

Here’s Londora Kitchens’ account of her time in Muskegon:

I requested toilet tissue and sanitary napkins from jail officials on several occasions but my requests were ignored. For example on July 13, 2014, I was menstruating and was out of sanitary napkins. During this period, Officer Grieves told me that I was “shit out of luck,” and I better not “bleed on the floor.”

I am African-American. Ivan Morris, a guard at the jail, refers to African-American inmates as “your kind.” I have heard him say, “you’re in a cage like animals in a zoo.”

I understand that I made a mistake in breaking the law. However, nobody deserves to be forced to live like an animal and to be treated like one. We are women deserving of basic respect, sanitary conditions, bodily privacy, and simply to be treated like the women we are. Most inmates here have already been through so much. Being treated so inhumanely makes rehabilitation more difficult.

Michelle Semelbauer was incarcerated for being unable to pay a fine (a modern reincarnation of debtors’ prisons that the Supreme Court has declared unconstitutional [5]) and was consequently unable to purchase underwear – which the jail does not provide to indigent inmates. She was forced to remove all of her clothing (a one piece bodysuit) every time she used the bathrooms, leaving her naked in full view of male guards.

Paulette Bosch (among others) dealt with a MRSA (antibiotic-resistant staph) infection during her time at the Muskegon County Jail:

During the entire time I was in the holding tank, I was not allowed to shower. During my stay in the holding tank, I noticed that my Cesarean section wound had become infected. The wound remained infected for months. I was told by jail medical staff the infection was Methicillin-resistant Staphylococcus aureus (MRSA).

Jail staff forced me to clean my infected wound myself in my filthy cell. They ignored my doctor’s instructions and didn’t even provide me enough medical supplies to regularly clean and treat my wound. I would plead for more supplies, but the staff just kept telling me they would not help me.

All of this is only the tip of the iceberg. Current and former inmates report that the jail is chronically overcrowded and infested with insects and vermin, that they have had to wait hours to receive toilet paper, that guards watch women undress and shower, that showers and sinks are regularly broken and covered in mold, that holding cells are covered in urine and vomit, and that guards make racist comments to black inmates, including calling an African-American inmate a “gorilla” and a “monkey.” Others burned themselves on faucets that provided only scalding water, were left in holding cells in clothes covered in vomit, were denied shoes, and were forced to shower in pools of (likely filthy) standing water.

Scholars and advocates for prisoners argue that the poor conditions in jails and prisons across the country are often the result of economic incentives. In the case of for-profit prisons, the less spent on prisoners, the more the company profits. Among non-profit facilities there is little incentive to direct limited taxpayer funds to prisoners, who tend to come from some of the most marginalized groups in society. Incarcerated individuals are more likely to belong to racial minorities, to be poor, to struggle with addiction, and to suffer from mental illness. In addition, over 4 million Americans are currently disenfranchised as a result of voting laws targeting former felons. And even state-run facilities employ private companies to provide meals, commissary goods, and to conduct financial transactions (generally families sending money to prisoners, for which there is often a rather steep processing fee).

The number of individuals imprisoned in the United States quadrupled between 1980 and 2008, with the number of women incarcerated increasing at 1.5 times the rate of men. Today the U.S. is home to 5% of the world’s population and 25% of its prisoners. Black women are incarcerated at nearly three times the rate of white women. We imprison more people than any other country in the world, an estimated half million more than China. And we’re making a killing.

 

 

Full disclosure: My fiancé is one of the lawyers working on this case. All of the information about the Muskegon County Jail included here was (and can still be) found in the public record, specifically in the ACLU’s press release and plaintiff declarations. For those interested, there are stories from each of the plaintiffs, their biographies, and more information on the lawsuit here.

[1] Carlson, Karen J., Stephanie A. Eisenstat, and Terra Ziporyn (2004). The New Harvard Guide to Women’s Health. Cambridge, Mass.: Harvard University Press. p. 381. According to Kotex, losing up to a cup of menstrual fluid is normal, but they seem likely to have a pro-excessive-bleeding agenda.

[2] http://www.cdc.gov/ncbddd/blooddisorders/women/menorrhagia.html

[3] Allsworth, Jenifer E., et al. “The influence of stress on the menstrual cycle among newly incarcerated women.” Women’s Health Issues 17.4 (2007): 202-209. http://www.sciencedirect.com/science/article/pii/S104938670700028X

[4] Smith, C. “Assessing health needs in women’s prisons.” Prison Service Journal, 1998.

[5] See Williams v. Illinois and Bearden v. Georgia 

Kamikaze Attacks by the Numbers: A Statistical Analysis of Japan’s Wartime Strategy


Note: This is a guest post by Dave Hackerson.

Kamikaze. One of the defining symbols of the vicious struggle between the US and Japan in the Pacific War, this word always conjures up a conflicting mix of emotions inside me. The very word “kamikaze” has become a synonym for “suicide attack” in the English language. The way WW2 was taught in school (in America) pretty much left us with the impression that kamikaze attacks were part of the standard strategy of the Japanese Imperial Army and Navy throughout the entire war. Only recently, however, was I surprised to learn that the Japanese first introduced this strategy on October 25, 1944, during the Second Battle of Leyte Gulf. The Mainichi Shinbun here in Japan put together a wonderful collection to commemorate the 70th anniversary of this strategy. It features data that has been debated and analyzed from a number of angles, and it provides statistical evidence that underscores the utter failure of this strategy. The title of the article is “Did the divine wind really blow? ‘Special strikes’ claim lives of 4000,” and it is the second part of a three-part series called “Numbers tell a tale—Looking at the Pacific War through data”. The first part was posted in mid-August, and the third and final part is due to be put online in December. The original Japanese version of this special can be accessed here. The slides I refer to are numbered “1” to “5” at the very bottom of each page; the current slide is the one highlighted in blue.

In this post, I will provide an overview of the information on this site while occasionally inserting my own analysis and translations of select quotes. I hope it helps to paint a clearer picture of a truly flawed strategy that is still not properly understood on either side of the Pacific.

Slide 1

True to the series name, this article wastes no time in hitting you with some pure, raw data. The first pie graph (11%, 89%) indicates the actual success rate of kamikaze attacks: only 11% were successful, while the remaining 89% ended in failure. This means that merely 1 in 9 planes actually hit their targets. After introducing these figures, the article focuses on the initial execution of the kamikaze strategy during the Second Battle of Leyte Gulf on October 25, 1944. Five planes hit and sank the escort carrier USS St. Lo, while other planes succeeded in damaging five other ships. The estimated success rate in this battle was 27%.

The article then puts this percentage into context by comparing it to the success rate of dive bomb attacks (non-kamikaze) in other battles. Here are the figures:

Pearl Harbor (1941): 58.5%

Battle of Ceylon (1942): 89% (percentage of hits on the British carrier HMS Hermes)

Coral Sea (1942): 53% (percentage of hits on the USS Lexington, which was severely damaged)

Looking at these figures, it’s clear that the kamikaze attacks were not that effective. The Japanese navy was overly optimistic and believed they would be fairly successful, but the US quickly adapted, and by the end of the war the success rate had fallen to 7.9% (Battle of Okinawa). Even the Daihon’ei (Imperial General Headquarters of the Japanese forces) admitted that the attacks had little to no effect.

The next part of the article is titled “War of attrition: ‘Certain death’ strategy that claims both aircraft and pilots”. It discusses why the hit rate of the Japanese air forces dropped so dramatically as the war wore on. Here are the three reasons cited:

  • Decline in the flying abilities of the fighter pilots
  • Deteriorating performance of aircraft and materials
  • Improvements in American countermeasures

After introducing these reasons, the article makes a very important statement. “Kamikaze attacks meant that you lost both the aircraft and pilots. This not only wore down Japan’s fighting strength, but essentially destroyed the nation’s capacity to actually wage war in the future.”

The article then turns its attention back to the first kamikaze attack and the pilot chosen to lead it. Lieutenant Yukio Seki was a graduate of the naval academy and a proven veteran. He died crashing his plane into the USS St. Lo and was later enshrined as a “軍神 (gunshin, or military god)” at Yasukuni Shrine. The article seems to imply that this “honor” is actually an injustice to Seki’s memory in light of what he said before heading into battle: “I’m fully confident that I can drop a bomb on any aircraft during a normal attack. Japan’s screwed if it’s ordering a pilot like me to smash his craft into an enemy vessel.” These words are in stark contrast to the quote and images on the cover photo of Shashin Shuhou (Photographic Weekly, a morale-boosting propaganda magazine published from the mid-1930s until mid-1945) that accompanies this post. The main quote, shown to the left of Lieutenant Seki, states: “A single vessel strikes true in defense of the land of gods. Oh, our Kamikaze (Divine Wind) Special Strike Force. Your fidelity will shine radiantly for the next 10,000 generations.” The quote in the bottom right further contradicts the statement Lieutenant Seki made before he took off: “Lieutenant Seki, commander of the Shikijima Battalion that served as the First Kamikaze Special Strike Force Battalion to be sent out on an all-out bomb strike. Immediately before heading into battle, Lieutenant Seki is said to have rallied his troops with the following cry: ‘Men, we are not members of a bomber squad. We are the bombs. Now up and away with me!’”

Slide 2

The article then shifts its attention to the heavy losses among the ranks of Japanese pilots. The Japanese navy started the war with 7000 well-trained pilots; by 1944, over 3900 of them had died in battle. In the early days of the war the Allies estimated that Japanese pilots held a 6-to-1 advantage over Allied pilots, but by April 1943 the ratio was even at 1-to-1. Japan simply could not replace the pilots it lost at a sufficient pace, so it decided to compensate by “short-tracking” their training. The pie graph here is really telling.

Rank A pilots (over 6 months of flight training): 16.3%

Rank B pilots (4 to 6 months of flight training): 14.4%

Rank C pilots (approx. 3 months of flight training): 25%

Rank D pilots (less than 3 months, or in some cases only flight theory): 44.3%

These figures are a breakdown of the pilots sent to fight in the Battle of Okinawa in 1945. The article then outlines the vicious cycle this created for the Japanese navy:

  1. Losses in air force manpower are compensated by short-tracking training and sending raw pilots straight into the fight.
  2. Raw pilots have a low chance of returning from battle, and most likely fail to influence its course.
  3. Losses only increase, while the ranks of pilots continue to thin.

The authors of the article squarely place the blame on the shoulders of the upper brass in the navy. Personally, I think the Japanese navy would not have sunk to such desperate measures if Admiral Yamamoto hadn’t been shot down and killed in 1943. He would have found a way to prolong the fight and preserve Japan’s precious little resources. One could argue that the U.S.’s decision to shoot down Yamamoto and take him out of the picture eliminated the voice of reason within the Japanese ranks, and actually paved the way for this strategy to be adopted.

The article implies that Japan was insane for throwing away what little resources it had. When the enemy has 10 times your resources, you do everything you can to hold onto what you have. The Japanese brass seemingly defied this logic by not only wasting aircraft, but needlessly wasting human lives. But why would they do that? Japanese writer Kazutoshi Hando, a man who has written extensively about the Showa Period and WW2, provides some valuable insight into how these men thought. “The very concept of logistics was either given little thought or entirely ignored by the Japanese military… After all, in the eyes of the Army’s General Staff Office and the commissioned officers in the navy, troops were ultimately viewed as nothing more than resources that could be gathered for a mere 1 sen 5 ri (the price of a postcard at that time). When they formulated a strategy, they flung the troops out to the front with 6 go (about 900 grams) of rice and a 25-kilogram pack. If you ran out of food, you were told to forage for your own supplies wherever you were. Surrender wasn’t an option (it was actually prohibited under the Japanese military code), so if you found yourself in a losing battle, the only option was gyokusai (a figurative term coined by the Japanese military which loosely translates as “beautiful death”). They didn’t give any thought whatsoever to potential survivors.”

Not only were the majority of the pilots deployed in the latter days of the war vastly inferior, but the aircraft deployed were also no match for those of the Allied forces. In addition to fighters and bombers, reconnaissance and even practice planes were deployed! As the war wore on, Japan faced these problems:

  • Lack of skilled engineers, resulting in low-performance new aircraft as manufacturing and production quality deteriorated.
  • Use of low octane, poor quality fuel.

In spite of all these problems, the Japanese armed forces went ahead with this strategy. The navy asked for the construction of aircraft that would save on materials, be easy to fly in training, and able to conserve fuel. Unfortunately, the end product was of inferior quality compared to the aircraft produced in the early days of the war. Combined with poorly trained pilots, it was simply a disaster waiting to happen.

Slide 3

This slide focuses on the performance capacity of the aircraft. There is lots of info on plane specs, but as you can see, by the end of the war Allied aircraft were simply far superior to Japanese planes in every respect. The kanji 零 in the name of the plane 零式艦上戦闘機 21型 indicates “rei” or “zero” (Type Zero Carrier Fighter 21). Click and hold the mouse cursor on the plane to rotate the view. The specs of the Zero changed very little during the war. The first generation of Zero fighters (1939) carried a Sakae 21 engine, which boasted 950 hp. The type produced after 1943 was fitted with the Sakae 52 engine that delivered 1100 hp, an improvement of only 150 hp, and a top speed of 624 km/h. A seasoned pilot would have had his hands full going up against the likes of the American Hellcats and P-51s, but with the green pilots the Japanese forces sent up, it was clear they no longer cared about fighting for air superiority.

Slide 4

I won’t get into the details here, but this slide reveals how quickly the US adapted to the kamikaze attacks. Surprising as this may sound, these attacks failed to sink any major ships or carriers, because the US used radar effectively to scramble fighters to meet the Japanese attacks. In addition, the US had damage control units on board each ship, so even if a kamikaze pilot broke through, the damage could be contained right away, enabling the ship to stay in the fight. While many ships were damaged, fewer than 50 were actually sunk. The chart here is quite telling: the red bars indicate ships sunk, and the yellow bars indicate ships damaged but not sunk.

Slide 5

This slide takes an indirect jab at people who attempt to beautify the sacrifices made by kamikaze pilots. The vast majority did not want to participate in the attacks. Saburo Sakai, one of Japan’s ace pilots, commented on how the strategy lowered morale. “Morale sank,” he said. “Even if the reasons for fighting mean that you have only a 10 percent chance of coming back, you’ll fight hard for that. The guys upstairs (upper brass) claim morale went up. That’s a flat-out lie.”

There were even instances of NCOs ordering their men not to carry out kamikaze attacks, instructing them instead to conduct “normal attacks”. In an interview linked to this article, the non-fiction writer Masayasu Hosaka speaks about reading the memoirs of someone who witnessed the pilots flying off on their kamikaze attacks. This witness states that the radios of all the aircraft were kept on, so those on the ground could actually hear everything the pilots said, including the statements they uttered right before they met their end. Here are some of the things kamikaze pilots said: “F*ing navy aholes!”, “Oh Mother!”, or the names of their wives or sweethearts. It seems that very few shouted “Banzai Japanese Empire” (“banzai” literally means “10,000 years”).

Returning to the question originally posed in the title of the article, it is clear that the divine wind never blew. There wasn’t even much of a breeze. To add my own two cents, the kamikaze attacks were a great propaganda tool for the US, for they allowed us to portray the enemy as fanatical and beyond reason. This made it easy for us to justify the atomic bombings, especially after the war, because the kamikaze attacks seemingly “proved” that only extreme measures would bring the Japanese to the negotiating table. The propaganda twist on kamikaze tactics was carried over into post-war education in the US, and led many of us (or at least myself when I was a kid) to believe that Japanese soldiers were possessed of an unswerving conviction to fight to the death.

In closing, I once again borrow the words of Kazutoshi Hando. He cuts straight to the chase:

The complete irresponsibility and stupidity of the nation’s military leaders drove the troops to their deaths. The same can be said for the kamikaze special strike force strategy. They took advantage of the unadulterated feelings of the pilots. People claim it’s a form of ‘Japanese aesthetics’, but that’s pure nonsense. The General Staff Office built it up as some grand strategy when in actuality they sat at their desks merely playing with their pencils wondering ‘how many planes can we send out today?’ This lot can never be forgiven.

 

Gordon Tullock, RIP

By Kindred Winecoff

He was not my favorite economist, but there is no question that he had a strong mind that was consistently capable of locating puzzles that had escaped the attention of others. My favorite, perhaps, is his observation that given how much is at stake, it is very surprising that there is so little money in politics. Spending even $1 billion on a presidential campaign is very little compared to the amount of influence a president has over a $15 trillion economy. (The most up-to-date explanation for this is that spending on politics is mostly a consumption good, not rent-seeking.) On another occasion Tullock argued that if we really wanted to improve automobile safety, we should replace all airbags with an eight-inch ice pick that would ram into the driver’s chest in a crash. I know I’d drive more slowly and carefully under such conditions.

The fact that he died on Election Day is appropriate, or perhaps ironic. Tullock was an outspoken opponent of voting for instrumental reasons — voting incurs costs while the probability of impacting the outcome is minuscule, so the act of voting generates negative utility in expectation — and he extended the logic to revolutions. He had many interesting ideas, although whether they amount to a consistent philosophy or politics is debatable.

14 Reasons Susan Sontag Invented Buzzfeed!

By Seth Studer

If you’re looking for a progenitor of our list-infested social media, you could do worse than return to one of the most prominent and self-conscious public intellectuals of the last half century. The Los Angeles Review of Books just published an excellent article by Jeremy Schmidt and Jacquelyn Ardam on Susan Sontag’s private hard drives, the contents of which have recently been analyzed and archived by UCLA. Nude photos have yet to circulate through shadowy digital networks (probably because Sontag herself made them readily available – Google Image, if you like), and most of the hard drives’ content is pretty mundane. But is that going to stop humanists from drawing broad socio-cultural conclusions from it?

Is the Pope Catholic?

Did Susan Sontag shop at Sephora?

Sontag, whose work is too accessible and whose analyses are too wide-ranging for serious theory-heads, has enjoyed a renaissance since her death, not as a critic but as an historical figure. She’s one of the authors now, like Marshall McLuhan or Norman Mailer, a one-time cultural institution become primary text. A period marker. You don’t take them seriously, but you take the fact of them seriously.

Sontag was also notable for her liberal use of lists in her essays.

“The archive,” meanwhile, has been an obsession in the humanities since Foucault arrived on these shores in the eighties, but in the new millennium, this obsession has turned far more empirical, more attuned to materiality, minutia, ephemera, and marginalia. The frequently invoked but still inchoate field of “digital humanities” was founded in part to describe the work of digitizing all this…stuff. Hard drives are making this work all the more interesting, because they arrive in the archive pre-digitized. Schmidt and Ardam write:

All archival labor negotiates the twin responsibilities of preservation and access. The UCLA archivists hope to provide researchers with an opportunity to encounter the old-school, non-digital portion of the Sontag collection in something close to its original order and form, but while processing that collection they remove paper clips (problem: rust) and rubber bands (problems: degradation, stickiness, stains) from Sontag’s stacks of papers, and add triangular plastic clips, manila folders, storage boxes, and metadata. They know that “original order” is something of a fantasy: in archival theory, that phrase generally signifies the state of the collection at the moment of donation, but that state itself is often open to interpretation.

Microsoft Word docs, emails, jpegs, and MP3s add a whole slew of new decisions to this delicate balancing act. The archivist must wrangle these sorts of files into usable formats by addressing problems of outdated hardware and software, proliferating versions of documents, and the ease with which such files change and update on their own. A key tool in the War on Flux sounds a bit like a comic-book villain: Deep Freeze. Through a combination of hardware and software interventions, the Deep Freeze program preserves (at the binary level of 0’s and 1’s) a particular “desired configuration” in order to maintain the authenticity and preservation of data.

Coincidentally, I spent much of this morning delving into my own hard drive, which contains documents from five previous hard drives, stored in folders titled “Old Stuff” which themselves contain more folders from older hard drives, also titled “Old Stuff.” The “stuff” is poorly organized: drafts of dissertation chapters, half-written essays, photos, untold numbers of .jpgs from the Internet that, for reasons usually obscure now, prompted me to click “Save Image As….” Apparently Sontag’s hard drives were much the same. But Deep Freeze managed to edit the chaos down to a single IBM laptop, available for perusal by scholars and Sontag junkies. Schmidt and Ardam reflect on the end product:

Sontag is — serendipitously, it seems — an ideal subject for exploring the new horizon of the born-digital archive, for the tension between preservation and flux that the electronic archive renders visible is anticipated in Sontag’s own writing. Any Sontag lover knows that the author was an inveterate list-maker. Her journals…are filled with lists, her best-known essay, “Notes on ‘Camp’” (1964), takes the form of a list, and now we know that her computer was filled with lists as well: of movies to see, chores to do, books to re-read. In 1967, the young Sontag explains what she calls her “compulsion to make lists” in her diary. She writes that by making lists, “I perceive value, I confer value, I create value, I even create — or guarantee — existence.”

As reviewers are fond of noting, the list emerges from Sontag’s diaries as the author’s signature form. … The result of her “compulsion” not just to inventory but to reduce the world to a collection of scrutable parts, the list, Sontag’s archive makes clear, is always unstable, always ready to be added to or subtracted from. The list is a form of flux.

The lists that populate Sontag’s digital archive range from the short to the wonderfully massive. In one, Sontag — always the connoisseur — lists not her favorite drinks, but the “best” ones. The best dry white wines, the best tequilas. (She includes a note that Patrón is pronounced “with a long o.”) More tantalizing is a folder labeled “Word Hoard,” which contains three long lists of single words with occasional annotations. “Adjectives” is 162 pages, “Nouns” is 54 pages, and “Verbs” is 31 pages. Here, Sontag would seem to be a connoisseur of language. But are these words to use in her writing? Words not to use? Fun words? Bad words? New words? What do “rufous,” “rubbery,” “ineluctable,” “horny,” “hoydenish,” and “zany” have in common, other than that they populate her 162-page list of adjectives? … [T]he Sontag laptop is filled with lists of movies in the form of similar but not identical documents with labels such as “150 Films,” “200 Films,” and “250 Films.” The titles are not quite accurate. “150 Films” contains only 110 entries, while “250 Films” is a list of 209. It appears that Sontag added to, deleted from, rearranged, and saved these lists under different titles over the course of a decade.

“Faced with multiple copies of similar lists,” continue Schmidt and Ardam, “we’re tempted to read meaning into their differences: why does Sontag keep changing the place of Godard’s Passion? How should we read the mitosis of ‘250 Films’ into subcategories (films by nationality, films of ‘moral transformation’)? We know that Sontag was a cinephile; what if anything do these ever-proliferating Word documents tell us about her that we didn’t already know?” The last question hits a nerve for both academic humanists and the culture at large (Sontag’s dual audiences).

Through much of the past 15 years, literary scholarship could feel like stamp collecting. For a while, the field of Victorian literary studies resembled the tinkering, amateurish, bric-a-brac style of Victorian culture itself, a new bit of allegedly consequential ephemera in every issue of every journal. Pre-digitized archives offer a new twist on this material. Schmidt and Ardam: “The born-digital archive asks us to interpret not smudges and cross-outs but many, many copies of almost-the-same-thing.” This type of scholarship provides a strong empirical base for broader claims (the kind Sontag favored), but the base threatens to support only a single, towering column, ornate but structurally superfluous. Even good humanist scholarship – the gold standard in my own field remains Mark McGurl’s 2009 The Program Era – can begin to feel like an Apollonian gasket: it contains elaborate intellectual gyrations but never quite extends beyond its own circle. (This did not happen in Victorian studies, by the way; as usual, they remain at the methodological cutting edge of literary studies, pioneering cross-disciplinary approaches to reading, reviving and revising the best of old theories.) My least favorite sentence in any literary study is the one in which the author disclaims generalizability and discourages attaching any broader significance or application to the study. This is one reason why literary theory courses not only offer no stable definition of “literature” (as the E.O. Wilsons of the world would have us do), they frequently fail to introduce students to the many tentative or working definitions from the long history of literary criticism. (We should at least offer our students a list!)

In short, when faced with the question, “What do we do with all this…stuff?” or “What’s the point of all this?”, literary scholars all too often have little to say. It’s not a lack of consensus; it’s an actual lack of answers. Increasingly, and encouragingly, one hears that a broader application of the empiricist tendency is the next horizon in literary studies. (How such an application will fit into the increasingly narrow scope of the American university is an altogether different and more vexing problem.)

Sontag’s obsession with lists resonates more directly with the culture at large. The Onion’s spin-off site ClickHole is the apotheosis of post-Facebook Internet culture. Its genius is not for parody but for distillation. The authors at ClickHole strip the substance of clickbait – attention-grabbing headlines, taxonomic quizzes, and endless lists – to the bone of its essential logic. This logic is twofold. All effective clickbait relies on the narcissism of the reader to bait the hook and banal summaries of basic truths once the catch is secure. The structure of “8 Ways Your Life Is Like Harry Potter” would differ little from “8 Ways Your Life Isn’t Like Harry Potter.” A list, like a personality quiz, is especially effective as clickbait because it condenses a complex but recognizable reality into an index of accessible particularities. “Sontag’s lists are both summary and sprawl,” write Schmidt and Ardam, and much the same could be said of the lists endlessly churned out by Buzzfeed, which constitute both a structure of knowledge and a style of knowing to which Sontag herself made significant contributions. Her best writing offered the content of scholarly discourse in a structure and style that not only eschewed the conventions of academic prose, but encouraged reading practices in which readers actively organize, index, and codify their experience – or even their identity – vis-à-vis whatever the topic may be. Such is the power of lists. This power precedes Sontag, of course. But she was a master practitioner, aware of the list’s potential in the new century, when reading practices would become increasingly democratic and participatory (and accrue all the pitfalls and dangers of democracy and participation). If you don’t think Buzzfeed is aware of that, you aren’t giving them enough credit.