Did Whites Steal Rock ‘n’ Roll From Blacks?

By Graham Peterson

Jim Morrison ain’t the final word on Rock ’n’ Roll history, but he’s a good start. In the clip below, Jim opens up a can of forgotten, but not rotten, Rock ’n’ Roll history — its white roots.

The view that Rock ’n’ Roll was ripped off from black Rhythm & Blues is, more or less, the predominant view. It is not an uncontested view, as Wikipedia admits. But if you grew up on the left, or around musicians and heads, you probably learned that Rock ’n’ Roll is blood money from yet another Great American Swindle. Jim agrees; of course Rock ’n’ Roll evolved out of The Blues. But it also evolved out of early Country music, out of Bluegrass and Folk — white genres.

It’s an important point, not because of white power, but because the white details of Rock ’n’ Roll history got left on the shelf for a bad theory. Theory is a flashlight that tells you where the goods are. Unfortunately, critical theory has bad batteries and a narrow beam.

Without belaboring Horkheimer et al., the idea in critical theory is that culture fits a metaphor of exploitation, of theft. Culture is just another expression of colonial imperialism. Cultures get invaded and assimilated into a homogenous mass. It follows from this vision that black music got co-opted and assimilated into white music, in order to keep blacks down. That’s cultural appropriation. But, like Jim says, some of the main ingredients in Rock ’n’ Roll were imported from Europe, through whites. Critical theory gives these folks no quarter.

The Europeans who brought bluegrass and folk to the United States were Scotch-Irish immigrants who settled across Appalachia. At the time, Scotland and Ireland were backwaters that had a reputation for the clan, the bar fight, and the broken accent. When they emigrated to get away from British exclusion, they brought instruments. And some fantastic music. You can still hear that traditional Scotch-Irish influence reverberating in modern Bluegrass, Folk, and Country — it’s uncanny. Fiddles. 6/8 time signatures. Twangs and bent notes. Line dancing. Poetry about poverty and misfortune.

Scotch-Irish Americans in Appalachia have always been, and unfortunately still are, largely poor. They didn’t get into singin’ about broke down Ford trucks by exploiting anyone — just like blacks didn’t get into singin’ about the blues by exploiting anyone. So, naturally, because Appalachian whites and blacks shared the same fate — and often the same holler — they mixed cultures. Then came Rock ’n’ Roll. And when kids from nice white suburbs started buying it, a few poor whites and blacks got their American Dream.

No doubt, the social exclusion of the ’50s and ’60s had its routine influence on Rock ’n’ Roll. The critical theoretic swindle story has some merit. Black musicians, who played the same tunes as whites, were not allowed to play the same stages. Black artists got squeezed out of radio rotations by racist DJs. And so on. But Bo Diddley was no slouch. He and a range of other black artists made it big. The racism in Rock ’n’ Roll history is arguably a sideshow to the main stage, where blacks and whites were mixing to everyone’s benefit.

Cultures have always sampled and remixed from each other’s stuff. Take for instance the remixes that came out of Celtic Western Europe in the 2nd century BC. Archeological digs have revealed that the Celts imported art from Greece (that’s a long trip!), and that they eventually made their own Greek-inspired art. Here again the power and exploitation thesis fails.

The Celts were poor. The Greeks were rich. The Celts were a fledgling, diffuse band of tribes. The Greeks were a militarily and culturally superior collection of city-states. Despite their differences in power, it was the poor Celts who adopted the rich Greeks’ art. They traded artifacts and traditions peacefully, and to their mutual betterment.

Cultural mixing is as old as dirt, or rather, as old as trade. It happened across powers when timid Celts met well stocked Greeks in Europe. It happened across races when dirt poor immigrants met dirt poor blacks in Appalachia. And it happened across classes when poor Rock ’n’ Roll musicians played for rich city slickers across America.

We need to think harder about where cultures come from. Cultural appropriation, the swindle story, can be and sometimes is a way that upper class people reproduce their status. But even more often, the borrowing, imitating, trading, and selling of cultures has been a way people make and expand their communities, peacefully. It’s a beautiful thing, and we should, while remembering some sad missteps, celebrate cultural trade as a testament to a liberal society.

Rock ’n’ Roll ain’t a black or white thing. It’s a black and white thing.

Is Math or English Harder to Theory With?

By Graham Peterson

Fabio Rojas and crew got into a discussion on Twitter about whether mathematical theory in social science is more difficult than verbal theory, or as Fabio summed it up:

fixed point theorems

Everyone in the thread agreed that dense verbal theory is much harder to read than mathematical theory.  But I think they’re about the same.  (Andreas Glaeser’s opinion on Foucault is worth mentioning here [insert arms like a symphony]: “you think to yourself, ‘now this is what language can do.’”)

We have a lot of folk assumptions about the difference between “verbal” and “formal” theory in social science, and too much violence between their practitioners, but very little discussion of their actual differences or advantages.  Note quickly: both verbal and mathematical theory are “formal.”  They both aim to generalize formal structures of logic, so I’m ditching the adjective “formal” and will refer to “mathematical” theory henceforth.

Bad verbal theory suffers from the same problems bad mathematical theory does.  If you ever get mad enough at mathematics that you read Why the Professor Can’t Teach, a criticism of mathematical pedagogy and research by Morris Kline, you’ll notice that most of the problems he identifies are exactly analogous for verbal theory.  Kline laments mathematics that generalizes for the sake of generalization, and he laments the presentation of general proofs without intuition and examples.

These are, to my eye, exactly the things that make Foucault et al. extraordinarily difficult to read.  Concepts get generalized for their own sake, until the exercise becomes so meta-theoretic it is only interesting to a handful of specialists, and applicable to nothing.  It might be the case that the material world is merely a realization of the world of ideas, but I really doubt that we’re learning much from “reimagining neoliberal ontologies.”

And where are the examples?  You just know that when you’re reading Bourdieu, there’s some vignette of piano lessons dancing around in his head, while he’s drawing sweeping generalizations about cultural capital.  And he’s probably generalizing from some children’s game where one gains and loses power, while he’s talking about misrecognized exchanges of subconscious power. But without making those examples explicit, the reader cannot extrapolate to generalities in the same way Bourdieu did.

Good theorists present their ideas like recipes or step-by-step instruction manuals, not assertions of propositions and generalities.  That is, good theorists will walk you through exactly those steps they took (usually starting with a rudimentary kernel, case, or example) to arrive at a generality, rather than presenting themselves as if their brains just trade in dancing abstractions.  Both mathematical and verbal theorists, though, are tempted to do the opposite.

We reason inductively from one thing or another until we think we’ve found something general.  Then we turn around and assert the generality of that proposition, and try to prove it deductively.  We (sometimes) eventually present the case or example as if it’s just a convenient afterthought or demonstration, when in fact that kernel drove our logic the entire time!

If we can drop the pomp and pretense, and focus on communicating our thoughts in the way that we actually arrived at them, we will have much clearer, easier-to-read mathematical and verbal theory.

Also note that good mathematical and verbal theory do pretty much the same things.

Creativity in mathematical and verbal theory is metaphorical and analogical, not deductive.  That is, mathematical creativity comes from (say) writing down a telescoping equivalence into a proof to clean it up, or recognizing a dual from a different subfield.  In verbal theory, analogical creativity comes from (say) writing down an epidemiological metaphor in a new context, like crowd dynamics.
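
To make the telescoping example concrete, here is a standard instance (my illustration, not one from the thread).  The whole proof of

$$\sum_{n=1}^{N} \frac{1}{n(n+1)} = 1 - \frac{1}{N+1}$$

is the rewrite $\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}$, after which adjacent terms cancel and the sum collapses.  The deduction that follows is trivial; spotting the rewrite is the creative, analogical act, just like spotting that crowd dynamics fit an epidemiological model.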

A creative thinker transposes the formal structure of an argument to a domain where she intuits the model will help comprehend the situation better than whatever story is currently attached.  Full stop.  There is no difference between doing so with a fixed point theorem, entropy function, language game, or model of mutually constitutive social interactions.

Or consider that Bourbaki symbols and the Greek alphabet are not always the most precise and compact language in which to present an idea.  We have intuitions because they are computationally efficient, and it turns out that in groups, intuitive Bayesians make lots of incredibly good predictions.  It is a very strange logic and practice that justifies turning a discussion of expected utility into a derivation of the expectation operator from primitives.
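
For reference, and as a standard textbook statement rather than anything from the thread, the object such a derivation builds up to is just the expectation operator over a discrete lottery:

$$\mathbb{E}[u(X)] = \sum_i p_i \, u(x_i)$$

where the $x_i$ are outcomes, the $p_i$ their probabilities, and $u$ a utility function.  The complaint is not that the formula is wrong; it’s that re-deriving it from primitives rarely adds anything to a conversation that hedged natural language already handles.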

We have and use grammar in natural language that defines hypotheticals and probabilities all the time, “could, should, would, may, might, ought,” and we have and use grammar in natural language that defines quantities and their relations all the time, “most, more, just as much as, lots.”  For many problems, replacing these terms with mathematical symbols would be cumbersome, obfuscatory, and useless.

Neither mathematical nor verbal theory can be reduced to some historical turf war between continental social theory and economics, or some other nonsense about professional identities and territories.  We should rise above these petty disagreements and give young theorists a better guide to which lexicon is useful in which situations, because neither natural language nor mathematics can accomplish all of the goals of theory across all domains.

The Rhetoric of Direct and Indirect Speech

By Graham Peterson

Indirect, ambiguous, vague speech is incredibly common in formal arguments, and it is incredibly ineffective at persuading anyone.

I think most of us already agree with that statement, because there are standard and good arguments against ambiguity.  It can signal that the author does not herself know exactly what she is arguing.  It can signal that the author himself is purposefully obfuscating his meaning, trying to be tricky.  It can signal that the author is overgeneralizing, without thinking hard about and looking hard at the issue in front of her.

But I want to extend the discussion, and note here two particular kinds of indirect speech, and their use in formal writing.  By indirect speech here, I mean little hinting and ambiguous comments that make inexplicit reference to a literature, an ism, a school; I mean large, categorical, ex cathedra assertions with strings of citations tacked on; I mean jargon that only loosely references classes of stylized findings and literatures.

Note that the fact that someone is being ambiguous or indirect isn’t necessarily a sign that he is an unfocused idiot.  Indirect speech is really useful, even (trigger warning) rational. Steven Pinker points out in an article about it that it’s a primary way we avoid conflict.  By only alluding to what one wants, or is asserting, and allowing for other parties to interpret one’s statement in multiple ways, one has recourse to run to the least offensive of its interpretations, and can plausibly deny that one intended the unfavorable interpretation.

Additionally, indirect speech helps us maintain in-groups.  Sarcastic jokes are, I think, the best example of this phenomenon.  I know Janet hates opera, and she knows that I know that she hates opera.  It’s tacit and common knowledge between us, part of the mutual constitution of our friendship.  So when she says she has a date and I ask her which opera she’s going to, we both smile and chuckle, reassured that we have a common bond.  Full blown sarcasm isn’t common in formal writing, but wink-nod comments are.

These otherwise perfectly reasonable uses of indirect speech lead to an unpersuasive mess in formal arguments.

First, the in-grouping mechanism of indirect speech.  When I base my argument on citations, jargon, and isms, instead of direct explication of the claims I am making, I convey to my reader, if she is an outsider, that she is in the company of experts and should just trust whatever ex cathedra assertions I make.  If my reader is an insider and well familiar with the common knowledge I am only alluding to, then I should ask myself why I’m arguing at all.

Whether the reader is an insider or an outsider, there is no argument, just the authority supposedly conveyed by disposition and in-group boundary keeping.

Now for the ambiguity-as-conflict-avoidance mechanism of indirect speech.  When I base my argument on diffuse citations to ginormous literatures, histories, or intellectual categories, I allow for a lot of ambiguity in interpretation.  That makes my claims unassailable, because nobody really knows exactly what I’m claiming, and I’m free to hedge, dodge, and qualify my way out of making an actual claim or demonstrating it with evidence.

People tend to accuse one another, regarding ambiguity, of “purposeful obfuscation,” but I doubt that the cynical interpretation is actually what’s going on in most cases (except for maybe a few postmodern authors who get off on playing games).  People generally want to avoid conflict with one another; intellectual hierarchies and territories are wooden and violent; and being purposefully ambiguous is a great way to avoid offending territorial babies.

So here we have, I think, a little sociology of good writing.  Bad writing comes from using indirect speech to reference the authority of in-groups, and it also uses indirect speech to avoid crossing boundaries between in-groups and out-groups.  Let’s stop it, and just have an adult conversation about difficult topics, saying exactly what we mean.

What is Ethnography Good For?

By Graham Peterson

Ethnography is good for a lot.  As Shamus Khan and Colin Jerolmack have recently argued, ethnography is, just like the measurement of relative prices, a great way to study revealed values and motivations (sociology speak) and revealed preferences (economics speak).

People have a pretty poor self-conscious understanding of the distal, structural, social-aggregate-level mechanisms that drive their behaviors.  There isn’t a social science that doesn’t try to catch people unawares, and make bird’s eye inferences about those behaviors.  So every social science needs methods that draw inferences on things that don’t come directly out of people’s mouths, pens, or keyboards.

Ethnography is good for that.  And yet, people will complain about ethnography — or rather, bad ethnography —  invoking the ideals of randomness and representativeness taught in statistics courses.  But bad ethnography is bad for a lot of the same reasons bad statistics are.

Bad ethnography comes from convenience samples of people’s personal networks, and samples on the dependent variable without comparison groups.  It replicates derivative, routine, and already established theories.  It pretends that the author didn’t know what he was going to find before he showed up, then does an elaborate dance in the write-up to seem objective.

People who do this drop a lot of “lived experience” and “in process” and “embodied practice” bombs that are supposed to end the conversation with their sheer authority.

Bad statistics does the same things.  It comes from convenience samples drawn from few-clicks-away government data, and samples on the dependent variable without comparison or counterfactual groups.  It replicates derivative, routine, and already established theories. It pretends the author didn’t know what she was going to find before she showed up, and the write-up feigns objectivity.

People who do this drop a lot of “three-asterisk” and “testable” and “control vector” bombs that are supposed to end the conversation with their sheer authority.

Now I want to argue that we need both ethnography and statistics, but not for the reasons I’ve heard some people run to.  Some people will claim that we need purely descriptive studies; they disavow causation and tell us ethnography gives us thick descriptions.  I have only heard this argument in the context of methodological debate, though.  Any interesting ethnography I’ve ever read has made a host of causal claims, and suggested their robustness with plausible interpretations of data.

Others have argued that ethnography helps us get on the ground and witness the emergence of causal mechanisms as they unfold.  You don’t have to step back a million miles, cover your eyes and write down a null, and then make causal claims ex post.  You can actually witness and take note of an antecedent, and its consequent, as they happen.

That argument is all well and fine for ethnographers and statisticians to both keep their jobs, and do their own thing at their own conferences.  But I want to argue that these people need to talk to one another, too, and for a principal reason that I don’t know how to phrase in anything other than statistical grammar, but which I bet can be translated.

Ethnography samples on the tails of distributions (imagine without loss of generality a normal population distribution of some trait or phenomenon), and statistical studies sample on measures of center.  Both measures can answer causal questions, because both have their own way of filtering out confounding noise in empirical observation, and illuminating causal mechanisms.

Ethnographers go out into the world and turn up the volume on their variable of interest, in order to increase their signal/noise ratio, by sampling on the extremities of its distribution. Note that this is the same motivation for large N inferential statistics.  The idea there is to turn up the N until you can successfully differentiate signal from noise.

So if one wants to study the mechanisms driving social mobility, one goes to a homeless shelter to study downward mobility, not a college campus.  That’s not cherry picking — it’s calibration of the measurement instrument.  And it turns out that one can turn up signal and turn down noise, both by turning up the N and turning it down, depending on which portion of a population distribution one is sampling on.

Tail sampling makes statistical thinkers nervous.  All of the nice results of the central limit theorem (which is built on successive estimates of center – not estimates of tails) fall apart. Estimators lose efficiency and become biased, on purpose.  But turning up the volume and sampling on tails is extremely effective for the same reason a caricature works — it exaggerates what is distinctive and different about a particular variable in contrast to the confounding weeds around it.*
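
A toy simulation can make the shared signal/noise logic concrete.  Everything below is invented for illustration (the population, the 0.3 “signal,” and the sample sizes are all assumptions, and the code is a sketch in Python with numpy), but it shows the claimed equivalence: sampling the tails buys a detectable signal at ethnographic sample sizes, the same way a huge N does for center sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented population: an outcome weakly driven by a trait, plus lots of noise.
N_POP = 100_000
trait = rng.normal(0, 1, N_POP)                  # e.g., position on some social gradient
outcome = 0.3 * trait + rng.normal(0, 1, N_POP)  # weak signal buried in noise

def signal_to_noise(idx):
    """t-statistic for the trait -> outcome slope within a subsample."""
    x = trait[idx] - trait[idx].mean()
    y = outcome[idx]
    slope = (x * y).sum() / (x * x).sum()
    resid = y - y.mean() - slope * x
    se = np.sqrt((resid @ resid) / (len(idx) - 2) / (x * x).sum())
    return slope / se

order = np.argsort(trait)
samples = {
    "random, n=30":   rng.choice(N_POP, 30, replace=False),      # small study of the center mass
    "random, n=3000": rng.choice(N_POP, 3000, replace=False),    # "turn up the N"
    "tails, n=30":    np.concatenate([order[:15], order[-15:]]), # "turn up the volume"
}

for label, idx in samples.items():
    print(f"{label:15s} t = {signal_to_noise(idx):5.1f}")
```

At these made-up settings the tail sample typically detects the slope easily, while the random sample of the same n = 30 typically does not; only cranking N to 3,000 buys the random sample comparable signal.  That is the caricature effect: exaggerate the variable of interest relative to the weeds around it.  The flip side, as the next paragraphs note, is that the tail sample tells you about the extremes, not the center.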

Both methods turn up signal and turn down noise.  Both methods observe primarily behaviors — stark and nonsensical on their own — and require textual, deductive, rhetorical, analogical, and narrative inference to make any sense of them.

So neither is superior, and neither can give us a whole picture of the population distribution of a social phenomenon, because each excludes, truncates, and draws discussion away from the other’s target on that distribution.  Statistical estimation of central tendency by definition and de facto obfuscates what we know about tails, and ethnography by definition and de facto obfuscates what we know about central tendency.

I’m sure archivists, humanists, interviewers, surveyors, and other observers of self-conscious narratives fit in here somehow.  I’m just not sure how yet.

*Here I have just argued that cartoons are, literally, useful scientific representations of reality.  Keep that in mind the next time you call someone’s argument a cartoon.  Cartoons are funny because they make explicit, with innuendo and misdirection, tacit common knowledge.

Image credit: behaviorgap.com

Why Did Japan Attack Pearl Harbor?

This is a guest post by Dave Hackerson. A previous post in this series can be found here.

The International Dateline is truly a fascinating thing. It’s like a magic wand of time that can both give and take, depending on which way you head. Each time my family and I fly back to the Midwest, the space-time continuum is seemingly suspended. Leave Tokyo at 4:00 pm, touch down in the Midwest at 2:00 pm, and then reach our final destination by 5:00 pm of the same day. Over 15 hours of travel that appears to have been compressed within the span of one single hour. I still can’t wrap my head around it at times.

This dateline has a way of slightly altering our perspective of historical events. Most Americans are familiar with the following quote from President Franklin Delano Roosevelt: “December 7th, a date which will live in infamy.” The date to which he refers is the day on which the Combined Fleet of the Japanese Imperial Navy under the command of Admiral Isoroku Yamamoto attacked elements of the US Pacific Fleet at Pearl Harbor. However, this is the narrative from the American side of the International Dateline. The December 24th edition of Shashin Shuhou (Photographic Weekly), a morale-boosting propaganda magazine published in Japan from 1938 until mid-1945, carried the following headline for its graphic two-page artist’s depiction of the attack: “Shattering the dawn: Attack on Pearl Harbor, December 8th”. The Japanese government christened the 8th day of each month as Taisho Hotaibi (literally “Day to Reverently Accept the Imperial Edict”) to commemorate the great victory over the United States at Pearl Harbor and the Imperial declaration of war on the US and its allies (the day also served to regularly renew the nation’s fervor and commitment to the war effort). Was Pearl Harbor a great victory for the Japanese? The answer to this question depends on the context in which the attack is viewed. From a purely military engagement view, it is safe to say that it was a resounding success, but did this single engagement succeed in shaping the course of the upcoming conflict? This is the question that the Mainichi Shinbun explored in the third installment of its series “Numbers tell a tale—Looking at the Pacific War through data” (the original, in Japanese, is here). True to the narrative on this side of the Pacific, this article was released on December 8th last year. Just as with the other installments in the series, it presents a slew of data that helps to put historical events into context.

“Did the attack on Pearl Harbor truly break the US? Japan’s massive gamble with only a quarter of the US’s national strength.” The title of the article does a nice job of setting up the exhaustive economic analysis it conducts in an attempt to answer this question. The very first thing the article does is compare the respective GDPs of the US and Japan in 1939. At this time, Japan’s GDP stood at 201.766 billion dollars. However, this amounted to less than a fourth of the US’s GDP of 930.828 billion dollars (note that figures are not adjusted for inflation). Even the UK had a larger GDP than Japan at 315.691 billion dollars. When you combine the GDPs of the US and UK, Japan suffered a disadvantage of greater than 6 to 1.

The next set of figures the article introduces is related to industrial capacity. The first thing it examines is iron production, and here the article makes reference to the quote by Prussian leader Otto von Bismarck, who claimed that it was iron which made a nation. Taking Bismarck at his word, Japan’s iron production did not bode well for its position as a nation. In 1940, Japan’s national production of crude steel was 6,856,000 tons per year. In contrast, the US was producing nearly nine times that amount at 60,766,000 tons per year. Likewise, Japan lagged far behind the US in terms of electric power output and automobile ownership. Japan’s electric power output in 1940 stood at 3.47 billion kWh, but this figure was dwarfed by the US’s output of 17.99 billion kWh. The gap in automobile ownership is also especially telling. The 1920s are often considered to be the decade in which America “hit the roads” and became enamored with the automobile, and this fact is backed up by the figures for automobiles owned by Americans in 1940. By that year, there were already 32,453,000 automobiles on roads in the US. Japan didn’t even come close, with only 152,000 automobiles scattered across the country.

In addition to lacking the physical resources and infrastructure to sustain a prolonged war of attrition, the makeup of Japan’s economy also posed a number of difficulties. Here the article emphasizes a major difference between Japan and other first-world nations at that time: Japan was not a “heavily industrialized nation”. This fact was clearly reflected in the country’s exports. In 1940 finished metal products accounted for only 2.8% of the nation’s exports, while raw silk, textiles, and clothing products made up more than a quarter. Likewise, only 30% of the nation’s income was generated by industry, which was less than the combined income of the agriculture, retail, and transport sectors. In the 1930s, Japan made every effort to expand its heavy industries. The Truman administration dispatched an investigative committee to Japan after the war to study the effects of America’s strategic bombing on Japan and its economy. The study found that in 1930 the industrial makeup of Japan was 38.2% heavy industry and 61.8% light industry. By 1937 Japan had succeeded in reversing these percentages to 57.8% and 42.2%, but the difficulty the nation had in securing the resources it needed for industry restricted its industrial capacity. The study did not mince words in its assessment of the Japanese economy. “The nation of Japan is truly a small country in every manner of speaking, and ultimately a weak nation with an industrial infrastructure dependent on imported materials and resources, utterly defenseless against every type of modern-day attack. The nation’s economy at its core was rooted in a cycle of daily subsistence, in which people only produced what they needed for that day. This left it with no extra capacity whatsoever, leaving it incapable of dealing with potential emergencies that may arise.”

To compensate for its lack of resources, Japan cast its gaze across the waters to Manchuria. Japan had steadily expanded its interests in Manchuria since its victory in the Russo-Japanese War in 1905, and positioned the South Manchuria Railway Company as the primary driver of this massive undertaking. This company was founded in 1906 upon the railway Japan received from Russia after the war, and was a national policy concern that was half-owned by the state. Japan aimed to make Manchuria the focal point of its own economic bloc that also included Korea, Taiwan, and China. While Manchuria was rich in natural resources, it was highly underdeveloped, and Japan ultimately exported far more machinery and infrastructure-building equipment than the resources it imported. While Japan was able to construct some of this machinery and equipment on its own, it was dependent on material and machine-related imports from the US, UK, the Netherlands, and Australia, the very nations against which it would ultimately go to war. In 1930, Japan exported nearly 96% of its raw silk thread to the US, which would send raw cotton back the other way. Japan would then process this cotton into finished cotton products for export to British India and the UK. Using the profits from these exports, Japan would then import strategic resources from the US, UK, and the Netherlands, such as oil, bauxite to create the aluminum used in aircraft, and the brass needed for the metal casings of bullets. The problematic nature of these trade relationships was pointed out by the Japanese economist Toichi Nawa of Osaka University of Commerce (present-day Osaka City University). In his book Research on the Japanese Spinning Industry and Raw Cotton Problem, Nawa stated that “any confrontation with the UK and US would be tragic, and must be avoided.” He further elaborated on Japan’s trade issues, saying that “the more Japan rushes along its efforts to expand heavy industry and its military industrial manufacturing capacity so it can bolster its policies on the continent (Manchuria and China), the more dependent it becomes on the international market, creating a cycle that leads to increased imports of raw materials. Herein lies the gravest of concerns for the Japanese economy.”

Nawa’s words proved to be all too prophetic. Japan’s aggressive agenda in China following the Marco Polo Bridge incident in 1937 brought heavy criticism from the global community. As the conflict in China escalated, Western nations retaliated with economic sanctions and restrictions on imports. The most devastating of these was the US’s decision to ban all oil exports to Japan in August of 1941. The US was the world’s largest producer of oil in 1940, accounting for over 60% of the world’s supply. The upper brass of the Imperial Japanese Navy had predicted that they had enough oil stockpiled to wage war for at least two and a half years, but if the UK and US shut off all oil exports, they would have no other choice but to move into Dutch territory and seize the oil fields there within 4 to 5 months in order to augment their supply. The attack on Pearl Harbor occurred exactly four months later.

Did Japan truly have the capacity as a nation to wage a modern war against a nation such as the United States? As tensions rose in US-Japan relations, Japanese government and military officials took a hard look at the data available in an attempt to answer this question.

A joint military and civilian economic study group organized around army paymaster Lt. Colonel Jiro Akimaru was set up in February 1941 to undertake this task. Known as the “Akimaru Agency”, this group was split into four sections to study the total war capacity of Japan, the UK and US, Germany, and the Soviet Union. The report they compiled by the end of September 1941 made the following conclusions:

1) The conflict between Japan’s military mobilization and its labor force has become fully evident. Japan has also reached its peak production capacity, and is unable to expand it any further.

2) Germany’s war capacity is now at a critical point.

3) Not a single flaw exists within the US’s war economy.

Even if Japan sacrificed the living standards of its populace to boost its war capacity, it still would not have the financial resources to compete with the US. Hiromi Arisawa, a member of the UK-US section who later served as president of Hosei University, made the following remarks when reflecting back on the report the Akimaru Agency prepared:

“Japan cut national consumption by 50%. In contrast, America only reduced its national consumption by 15 to 20%. Excluding the amount of supplies they shipped to other Allied nations at that time, the savings from this reduced consumption provided them with 35 billion dollars* for real war expenditures. That was 7.5 times greater than what Japan was capable of achieving with its cuts.”

Lt. Colonel Akimaru alluded to this fact when he presented the report at an internal staff conference meeting for the Army. Gen Sugiyama, Chief of Staff of the Supreme Command, acknowledged that the report was “nearly flawless” in its analysis. After praising Akimaru for the quality of the report, he then issued the following order. “The conclusion of your report goes against national policy. I want you to burn every copy of it immediately.”

Lt. Colonel Hideo Iwakuro, founder of the Nakano School and a military intelligence expert, was dispatched to the Japanese embassy in the US and took part in the planning of unofficial negotiations between the two countries. He returned to Japan in August of 1941 and met with influential figures in the political and business world, trying to persuade them of the futility of war with the US. At the Imperial General Headquarters Government Liaison Conference, Iwakuro presented the following data based on his own personal research to demonstrate the gap between the US and Japan in terms of national strength.

Iwakuro’s conclusion was straight and to the point. “The US has a 10-1 advantage in terms of total war capacity. All the Yamato-damashii (Japanese fighting spirit) we throw at them will not change anything. Japan has no prospects of victory.” Incidentally, the next day War Minister Hideki Tojo (who later became Prime Minister) immediately ordered the transfer of Iwakuro to a unit stationed in Cambodia. Iwakuro made the following remarks to the people who came to see him off at Tokyo Station. “If I should survive this ordeal and ever make it back to Tokyo, the Tokyo Station we see here will most assuredly lie in ruins.” Those words came to fruition in the spring of 1945.

Admiral Yamamoto salutes Japanese pilots.

So did the attack on Pearl Harbor truly break the US? The line attributed to Admiral Yamamoto at the end of the movie Tora! Tora! Tora! puts it quite succinctly: “All we have done is to awaken a sleeping giant and fill him with a terrible resolve.” Though there is debate about whether he actually uttered those words, Yamamoto was no stranger to the US, having studied at Harvard and spent time as a naval attaché, and he knew full well the awesome industrial might and material resources the nation possessed. Japan played a great hand with its attack on Pearl Harbor, but as Yamamoto knew, the deck was already stacked against it. The only thing that remained to be seen was how long Japan could make its kitty last.

Patricia Arquette’s “badass” Oscar Speech or: Intersectionality 101 (Again)

By Amanda Grigg

Patricia Arquette took on the pay gap and equal rights for women in her best supporting actress acceptance speech last night and the internet went wild.

“To every woman who gave birth, to every taxpayer and citizen of this nation, we have fought for everybody else’s equal rights. It’s our time to have wage equality once and for all and equal rights for women in the United States of America!”

Twitter was all abuzz (atwitter?). Vulture called it a “Badass, feminist” speech and The Daily Beast praised it similarly as a “Badass” call for “equality for women.” It was also responsible for a Meryl Streep/Jennifer Lopez reaction that launched a thousand gifs. Some people (myself included) were a bit put off by a white woman saying “we fought for everybody else/it’s our time” but overall the speech was a hit. The Washington Post was particularly impressed by Arquette’s emphasis on mothers, for whom the wage gap is especially prominent (more on this later). Unfortunately, when Arquette elaborated on her thoughts in the press room things went downhill. She went on to say “It’s time for all the women in America, and all the men that love women and all the gay people and all the people of color that we’ve all fought for to fight for us now.” Bitch Magazine lamented that Arquette went on to undermine her earlier statements and RH Reality Check called the press room comments “a spectacular intersectionality fail.” She has since clarified via Twitter, noting that women of color are most harmed by the pay gap and that she advocates for the equal rights of all women, and LGBT people. Of course she had already ignited yet another in a series of debates about the failures of mainstream white/celebrity feminism. Which is as good an excuse as any for a stroll through feminist history and intersectionality theory.

When we talk about intersectionality we’re talking about how oppression works and specifically, people whose identities place them at the intersection of multiple forms of oppression. So, for example, a white woman might be discriminated against because she’s a woman, and a Black man might be discriminated against because he’s Black. A Black woman, on the other hand, will experience discrimination as a result of her race AND gender. In her work coining the term intersectionality, Kimberlé Crenshaw uses the imagery of an actual traffic intersection. As she explains, some accidents (oppression) might be the result of cars coming from one direction (i.e. discrimination against white women based on sexism) but some might result from cars coming from multiple directions and colliding at the intersection (i.e. discrimination against Black women as a result of their race and gender).

Equally important, the oppression that Black women experience does not just look like a combination of the sexism faced by white women and the racism faced by Black men. Here we can think back to campaigns for reproductive rights in the 1960s and 70s. White women were largely fighting for their right to choose not to have children, in the form of access to safe birth control methods and abortion. At the same time women of color were experiencing forced and coerced sterilization at alarming rates, though this was largely ignored by mainstream feminist groups fighting for reproductive rights. Women of color were thus calling for greater attention to their right to choose to have children, including freedom from forced sterilization and the material conditions necessary to reasonably choose motherhood (including calls for access to child care).[1] So while both white women and women of color experienced oppression targeting their ability to reproduce, the form that oppression took looked very different depending on a woman’s race.

As evidence of the extent to which the experiences of white women (and demands of white feminists) obscure those of women of color, consider this. You’re probably familiar with Roe v. Wade, the Supreme Court case that legalized abortion. You might even remember learning about it in high school history. What about the establishment of federal regulations for sterilization? They came just a few years after Roe v. Wade, represented an equally important victory for reproductive justice, and resulted from an equally impressive grass roots campaign.[2] But you’re far less likely to have heard about them, let alone seen them in a high school history book.

And now, a history lesson (bear with me).

Consider the ideal of the female homemaker popularized around the time of the industrial revolution. Scholars might refer to this set of ideals as the “cult of domesticity” or “true womanhood” or as part of the ideology of “public and private spheres” but put simply, women were seen as too pure and delicate for the harsh working world, and as naturally suited to creating a warm, welcoming home to which working men could retreat. Though women’s work was understood to be necessary, it was not valued socially or economically to the extent that remunerative work in the public sphere was valued. Today women are still far more likely than men to remain at home with children, and to perform a larger portion of housework and child care duties even if both partners are employed [3].

So of course feminists have critiqued this model and worked to both open up opportunities for women outside of the home and to gain recognition for the immense value of work historically performed by women in the home.

So far so good? Well, no. Because this ideal applied (and to the extent that it still exists, applies) to a very select group of upper and middle-class white women. Black women have never been seen as too frail for work and since slavery have been forced, expected, or relegated to performing manual labor. Black women haven’t been expected to act as full time homemakers, nor have economic conditions – including formal and informal barriers and inequalities that made earning a stable family wage nearly impossible for Black men – historically permitted it. Further, where white women might be pressured to devote more time to mothering, the mothering of Black women has consistently been devalued in the United States.

The reproductive labor of white women was so valued that in 1935 the federal government created a federal assistance program to allow single mothers to provide for their children while remaining in the home [4]. The program, called Aid to Families with Dependent Children, included a number of measures that had the effect of excluding or discouraging the participation of poor women of color. The wide discretion of caseworkers to grant, deny, and revoke benefits, the race-based standards of “suitability” for aid, including living arrangements and cooking styles, the frequent absence of case workers who spoke Spanish, and the regular denial of black women because they were deemed to be employable and therefore not deserving, resulted in the systematic mistreatment and exclusion of Black and Latina women.[5]

In the 1960s civil rights and welfare rights organizers were successful in their efforts to end features of AFDC policy that denied assistance to poor women of color.[6] With the end of race-based access to benefits, the generosity of benefits began to be determined by race, resulting in uneven levels of support for white and black mothers.[7] Quantitative research on AFDC benefits following expansion of access has found that states with higher proportions of black single mothers systematically provided less generous benefits than states with higher proportions of white single mothers.[8] As the racial makeup of the AFDC program changed, critics began arguing that the program encouraged unwed motherhood, and that it did so among the least productive members of society.[9] By the 1990s, AFDC was much-maligned, and notably associated with black single mothers who were often assumed to be taking advantage of welfare. In response to critiques the 1996 welfare reform replaced AFDC with Temporary Assistance for Needy Families (TANF). Unlike AFDC, the TANF program required single mothers receiving aid to work outside of the home. Critics of TANF have suggested that it clearly demonstrates the devaluation of black women’s reproductive labor. In her criticism of TANF’s work requirements, Gwendolyn Mink argues that social approval and value of reproductive labor “does not exist for the care-giving work of poor single mothers, in part because they are poor and single, and in part because the poor single mother of popular imagination is Black.”[10] Similarly Dorothy Roberts has argued that “Forcing low-skilled mothers into the workforce regardless of the type or conditions of employment available to them assumes that any job is more beneficial to their families than the care they provide at home.”[11]

In light of all of this, my initial description of sexist work/family ideals appears to be woefully incomplete. More importantly, it generalizes about the experiences of “women” in a way that reinforces the notion that white women set the standard for womanhood even as it rejects that standard. In so doing it makes it far less likely that we will work to combat all forms of oppression as part of our fight against our specific experience of oppression. White feminists might, for example, fight to get women out of the home, but do so without challenging the division of labor in the home or the structure of the modern workplace, thus reinforcing a system in which domestic work is still undervalued but one in which poor women/women of color/migrant women perform the domestic work abdicated by privileged women for little pay, with few benefits, and in positions of immense vulnerability. Just as an example.

The same thing happens when white feminists speak about the problems facing “women” when the content of their argument is specific to middle and upper-income white women. Or, when someone speaks about the problems facing women in opposition to or as separate from the problems facing the poor, women of color, the disabled, immigrants, or any other group to which a woman could belong (enter Patricia Arquette). This doesn’t mean that women can’t work together to address a problem like the wage gap – just that we can’t assume that all women experience that problem in the same way, or that a solution that combats the problem for women positioned in one way will address it for all women and won’t actually make it worse for some women. It also doesn’t mean that women, and men, and LGBT activists, and civil rights advocates can’t work together. In fact, an intersectional understanding of oppressions as fundamentally linked would suggest that it’s vital that we do so. But it does mean that instead of telling groups who also continue to experience and combat serious discrimination on multiple fronts (and which include women) that it’s “our time” and presuming that this isn’t “their issue” we should ask, how does this issue affect you? What do you think needs to be done? How can we work together on this? And then – this part is key – we listen.


[1] See Jennifer Nelson, Women of Color and the Reproductive Rights Movement

[2] http://isreview.org/issue/91/black-feminism-and-intersectionality

[3] See: google.

[4] Gwendolyn Mink goes so far as to say “Economic provision for mothers’ care of children was once the primary purpose of welfare.” (Mink, Welfare’s End, 105)

[5] Premilla Nadasen, Welfare Warriors; Mink Welfare’s End; Mimi Abramovitz Regulating the Lives of Women

[6] Jill Quadagno, The Color of Welfare 

[7] Stephanie Moller, 2002, “Gender and Race in the U.S. Welfare State”

[8] Moller 2002

[9] See: William Shockley; The Moynihan Report

[10] Mink, Welfare’s End, p. 120

[11] Dorothy Roberts, Boston Review http://new.bostonreview.net/BR29.2/roberts.html

It’s The Choosing We Enjoy, Not The Chosen

By Graham Peterson

Jonathan Haidt discusses in The Happiness Hypothesis a couple of troubling findings in psychological and behavioral economic research.  People become quickly overwhelmed with more choices, and the stuff they’re choosing doesn’t confer greater happiness on them.  It starts to look like maybe we’d all be better off without all these choices and all this stuff.

But the measurement of subjective well-being can only measure how happy people are after they’ve made a choice — the measurement assumes that we get nothing out of choosing itself.  Haidt mentions — and I think he’s correct — that it’s not the ends that we do our choosing and striving for — it’s the striving and choosing themselves that we enjoy.

In the economic analysis, it’s the weighing of costs and benefits at the margin, and ranking of our preferences in cultural dialogue, that we really enjoy – not the imaginary stream of utility that comes through once the choice problem is solved.

Haidt and other positive psychologists call it the “progress principle.”  We enjoy progressing ever forward, and the “flow” that attends to doing so.  It’s not the stuff we bring home that we enjoy; it’s the ritual of shopping.  It’s not the paycheck we enjoy; it’s the ritual of working.  It’s not the kids we enjoy; it’s the ritual of making them.

Now, although you can quickly overwhelm a lab subject with choices, the fact that there are more things in society does not imply that there are more things that individuals have to sort.  All the choices in the world don’t go into a single urn from which each person draws their choices.  The social division of labor — little friend cliques and church groups and tech startups — bracket complexity into manageable pockets.

Hence individuals don’t get overwhelmed.  Individuals, and the sub-groups they belong to, just become more unique and individual.  Such a process increases entry into and exit from institutions like marriages and jobs, undermining the power of groups over individuals, and increasing our opportunities for striving.

The process of increasing social complexity can go on forever, even though our brains are rather constrained.  We share in the task of choosing and creating by doing it in groups.  And we experience transcendence in interacting.  As such, we shouldn’t be surprised that people continue to march forward, whipped up in the joy and flow of that progress.