The Romantic Bias Toward The Past, Again

By Graham Peterson

It turns out that the ancient Egyptians probably hauled all of those gigantic stones around by wetting the sand, which cuts friction.  We’ve got evidence two ways.  Someone ran careful experiments at small scale, with little blocks and sand.  And we have a picture of the Egyptians pouring water on sand in front of a giant statue.  How could it have remained a mystery?  Interpretation.

Reports Bonn, the principal investigator, “Egyptologists had been interpreting the water as part of a purification ritual, and had never sought a scientific explanation.”

What he means more precisely is “[material] explanation,” and the material blind spot is a much bigger problem in anthropological, sociological, historical, and archeological research than a few drawings in Egypt.  Thanks to the proliferation of bad theory, say Rousseau’s theory of noble savages, or the colonial pretense of early missionaries, we still favor infantilizing theories of the natives, and of people in history.

For decades the above drawing went on being interpreted, apparently without question, as a purely symbolic display.  But why wouldn’t we think that, like us, the Egyptians wrote user manuals for their inventions?  Why wouldn’t they want to represent their accomplishments accurately?

To be fair, it is a depiction of a statue that is clearly religious.  But when an electrician wires a church, he doesn’t bless his fish tape.

War in indigenous societies was interpreted similarly, symbolically — like a child’s game — for the longest time.  Theorists of war were proud of the magnificent organization of war, the superiority of imperialism, and so on.  They wouldn’t have defamed civilized war by comparing it to 40 or so naked men running at each other with spears.  And yet on closer review, it turns out the comparison is the best one out there.

Regardless of multiculturalism, and regardless of the piling up of ethnographic, archeological, and historical evidence, we still haven’t made much progress ideologically.  Some portion of people want to believe, out of ethnocentricity, that our ancestors were little kids.  Some portion want to believe it out of pity or cultural sensitivity.

But what about starting with the idea that the natives and our ancestors were probably a lot like you and me, with most of the same motivations, needs, cognitive processes, and social institutions?

Lawrence Keeley and Steven Pinker have caused a lot of consternation among the cultural sensitivity crowd by pointing out how violent and materially motivated indigenous people are and were.*  The Egyptology discovery above won’t blow as much ideological hair back, but the anti-materialism bias is worth considering more deeply.  The myth of our magical and infantile past, full of symbols and rituals, has to go.

People have for the majority of history broken their backs producing food, killing and stealing from each other, sometimes trading and inventing, and raising children.  We should expect their artifacts and symbolic systems to reflect the fact, not to serve as an existential reflection pool for the vanity and pity of modern intellectuals.

*I suppose the cultural sensitivity crowd wants to believe that rapacious imperialism is modern, something that rich white men invented as a warm up for bad TV and minimum wage jobs.

When Safety And Medicine Become Weapons

By Graham Peterson

Since Christina Hoff Sommers gave a recent talk at Oberlin, people have been wondering why activists call ideas they don’t like “unsafe” or why they need “safe space” shelters from offensive speech.  Well, it’s an old trope, equating dangerous speech with physical danger.  Jonathan Rauch detailed at length in 1993 how opponents of speech often invoke the violence metaphor to get actual, real, legalistic violence on their side.

But the new popularity of safety dialogue isn’t a conniving political maneuver.  These students are completely sincere.  They are convinced that criticism of their movements enables the harassers they oppose, harassers who drive transgender people to suicide and make would-be rapists feel more comfortable.  They sincerely believe that there’s a straight line from criticism of feminist ideas to violence against women and non-conforming genders.

Criticism of feminism does not often explicitly threaten women (although some small portion of idiots on the internet send death and rape threats to feminists — thanks idiots).  But nevertheless, the idea that criticism of feminism leads to violence, and is therefore itself violence, is completely sincere and must be grappled with if we’re going to restore a reasoned dialogue.

What’s interesting — and extremely effective — about the recent violence metaphor is that it now has the authority of the medical community on its side.  Psychiatrists and psychologists are powerful people who largely decide for us who is and is not a morally culpable deviant.  Thus advocates have borrowed from the psychiatric lexicon in order to borrow its authority.

Not all of the PTSD and safety dialogue is just rhetorical borrowing.  A substantial portion of the rape advocacy community are themselves actual victims.  It is altogether fair that those victims should receive more authority than they have, given the way they’ve been historically dismissed.  If victims can heal wounds and reestablish their dignity through psychological treatment — let’s have more of that.  Indeed let’s have more campus resources dedicated to it.

But activists have diagnosed themselves and one another with PTSD, and invited anyone who sympathizes with them (whether that person has been violently traumatized or not) to diagnose themselves with PTSD.  Ergo, the definition of the disorder has been expanded to include anyone who feels psychic offense, or threatened by ideas that (putatively) lead to violence.

In order to understand the importation of PTSD rhetoric into the campus rape movement, we have to understand the history of the LGBTQ movement, and how it made friends with psychiatrists.  They weren’t always friends.  Psychiatry once considered homosexuality pathological, garden variety moral deviance (see Freud’s essay on narcissism).  But psychiatrists eventually decided that gay people had no control over their sexual desires, and that those desires belong to a biological — not lifestyle — domain.

Voilà, gay rights: “give gay people freedom of choice because they have no choice in their desires.”  It’s a paradoxical legal and philosophical argument, if you think about it.  But Born This Way has been wildly effective, and it is empirically grounded.

The world would probably be better for gays if their choices were dignified as adult and free choices regardless of whether their urges are inborn.  For example, many people felt recently that, “ok, gay people can’t control their sex drive so let them do who they will, but they can control their marriage drive because that is social and sacred, so we draw the line at marriage.”  Denoting gays Official Victims of biological necessity helped the lobby for administrative and legal protections, but it enshrined their second class citizenship.

As a matter of tradition, because of feminism’s alliance with the LGBTQ movement, which has a recent and wildly successful alliance with psychiatry, we can’t really blame campus rape activists for borrowing from the psychological lexicon, inflating the definition of PTSD and psychic harm to include any and all criticism of their movement.  It lends unassailable rhetorical authority to their claims, and in their view, brings into their fold all of the untold billions of victims who have been so far ignored and silenced.  It’s really genius.

But if in the first act activists and victims on campus win the point, they will in the second act lose the debate, signing up for and reifying their own permanent second-best.

Did Whites Steal Rock ‘n’ Roll From Blacks?

By Graham Peterson

Jim Morrison ain’t the final word on Rock ’n’ Roll history, but he’s a good start. In the clip below, Jim opens up a can of forgotten, but not rotten, Rock ’n’ Roll history — its white roots.

The view that Rock ‘n’ Roll was ripped off from black Rhythm & Blues is, more or less, the predominant view. It is not an uncontested view, as Wikipedia admits. But if you grew up on the left, or around musicians and heads, you probably learned that Rock ’n’ Roll is blood money from yet another Great American Swindle. Jim agrees; of course Rock ‘n’ Roll evolved out of The Blues. But it also evolved out of early Country music, out of Bluegrass and Folk — white genres.

It’s an important point, not because white power, but because the white details of Rock ‘n’ Roll history got left on the shelf for a bad theory. Theory is a flashlight that tells you where the goods are. Unfortunately, critical theory has bad batteries and a narrow beam.

Without belaboring Horkheimer et al., the idea in critical theory is that culture fits a metaphor of exploitation, of theft. Culture is just another expression of colonial imperialism. Cultures get invaded and assimilated into a homogenous mass. It follows from this vision that black music got co-opted and assimilated into white music, in order to keep blacks down. That’s cultural appropriation. But, like Jim says, some of the main ingredients in Rock ‘n’ Roll were imported from Europe, through whites. Critical theory has no place for these folks.

The Europeans who brought bluegrass and folk to the United States were Scotch-Irish immigrants who settled across Appalachia. At the time, Scotland and Ireland were backwaters that had a reputation for the clan, the bar fight, and the broken accent. When they emigrated to get away from British exclusion, they brought instruments. And some fantastic music. You can hear that traditional Scotch-Irish influence still reverberating in modern Bluegrass, Folk, and Country — it’s uncanny. Fiddles. 6/8 time signatures. Twangs and bent notes. Line dancing. Poetry about poverty and misfortune.

Scotch-Irish Americans in Appalachia have always been, and unfortunately still are, largely poor. They didn’t get into singin’ about broke down Ford trucks by exploiting anyone — just like blacks didn’t get into singin’ about the blues by exploiting anyone. So, naturally, because Appalachian whites and blacks shared the same fate — and often the same holler — they mixed cultures. Then came Rock ’n’ Roll. And when kids from nice white suburbs started buying it, a few poor whites and blacks got their American Dream.

No doubt, the social exclusion of the ’50s and ’60s had its routine influence on Rock ’n’ Roll. The critical theoretic swindle story has some merit. Black musicians, who played the same tunes as whites, were not allowed to play the same stages. Black artists got squeezed out of radio rotations by racist DJs. And so on. But Bo Diddley was no slouch. He and a range of other blacks made it big. The racism in Rock ’n’ Roll history is arguably a sideshow to the main stage, where blacks and whites were mixing to everyone’s benefit.

Cultures have always sampled and remixed from each other’s stuff. Take for instance the remixes that came out of Celtic Western Europe in the 2nd century BC. Archeological digs have revealed that the Celts imported art from Greece (that’s a long trip!), and that they eventually made their own Greek-inspired art. Here again the power and exploitation thesis fails.

The Celts were poor. The Greeks were rich. The Celts were a fledgling, diffuse band of tribes. The Greeks were a militarily and culturally superior collection of city-states. Despite their differences in power, it was the poor Celts who adopted the rich Greeks’ art. They traded artifacts and traditions peacefully, and to their mutual betterment.

Cultural mixing is as old as dirt, or rather, as old as trade. It happened across powers when timid Celts met well stocked Greeks in Europe. It happened across races when dirt poor immigrants met dirt poor blacks in Appalachia. And it happened across classes when poor Rock ’n’ Roll musicians played for rich city slickers across America.

We need to think harder about where cultures come from. Cultural appropriation, the swindle story, can be, and sometimes is, a way that upper class people reproduce their status. But even more often, the borrowing, imitating, trading, and selling of cultures has been a way people make and expand their communities, peacefully. It’s a beautiful thing, and we should, while remembering some sad missteps, celebrate cultural trade as a testament to a liberal society.

Rock ’n’ Roll ain’t a black or white thing. It’s a black and white thing.

Is Math or English Harder to Theory With?

By Graham Peterson

Fabio Rojas and crew got into a discussion on Twitter about whether mathematical theory in social science is more difficult than verbal theory, or as Fabio summed it up:

fixed point theorems

Everyone in the thread agreed that dense verbal theory is much harder to read than mathematical theory.  But I think they’re about the same.  (Andreas Glaeser’s opinion on Foucault is worth mentioning here [insert arms like a symphony]: “you think to yourself, ‘now this is what language can do.’”)

We have a lot of folk assumptions about the difference between “verbal” and “formal” theory in social science, and too much violence between their practitioners, but very little discussion of their actual differences or advantages.  Note quickly: both verbal and mathematical theory are “formal.”  They both aim to generalize formal structures of logic, so I’m ditching the adjective “formal” and will refer to “mathematical” theory henceforth.

Bad verbal theory suffers from the same problems bad mathematical theory does.  If you ever get mad enough at mathematics that you read Why the Professor Can’t Teach, a criticism of mathematical pedagogy and research by Morris Kline, you’ll notice that most of the problems he identifies are exactly analogous for verbal theory.  Kline laments mathematics that generalizes for the sake of generalization, and he laments the presentation of general proofs without intuition and examples.

These are, to my eye, exactly the things that make Foucault et al. extraordinarily difficult to read.  Concepts get generalized for their own sake, until the exercise becomes so meta-theoretic it is only interesting to a handful of specialists, and applicable to nothing.  It might be the case that the material world is merely a realization of the world of ideas, but I really doubt that we’re learning much from “reimagining neoliberal ontologies.”

And where are the examples?  You just know that when you’re reading Bourdieu, there’s some vignette of piano lessons dancing around in his head, while he’s drawing sweeping generalizations about cultural capital.  And he’s probably generalizing from some children’s game where one gains and loses power, while he’s talking about misrecognized exchanges of subconscious power. But without making those examples explicit, the reader cannot extrapolate to generalities in the same way Bourdieu has.

Good theorists present their ideas like recipes or step-by-step instruction manuals, not assertions of propositions and generalities.  That is, good theorists will walk you through exactly those steps they took (usually starting with a rudimentary kernel, case, or example) to arrive at a generality, rather than presenting themselves as if their brains just trade in dancing abstractions.  Both mathematical and verbal theorists, though, are tempted to do the opposite.

We reason inductively from one thing or another until we think we’ve found something general.  Then we turn around and assert the generality of that proposition, and try to prove it deductively.  We (sometimes) eventually present the case or example as if it’s just a convenient afterthought or demonstration.  When in fact that kernel drove our logic the entire time!

If we can drop the pomp and pretense, and focus on communicating our thoughts in the way that we actually arrived at them, we will have much clearer and easier to read mathematical and verbal theory.

Also note that good mathematical and verbal theory do pretty much the same things.

Creativity in mathematical and verbal theory is metaphorical and analogical, not deductive.  That is, mathematical creativity comes from (say) writing down a telescoping equivalence into a proof to clean it up, or recognizing a dual from a different subfield.  In verbal theory, analogical creativity comes from (say) writing down an epidemiological metaphor in a new context, like crowd dynamics.
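To make the telescoping move concrete (this is a standard textbook identity, not anything specific to the Twitter thread): rewriting each term of a sum as a difference lets everything in the middle cancel, leaving only the endpoints.

```latex
\sum_{k=1}^{n} \frac{1}{k(k+1)}
  = \sum_{k=1}^{n} \left( \frac{1}{k} - \frac{1}{k+1} \right)
  = 1 - \frac{1}{n+1}
```

A long chain of cancellations collapses into two boundary terms, which is exactly the kind of clean-up move meant above.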

A creative thinker transposes the formal structure of an argument to a domain where she intuits the model will help comprehend the situation better than whatever story is currently attached.  Full stop.  There is no difference between doing so with a fixed point theorem, entropy function, language game, or model of mutually constitutive social interactions.

Or consider that Bourbaki symbols and the Greek alphabet are not always the most precise and compact language in which to present an idea.  We have intuitions because they are computationally efficient, and it turns out that in groups, intuitive Bayesians make lots of incredibly good predictions.  It is a very strange logic and practice that justifies turning a discussion of expected utility into a derivation of the expectation operator from primitives.

We have and use grammar in natural language that defines hypotheticals and probabilities all the time, “could, should, would, may, might, ought,” and we have and use grammar in natural language that defines quantities and their relations all the time, “most, more, just as much as, lots.”  For many problems, replacing these terms with mathematical symbols would be cumbersome, obfuscatory, and useless.

Neither mathematical nor verbal theory can be reduced to some historical turf war between continental social theory and economics, or some other nonsense about professional identities and territories.  We should rise above these petty disagreements and give young theorists a better guide to which lexicon is useful in which situations, because neither natural language nor mathematics can accomplish all of the goals of theory across all domains.

The Rhetoric of Direct and Indirect Speech

By Graham Peterson

Indirect, ambiguous, vague speech is incredibly common in formal arguments, and it is incredibly ineffective at persuading anyone.

I think most of us already agree with that statement, because there are standard and good arguments against ambiguity.  It can signal that the author does not herself know exactly what she is arguing.  It can signal that the author himself is purposefully obfuscating his meaning, trying to be tricky.  It can signal that the author is overgeneralizing, without thinking hard about and looking hard at the issue in front of her.

But I want to extend the discussion, and note here two particular kinds of indirect speech, and their use in formal writing.  By indirect speech here, I mean little hinting and ambiguous comments that make inexplicit reference to a literature, an ism, a school; I mean large, categorical, ex cathedra assertions with strings of citations tacked on; I mean jargon that only loosely references classes of stylized findings and literatures.

Note that the fact that someone is being ambiguous or indirect isn’t necessarily a sign that he is an unfocused idiot.  Indirect speech is really useful, even (trigger warning) rational.  Steven Pinker points out in an article about it that it’s a primary way we avoid conflict.  By only alluding to what one wants, or is asserting, and allowing for other parties to interpret one’s statement in multiple ways, one has recourse to run to the least offensive of its interpretations, and can plausibly deny that one intended the unfavorable interpretation.

Additionally, indirect speech helps us maintain in-groups.  Sarcastic jokes are, I think, the best example of this phenomenon.  I know Janet hates opera, and she knows that I know that she hates opera.  It’s tacit and common knowledge between us, part of the mutual constitution of our friendship.  So when she says she has a date and I ask her which opera she’s going to, we both smile and chuckle, reassured that we have a common bond.  Full blown sarcasm isn’t common in formal writing, but wink-nod comments are.

These otherwise perfectly reasonable uses of indirect speech lead to an unpersuasive mess in formal arguments.

First, the in-grouping mechanism of indirect speech.  When I base my argument on citations, jargon, and isms, instead of direct explication of the claims I am making, I convey to my reader, if she is an outsider, that she is in the company of experts and should just trust whatever ex cathedra assertions I make.  If my reader is an insider and well familiar with the common knowledge I am only alluding to, then I should ask myself why I’m arguing at all.

Whether the reader is an insider or an outsider, there is no argument, just the authority supposedly conveyed by disposition and in-group boundary keeping.

Now for the ambiguity-as-conflict-avoidance mechanism of indirect speech.  When I base my argument on diffuse citations to ginormous literatures, histories, or intellectual categories, I allow for a lot of ambiguity in interpretation.  That makes my claims unassailable, because nobody really knows exactly what I’m claiming, and I’m free to hedge, dodge, and qualify my way out of making an actual claim or demonstrating it with evidence.

People tend to accuse one another, regarding ambiguity, of “purposeful obfuscation,” but I doubt that the cynical interpretation is actually what’s going on in most cases (except for maybe a few postmodern authors who get off on playing games).  People generally want to avoid conflict with one another; intellectual hierarchies and territories are wooden and violent; and being purposefully ambiguous is a great way to avoid offending territorial babies.

So here we have, I think, a little sociology of good writing.  Bad writing comes from using indirect speech to reference the authority of in-groups, and it also uses indirect speech to avoid crossing boundaries between in-groups and out-groups.  Let’s stop it, and just have an adult conversation about difficult topics, saying exactly what we mean.

What is Ethnography Good For?

By Graham Peterson

Ethnography is good for a lot.  Like Shamus Khan and Colin Jerolmack have recently argued, ethnography is, just like the measurement of relative prices, a great way to study revealed values and motivations (sociology speak) and revealed preferences (economics speak).

People have a pretty poor self-conscious understanding of the distal, structural, social-aggregate-level mechanisms that drive their behaviors.  There isn’t a social science that doesn’t try to catch people unawares, and make bird’s eye inferences about those behaviors.  So every social science needs methods that draw inferences on things that don’t come directly out of people’s mouths, pens, or keyboards.

Ethnography is good for that.  And yet, people will complain about ethnography — or rather, bad ethnography —  invoking the ideals of randomness and representativeness taught in statistics courses.  But bad ethnography is bad for a lot of the same reasons bad statistics are.

Bad ethnography comes from convenience samples of people’s personal networks, and samples on the dependent variable without comparison groups.  It replicates derivative, routine, and already established theories.  It pretends that the author didn’t know what he was going to find before he showed up, then does an elaborate dance in the write-up trying to pretend to be objective.

People who do this drop a lot of “lived experience” and “in process” and “embodied practice” bombs that are supposed to end the conversation with their sheer authority.

Bad statistics does the same things.  It comes from convenience samples drawn from few-clicks-away government data, and samples on the dependent variable without comparison or counterfactual groups.  It replicates derivative, routine, and already established theories. It pretends the author didn’t know what she was going to find before she showed up, and the write up feigns objectivity.

People who do this drop a lot of “three-asterisk” and “testable” and “control vector” bombs that are supposed to end the conversation with their sheer authority.

Now I want to argue that we need both ethnography and statistics, but not for the reasons I’ve heard some people run to.  Some people will claim that we need purely descriptive studies; they abdicate causation and tell us ethnography gives us thick descriptions.  I have only heard this argument in the context of methodological debate, though.  Any interesting ethnography I’ve ever read has made a host of causal claims, and suggested their robustness with plausible interpretations of data.

Others have argued that ethnography helps us get on the ground and witness the emergence of causal mechanisms as they unfold.  You don’t have to step back a million miles, cover your eyes and write down a null, and then make causal claims ex post.  You can actually witness and take note of an antecedent, and its consequent, as they happen.

That argument is well and fine for ethnographers and statisticians to both keep their jobs, and do their own thing at their own conferences.  But I want to argue that these people need to talk to one another, too, and for a principal reason that I don’t know how to phrase in anything but statistical grammar, though I bet it can be translated.

Ethnography samples on the tails of distributions (imagine without loss of generality a normal population distribution of some trait or phenomenon), and statistical studies sample on measures of center.  Both measures can answer causal questions, because both have their own way of filtering out confounding noise in empirical observation, and illuminating causal mechanisms.

Ethnographers go out into the world and turn up the volume on their variable of interest, in order to increase their signal/noise ratio, by sampling on the extremities of its distribution. Note that this is the same motivation for large N inferential statistics.  The idea there is to turn up the N until you can successfully differentiate signal from noise.

So if one wants to study the mechanisms driving social mobility, one goes to a homeless shelter to study downward mobility, not a college campus.  That’s not cherry picking — it’s calibration of the measurement instrument.  And it turns out that one can turn up signal and turn down noise, both by turning up the N and turning it down, depending on which portion of a population distribution one is sampling on.

Tail sampling makes statistical thinkers nervous.  All of the nice results of the central limit theorem (which is built on successive estimates of center – not estimates of tails) fall apart. Estimators lose efficiency and become biased, on purpose.  But turning up the volume and sampling on tails is extremely effective for the same reason a caricature works — it exaggerates what is distinctive and different about a particular variable in contrast to the confounding weeds around it.*
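The caricature point can be sketched with a toy simulation (the 0.3 effect size, the noise scale, and the sample sizes are all illustrative assumptions, not figures from any real study): draw a small, ethnography-sized sample from the tails of a trait distribution, draw an equally small sample from around the center, and compare the standardized difference in an outcome the trait weakly drives.

```python
import random
import statistics

random.seed(42)

# A toy population: a latent trait x weakly drives an outcome y.
# The 0.3 effect size and unit noise are illustrative assumptions.
N = 100_000
population = []
for _ in range(N):
    x = random.gauss(0, 1)             # the trait of interest
    y = 0.3 * x + random.gauss(0, 1)   # outcome: quiet signal, loud noise
    population.append((x, y))

population.sort(key=lambda pair: pair[0])
n = 50  # a small, ethnography-sized sample per group

# Tail sampling: n cases from each extreme of the trait distribution.
low_tail = [y for _, y in population[:n]]
high_tail = [y for _, y in population[-n:]]

# Center sampling: 2n cases at random, split at the sample's trait median.
draw = random.sample(population, 2 * n)
med = statistics.median(x for x, _ in draw)
low_mid = [y for x, y in draw if x <= med]
high_mid = [y for x, y in draw if x > med]

def signal_to_noise(a, b):
    """Standardized difference in group means: signal over noise."""
    pooled_sd = statistics.pstdev(a + b)
    return (statistics.mean(b) - statistics.mean(a)) / pooled_sd

print("tail sample  :", round(signal_to_noise(low_tail, high_tail), 2))
print("center sample:", round(signal_to_noise(low_mid, high_mid), 2))
```

With settings like these, the tail sample’s standardized difference comes out several times larger than the center sample’s: fewer cases, louder signal, which is the calibration-not-cherry-picking point in statistical dress.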

Both methods turn up signal and turn down noise.  Both methods observe primarily behaviors — stark and nonsensical on their own — and require textual, deductive, rhetorical, analogical, and narrative inference to make any sense of them.

So neither is superior, and neither can give us a whole picture of the population distribution of a social phenomenon, because each excludes, truncates, and draws discussion away from the other’s target on that distribution.  Statistical estimation of central tendency by definition and de facto obfuscates what we know about tails, and ethnography by definition and de facto obfuscates what we know about central tendency.

I’m sure archivists, humanists, interviewers, surveyors, and other observers of self-conscious narratives fit in here somehow.  I’m just not sure how yet.

*Here I have just argued that cartoons are, literally, useful scientific representations of reality.  Keep that in mind the next time you call someone’s argument a cartoon.  Cartoons are funny because they make explicit, with innuendo and misdirection, tacit common knowledge.
