Do Markets Make People Selfish?

By Graham Peterson

Jason Brennan and Peter Jaworski have just published a clever new article in Ethics called “Markets without Symbolic Limits,” in which they throw some new light on the repugnant markets literature (think Al Roth, Michael Sandel, etc.).  The repugnant markets literature asks why people are a-OK with markets in some things (PlayStations), but not others (kidneys).

Brennan and Jaworski’s addition is clever because it goes beyond the usual arguments recommending markets, which are materialist and come from the guys in the economics and philosophy departments, and addresses market critics on their own English department turf, in symbolic terms.  Brennan and Jaworski have landed on the first principle of persuasion — if your audience speaks symbolism, you’re not going to change ’em speaking materialism.

The bulk of the article reviews empirical literature in anthropology and sociology, literature that shows just how contextually dependent the meaning of money is.  In some cultures, giving cash is a higher honor and more intimate than hand-knit mittens.  They conclude that since there is nothing inherent in money that attaches any particular positive or negative significance to it (contra Freud and Simmel on “filthy lucre”), we ought to think about how money has been constructed in the West.

If we can convince people there are sound empirical and moral reasons to change the meanings we attach to money and markets, let’s get right to it!  It is a really brilliant argument.  I’d love to see more libertarians and economists engaging the other side like this.

But I want to contend with the paper a bit.  They start with some definitional work, reviewing the arguments against markets.  There are material arguments against markets, like polluted river externalities or the inefficiencies that might result in a market for organs.  These are the traditional materialist complaints of economics and politics.  On the other hand, they want to separate out semiotic complaints about markets — complaints that pose markets themselves as a social force that degrades and debases objects.

I don’t see a difference.  Every one of the non-semiotic complaints that Brennan and Jaworski list derives from what is at bottom a semiotic concern: that markets run on and encourage selfishness.  People’s intuition, for better or worse, is that:

  1. Exploitation is the result of greed.
  2. Misallocation results from bosses who don’t care about employees, or because capitalists aren’t charitable.
  3. Selling people things that are bad for them is a result of carelessness and greed (the paternalistic complaint).
  4. Harmful externalities result from careless people not considering the knock-on effects of their actions.
  5. The selfishness of markets correlates to and probably generates other vices, like vanity or sloth or envy.

So, in my view, the entire enterprise of repugnant market research can be reduced to the semiotic question: “why do the lion’s share of cultures believe that markets and money taint sacred objects with the profanity of selfishness?”  Brennan and Jaworski exhort us to do “more work . . . on the psychology underlying semiotic objections to markets.”

I’ll try.

Our past was unimaginably violent relative to today.  If you think Republicans and Democrats are prone to think the other team are a bunch of sociopaths, you can only imagine what the mentality was when people routinely massacred one another’s tribes at dawn.  So, we’re understandably prone to think outgroups are selfish people who lack our ethics of reciprocity and community solidarity.  We think outgroups are evil.  Trading partners by definition come from outgroups.  Ergo, we associate markets, and their material totem, money, with selfishness.  By extension, we usually relegate low-status people to dealing with outgroups, to doing our trading and banking.  In European history, those people were Jewish.

Now, that conflation of outgroups with nasty selfishness is not entirely irrational.  When positive-sum trade between my ingroup and your outgroup breaks down, we probably will get nasty and hurt one another.  So people are right to sense the tension and conflict in negotiation, in the marketplace.  Economists mostly ignore that process.  Deals don’t break down in the economics textbook.  Best friends don’t sue each other and tank a successful business partnership, because rational agents recognize that trades are mutually beneficial.

If we recognize that people generally associate markets with greed, and maybe understandably associate greed with a lot of bad behavior, we can probably be more persuasive in enjoining people to experiment with markets.  That will mean telling a story where private insurers and for-profit hospitals aren’t callous, suspender-wearing fat cats.  It will mean telling a story where people running black markets in organs and babies in third-world countries actually do care about the customers they are serving.  And so on, with the bankers, with the bosses, with the drug dealers, with the children’s toy manufacturers.

Protesting and beating people over the head with supply and demand diagrams, calling their ideas “economic fallacies,” probably isn’t going to do the trick.




Words Do Hurt, But Debate Protects Victims

By Graham Peterson

Free speech defenders have made the point again and again — sticks and stones break bones but words can’t hurt you. You have the right to say what you want up to actually violent words like, “I’m going to shoot you.” We are obliged to protect people from physical harm, but we have no duty to protect them from psychic harm.

The trouble is, words do hurt. So opponents of speech enjoy the moral high ground. If you support free speech, you’re a meanie. Speech advocates need to deal better with that argument. It’s only becoming more common as the social justice movement, a movement that sees the world divided neatly into meanies and victims, grows.

The sticks and stones argument is popular, maybe, not for philosophical reasons, but because it’s expedient. It’s dismissive. “LOL @ ur safe space.” The opposition won’t be so simply dismissed. It believes that ideas are just as hurtful as real violence, or lead directly to it.  In this view, it is no metaphor to talk about some speech as “safe” and other speech as “unsafe.” Gay bashing becomes gay suicide. Misogyny, rampage shootings.

Let’s listen harder.

Think about what it’s like to be a victim of abuse, a person who feels her dignity withered, who has suffered violent ramifications from ideas. How would you, then, react to, “sorry, but no. Free speech is sacred.” It’s dogmatic and insensitive. And often these days, that kind of ideological insult is exactly what activists are complaining about. Advocates would do well not to assume the role of the oppressive meanie.

So, “if you ain’t bleedin’ you ain’t hurt,” is not the way to convince people that defenders of inquiry are on the side of tolerance and wellbeing.

Advocates can make a different case. They can freely concede the minor premise, that ideas are hurtful, and maintain the major, that speech is sacred. They can concede that ideas breed violence, that ideas damage people’s self esteem, or that they infect the subconscious minds of oppressors, such that oppressors inadvertently discriminate. That’s worth the cost. Because a sacred contest of ideas is still the best way to protect victims.

The critics are not wrong. Feelings and ideologies, it turns out, are real. Ideologies do have material and sometimes violent consequences.  Marxism caused mass starvation and bloodshed in 20th century Russia and China. Misogyny causes rape on 21st century college campuses. It was precisely the recognition of the gravity of ideas that compelled J.S. Mill to argue for an open contest. Ideas are the rage and insult behind the sticks and stones. They hurt.

So everyone here: Mill, speech advocates, speech critics — everyone agrees that ideas and speech are hefty, important things. Why drive one another away? What an opportunity! Speech advocates have to recognize, respect, and convey that they are listening, that there might be a kernel of legitimacy in left ideologies.  They just want to be listened to, too — that’s all free inquiry is.  That’s the classical liberal bargain.  You listen to me; I listen to you; we both walk away with different ideas (if only at their margins).

In giving up on, “ideas are just flimsy things anyway,” speech advocates can maintain the really important point, that ideas contests produce the best possible ideas. Debates over theories lead to debates over the best evidence to prove those theories.  Evidence gets better on both sides.  Confirmation biases get checked.  We learn to tolerate and respect and enjoy one another for—and not just in spite of—our differences.

At bottom, people who want to delimit speech these days are just trying to protect victims. But open inquiry is the best chance victims have of asserting their dignity and improving their outcomes.  That is the point we have to make: words do matter, so, so much, because free and open inquiry protects victims. The person who cares the most about victims should be the first in line to contend with bad ideas.


Why Are Graduate Students Miserable?

By Graham Peterson

Everybody knows graduate school is hard.  Graduate students complain-brag about how much work they have.  Shit Academics Say is a Twitter feed with 122,000 followers; more than half the jokes are about procrastination, guilt, and overwork.  Now of course, things that aren’t hard aren’t worth doing, and if you’re not terrified, you’re not trying hard enough.  But the academic workload isn’t all honor and self-deprecating jokes.

There is a gigantic problem with mental health in graduate school.  The statistics are startling.  One estimate says that a tenth of graduate students will contemplate suicide in a given year, and 60% feel consistently hopeless. Another says that 30% of graduate students are depressed.  Then there’s impostor syndrome.  Everybody, if you ask around, has impostor syndrome.

At some threshold of incidence, a mental health problem or syndrome becomes a feature of a social institution, not an individual one.  Graduate school is one such hell hole.  Training for any job should be hard.  But an institution correlated with routine hopelessness and a mudslide of collapsing confidence cannot reasonably point the finger at individual psychology.

At that, senior professors will smile and reminisce about how they had it hard too, and how scholarship is honorable for its difficulty.  Anointed and deserving of the custody of veritas, they are.  And we shall be too, should we submit and devote ourselves to the same altar. But no. Becoming a research professor was in fact much easier in the past than it is today.

Many professors spent most of graduate school reading around, having long conversations, planning a scholarly masterpiece.  Lots of them got jobs without having yet finished their dissertations.  When positions opened up, their advisors just called their friends and invited applications from the friend’s cabbage patch.  It’s hard to imagine 30% of people faced with such constraints being clinically depressed.

Since then, the demand for academic positions has outstripped their supply, and the price of getting a position has shot up.  The same structural forces that have been driving tuition up have driven up the entry fees required to get into the research club.  Now, in addition to the dissertation, students are expected to assemble top publications, to have written and taught courses, and to have applied for research grants.

What are those structural forces?  Well, universities keep donors donating and parents writing checks by keeping up their prestige, and prestige is a function of how many people universities turn away.  Thus, even though public demand for higher education and research has exploded, university supply hasn’t.  That means no explosion of research jobs.

So the only things exploding in graduate school are confidence, social lives, families, and checking accounts.

Tenured professors have no incentive to improve matters, so matters have not improved.* If one tenth of students contemplate suicide and quit, the line to replace them is endless. So professors reproduce the version of graduate training that they grew up on, the academic vision quest.  They keep portraying scholarship like a marriage or religious devotion, and we keep feeling like a bad husband or acolyte when we don’t meet lunatic expectations.

Graduate training and the research professoriate are broken.  Universities need to expand operations and find better ways to account for quality than prestige signaling.  And primary research probably needs to be divorced from teaching, so that it responds to demands for research, not for a limited number of jobs stamping adolescents with professional class credentials.

*To be fair, many have made a lot of effort to improve graduate training, both out of conscience and because it’s hard for them to place students.  But they cannot keep up with the structural tide.


The Mind Is Bayesian, Not Freudian

By Graham Peterson

Defensiveness, as I understand the term, is an emotion motivated by stored-up traumas.  Traumas drive people to react disproportionately to external events that are, in the current context, tangential or unrelated to those internal fears.  “You’re just being defensive,” as it goes.

Note (again) that that’s an unfalsifiable accusation.  Any defense against the accusation goes toward proving it true. “No, I’m not.”  “See, I told you that you were being defensive.” Psychoanalytic claims are accusatory ammunition — and are not measurable, checkable, scientific claims.  Moreover, they’re a recipe for division and paranoia.

But enough Freud bashing.  Let’s think about an alternative theory of mind.

I’ve known some defensive people in my life.  I know a guy who basically always got his way because challenging him, even on the small stuff, was asking for a fight.  That guy was also beaten — mercilessly — by his parents, until a foster parent took him in around age ten.  So is it fair to accuse him of being defensive?  I don’t think so.  The phrasing has all kinds of prejudicial weight.  It means he carries some burden or baggage in his subconscious that he is unable to access or control.  It means he’s irrational.

But let’s think harder.  To this guy, who has experienced routine and disproportionate sanctions for his behaviors, constant criticism and attacks from people who love him — to him the correct inference to make, conditioned on a signal like “X is getting critical with me,” is that a humiliating beating is coming.

If we think of people making perfectly rational predictions about the likelihood of future events, based on a probability distribution they’ve constructed from memories and prior information, we have a Bayesian theory of mind.  Defensiveness, in this theory, is merely a set of incorrect inferences one makes after switching contexts.  Mistakes in inference and behavior come from external changes in the social and physical environment.
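To make the picture concrete, here is a minimal sketch of that inference, with every number invented purely for illustration.  Two people receive the same signal, “X is getting critical with me,” but hold different priors about whether criticism precedes an attack:

```python
# A sketch (my illustration, not the article's) of "defensiveness" as
# ordinary Bayesian inference.  All probabilities below are hypothetical.
from fractions import Fraction

def posterior_attack(prior_attack, p_crit_given_attack, p_crit_given_benign):
    """P(attack | criticism) by Bayes' rule."""
    p_crit = (prior_attack * p_crit_given_attack
              + (1 - prior_attack) * p_crit_given_benign)
    return prior_attack * p_crit_given_attack / p_crit

# Someone beaten after most criticisms learns a high prior; someone with
# a safe history learns a low one.
abused_prior = Fraction(8, 10)    # 80% of past criticisms preceded violence
typical_prior = Fraction(1, 100)  # 1% for a person with a safe history

# Same likelihoods, same new signal ("X is getting critical with me"):
for name, prior in [("abused", abused_prior), ("typical", typical_prior)]:
    post = posterior_attack(prior, Fraction(9, 10), Fraction(2, 10))
    print(name, float(post))
```

Same evidence, wildly different posteriors (roughly 95% versus 4%).  The “defensive” reaction is just Bayes’ rule applied to a brutal training set.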

The mind, in this view, is a machine built to interact with an environment and other people, not a machine built to manage its own internal logic, relative to some prehistorically given drives and urges.  There are three advantages to the social, Bayesian model of mind, as opposed to the individual, psychoanalytic theory of mind.  The first is scientific, the second is therapeutic, and the third is normative.

First, a Bayesian theory of mind is scientifically attractive.  It makes generalizable and mathematically tractable, probabilistic predictions about how people will react externally.  Both stimulus and response are externally observable, and are connected by an internally unobservable mechanism that makes ordered predictions.  Observing the order of responses, with respect to the order of stimuli, reveals the logic of the unobservable mechanism.

The Freudian theory of mind is, on the other hand, literally untestable storytelling about a tragic struggle for control among characters inside people’s heads.  The Bayesian theory of mind is falsifiable; the psychoanalytic theory of mind is not.

Someone either learns a probability distribution over events, and responds predictably to draws from that distribution, or they don’t.  The only way to observe the narrative struggle among the alleged characters in the Freudian story is to ask people to openly reflect.  But remember that the entire theory is predicated on the idea that they are lying to themselves and to us.  “You’re in denial.”  “No, I’m not.”  “See.”  “You’re being defensive.”  “No, I’m not.”  “See.”  So there go our external observations and verifiability.

Second, a Bayesian theory of mind recommends a dramatically different treatment regimen for folks who are having trouble.

If people are actually Bayesian, then we wouldn’t want to put people on couches and dredge up their subconscious, hoping for release.  We wouldn’t treat therapy like a tabloid exposé of internalized drama.  We wouldn’t try to catch them in their own bullshit or tell them to catch themselves.  (I can’t imagine a more destructive thing to do to someone with low self esteem, who is afraid, than give her even more reason to degrade her faith in herself.)

We would talk about their histories and memories in order to establish the frequency with which patients (reasonably) expect to experience traumatizing (and joyful!) events.  We would then discuss with a patient whether or not those expectations, given the current environment the patient is in, are in fact reasonable.  If not, we would understand that he is making mistakes in estimating probabilities, not lacking some amorphous “strength” or “maturity” to “manage” his internal drives.

So we would encourage him to go out into the world and take more samples of current context, in order to update the information on his priors, not sit at home and pick himself apart, second guessing himself, hoping to reveal and unlock a big box of built up trauma.  This is in essence what I understand Cognitive Behavioral Therapy to be, and is probably why it works so well.

People recognize that their fears (priors) are irrational (incongruent with their current environment), and go out and get new experiences to retrain themselves (their priors) to react differently (make better predictions) about current stimuli.
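That retraining loop can be sketched numerically.  Assume, purely for illustration, a simple Beta-Bernoulli model of the belief “criticism leads to harm”: each new safe experience is one more sample dragging the estimate down.

```python
# Hypothetical sketch of prior-retraining, not a clinical model: a Beta
# prior over "criticism leads to harm," updated by new, safer samples.
def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Trauma-shaped prior: 8 harmful outcomes, 2 harmless ones observed.
alpha, beta = 8, 2
history = [beta_mean(alpha, beta)]   # starts at 0.8

# Twenty new experiences in the current, safer environment: no harm follows.
for _ in range(20):
    beta += 1                        # each safe sample counts against "harm"
    history.append(beta_mean(alpha, beta))

print(round(history[0], 2), round(history[-1], 2))  # prints: 0.8 0.27
```

The fear doesn’t get “released”; it gets outvoted by fresh data.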

Finally, the advantages are normative.  The Bayesian theory of mind paints a much nicer picture of people.  It assumes that people are essentially rational and intelligent, not irrational animals trying to keep a lid on their erections and rage.  It takes for granted that people are usually in equilibrium with their environments, responding proportionally and intelligently to events.

In its role as a theory that people use to anticipate and interact with one another, the Bayesian theory generates empathy and understanding.  It encourages us to try to understand the priors that others are working with, all slightly different, but for understandable reasons.  We stop seeing one another as nuclear reactors of internal power struggles, waiting to “take out” our “built up” issues.  And we start believing in one another’s, on balance, good intentions.

We stop being paranoid of one another’s subconscious demons and traumas, and most importantly — we stop being paranoid of our own subconscious demons and traumas.  We realize that we are built as inference machines, linked up to a social and physical environment.  We realize that our minds are social, that we reason together in groups, about one another, and that we solve problems with one another.

The Bayesian theory of mind is hopeful, charitable, and testable.  The psychoanalytic theory of mind is an untestable, derogatory cudgel.

Psychoanalysis Is An Awful Theory of Mind

By Graham Peterson

Psychoanalysis is an awful theory of mind, and we should be suspicious of claims that rely on it.  When we hear about someone, or some group, compensating for something, or about their complex, or envy, or more directly, about their narcissism and ego and phobias and so forth, we should think twice.

Note that psychoanalysis is often just a cudgel.  We use psychoanalytic theories to “pathologize” people and groups, and the pathology is often baseless.  Of course all theories can be used to bad effect, like, say, biology and eugenics, but psychoanalysis hasn’t had many countervailing good outcomes in the way gene therapy has.

The first premise of most psychoanalytic theories is that there exists out there in the world some normal level of X.  A normal number of possessions.  A normal amount of emotional security.  Someone who finds themselves with less than that (arbitrary and exogenously imposed) normal level of X experiences a lack, and sets out to compensate.  This person experiences envy and inferiority.

Maybe people do overcompensate, but compensate over what?  What sets the level of normal?  Is it something we all just know and agree on?  The psychoanalyst?  It looks like the theory is less a device to discover what the level of normal is than something people — usually the dominant group of people — use to assert what should be normal.

Now, there may be a way to establish the level of normal.  Maybe we can go out and take an average over some behavior.  A mode.  A median.  We could then establish how much a person or group deviates from that level.  Past some threshold, maybe they experience a lack and need to compensate.
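The procedure the paragraph imagines is easy to state: fix the level of normal from a sample first, then measure deviations against it.  Here is a minimal sketch; the data and the two-standard-deviation threshold are entirely hypothetical.

```python
# Hypothetical sketch of establishing "normal" before flagging deviance:
# compute a sample norm, then measure each observation against it.
from statistics import mean, stdev

# Hours per week some group spends on a behavior (invented numbers).
sample = [4, 5, 5, 6, 6, 6, 7, 7, 8, 30]

mu, sigma = mean(sample), stdev(sample)

def z_score(x):
    """Deviation from the sample norm, in standard deviations."""
    return (x - mu) / sigma

# Only now, with "normal" fixed in advance, can a threshold mean anything.
outliers = [x for x in sample if abs(z_score(x)) > 2]
print(outliers)  # → [30]
```

Whatever its other merits, this ordering is the opposite of the one the next paragraph describes.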

But psychoanalysis works in reverse.  It starts at abnormal, asserting that X is abnormal, and then defines normal in terms of abnormal.  In this view, normality chases the psychoanalyst around, trying to keep up with the list of things she has claimed are perverse.  Psychoanalysis is a fancy grammar for stamping people and groups as moral deviants, and clearing others of the charge.

As much is pretty obvious if you go back and read original psychoanalysis.  The essay On Narcissism, by Sigmund Freud, starts with the bald assertion that masturbation is disgusting, that it is prototypical of all self love, that homosexuals suffer disproportionately from it, and that it is thus the basis of narcissism.  That sounds morally distasteful to modern people, but the problem is analytical.

The theory doesn’t tell us how to establish the normal level of self love that would allow us to measure deviations from it, and diagnose narcissism beyond some threshold.  In fact it says the opposite.  It says, “go out and find people whom you believe are behaving badly; attribute their behavior to self love.  From that you can derive the normal level of self love by comparing.”

It’s circular: one derives narcissism by asserting narcissism.

The same goes for phobias.  What is a phobia?  It is the assertion that a person or group is more afraid of something than they should be.  The theory cannot tell us how to determine the level of fear that is normal, allowing us to go out and measure deviations from that norm, because the norm itself is defined in terms of the phobia.  It’s again circular: one derives phobias by asserting phobias.

Psychoanalytic theory in this way has been used throughout its history to malign people.

First it was women, envying penises.  Then via Merton’s theory of anomie it was poor blacks, envying middle class consumption.  Lately it’s been homophobes and masculinity.  No matter how many times we decide that theorists have, once again, acted as society’s hit men and legitimated routine prejudice, the theory just won’t die.  Because it is extremely attractive.

People love to shame and ostracize one another.  It’s a tribal thing, and we’re still (often for the worse) tribal people.  Psychoanalytic theory gives us a very legitimate and Official Sounding basis on which to do so.  It allows us, in the first instance, to make ex cathedra assertions about the nasty nature of deviants and outgroups, and in the second instance, relieves us of any responsibility to prove those claims with external evidence.

Psychoanalysis relies on assumptions about the subconscious mind, which is by definition unobservable, even to the person who possesses it.  So psychoanalysis allows us to make moral claims about people’s character that are untestable against those people’s revealed behaviors, all the while sounding like we’re not making moral claims at all.  Moreover, it postulates that people are secretly afraid, insecure, power hungry, and envious.

It’s the apotheosis of pseudoscience, it’s an engine of paranoia and prejudice, and we should just stop it.

The Romantic Bias Toward The Past, Again

By Graham Peterson

It turns out that the ancient Egyptians probably hauled all of those gigantic stones around by wetting the sand, which cuts friction.  We’ve got evidence two ways.  Someone ran some careful scaled-down experiments, with little blocks and sand.  And we have a picture of the Egyptians pouring water on sand in front of a giant statue.  How could it have remained a mystery?  Interpretation.



Reports Bonn, the principal investigator: “Egyptologists had been interpreting the water as part of a purification ritual, and had never sought a scientific explanation.”

What he means more precisely is “[material] explanation,” and the material blind spot is a much bigger problem in anthropological, sociological, historical, and archeological research than a few drawings in Egypt.  Due to the proliferation of a lot of bad theory, say Rousseau’s theory of noble savages, or the colonial pretense of early missionaries, we still favor infantilizing theories of the natives, and of people in history.

For decades the above drawing was interpreted, apparently without question, as a purely symbolic display.  But why wouldn’t we think that, like us, the Egyptians wrote user manuals for their inventions?  Why wouldn’t they want to represent their accomplishments accurately?

To be fair, it is a depiction of a statue that is clearly religious.  But when an electrician wires a church, he doesn’t bless his fish tape.

War in indigenous societies was interpreted similarly, symbolically — like a child’s game — for the longest time.  Theorists of war were proud of the magnificent organization of war, the superiority of imperialism, and so on.  They wouldn’t have defamed civilized war by comparing it to 40 or so naked men running at each other with spears.  And yet on closer review, it turns out the comparison is the best one out there.

Regardless of multiculturalism, regardless of the piling up of ethnographic, archeological, and historical evidence, we still haven’t made much progress ideologically.  Some portion of people want to believe, out of ethnocentrism, that our ancestors were little kids.  Some portion want to believe it out of pity or cultural sensitivity.

But what about starting with the idea that the natives and our ancestors were probably a lot like you and me, with most of the same motivations, needs, cognitive processes, and social institutions?

Lawrence Keeley and Steven Pinker have caused a lot of consternation among the cultural sensitivity crowd by pointing out how violent and materially motivated indigenous people are and were.*  The Egyptology discovery above won’t blow as much ideological hair back, but the anti-materialist bias is worth considering more deeply.  The myth of our magical and infantile past, full of symbols and rituals, has to go.

People have for the majority of history broken their backs producing food, killing and stealing from each other, sometimes trading and inventing, and raising children.  We should expect their artifacts and symbolic systems to reflect the fact, not to serve as an existential reflection pool for the vanity and pity of modern intellectuals.

*I suppose the cultural sensitivity crowd wants to believe that rapacious imperialism is modern, something that rich white men invented as a warm up for bad TV and minimum wage jobs.

When Safety And Medicine Become Weapons

By Graham Peterson

Since Christina Hoff Sommers gave a recent talk at Oberlin, people have been wondering why activists call ideas they don’t like “unsafe,” or why they need “safe space” shelters from offensive speech.  Well, it’s an old trope, equating dangerous speech with physical danger. Jonathan Rauch belabored at length in 1993 how opponents of speech often invoke the violence metaphor to get actual, real, legalistic violence on their side.

But the new popularity of safety dialogue isn’t a conniving political maneuver.  These students are completely sincere.  They are convinced that criticism of their movements enables the harassers they oppose, harassers who drive transgender people to suicide and make would-be rapists feel more comfortable.  They are completely sincere that there’s a straight line from criticism of feminist ideas to violence against women and non-conforming genders.

Criticism of feminism does not often explicitly threaten women (although some small portion of idiots on the internet send death and rape threats to feminists — thanks idiots).  But nevertheless, the idea that criticism of feminism leads to violence, and is therefore itself violence, is completely sincere and must be grappled with if we’re going to restore a reasoned dialogue.

What’s interesting — and extremely effective — about the recent violence metaphor is that it now has the authority of the medical community on its side.  Psychiatrists and psychologists are powerful people who, mainly, decide for us who is and is not a morally culpable deviant.  Thus advocates have borrowed from the psychiatric lexicon in order to borrow its authority.

Not all of the PTSD and safety dialogue is just rhetorical borrowing.  A substantial portion of the campus anti-rape advocacy community are themselves actual victims.  It is altogether fair that those victims should receive more authority than they have, given the way they’ve been historically dismissed.  If victims can heal wounds and reestablish their dignity through psychological treatment — let’s have more of that.  Indeed let’s have more campus resources dedicated to it.

But activists have diagnosed themselves and one another with PTSD, and invited anyone who sympathizes with them (whether that person has been violently traumatized or not), to diagnose themselves with PTSD.  Ergo, the definition of the disorder has been expanded to include anyone who feels psychic offense, or threatened by ideas that (putatively) lead to violence.

In order to understand the importation of PTSD rhetoric into the campus rape movement, we have to understand the history of the LGBTQ movement, and how they made friends with psychiatrists.  They weren’t always friends.  Psychiatry once considered homosexuality pathological, garden variety moral deviance (see Freud’s essay On Narcissism).  But psychiatrists eventually decided that gay people had no control over their sexual desires, and that the desires belong to a biological — not lifestyle — domain.

Voilà, gay rights: “give gay people freedom of choice because they have no choice in their desires.”  It’s a paradoxical legal and philosophical argument, if you think about it.  But Born This Way has been wildly effective, and it is empirically grounded.

The world would probably be better for gays if their choices were dignified as adult and free choices regardless whether their urges are inborn.  For example, many people felt recently that, “ok, gay people can’t control their sex drive so let them do who they will, but they can control their marriage drive because that is social and sacred, so we draw the line at marriage.” Denoting gays Official Victims of biological necessity helped the lobby for administrative and legal protections, but it enshrined their second class citizenship.

Because of feminism’s alliance with the LGBTQ movement, which has a recent and wildly successful alliance with psychiatry, we can’t really blame campus rape activists for borrowing from the psychological lexicon, inflating the definition of PTSD and psychic harm to include any and all criticism of their movement.  It lends unassailable rhetorical authority to their claims, and in their view, brings into the fold all of the untold billions of victims who have so far been ignored and silenced.  It’s really genius.

But if in the first act activists and victims on campus win the point, they will in the second act lose the debate, signing up for and reifying their own permanent second-best.

Did Whites Steal Rock ‘n’ Roll From Blacks?

By Graham Peterson

Jim Morrison ain’t the final word on Rock ’n’ Roll history, but he’s a good start. In the clip below, Jim opens up a can of forgotten, but not rotten, Rock ’n’ Roll history — its white roots.

The view that Rock ‘n’ Roll was ripped off from black Rhythm & Blues is, more or less, the predominant one. It is not uncontested, as Wikipedia admits. But if you grew up on the left, or around musicians and heads, you probably learned that Rock ’n’ Roll is blood money from yet another Great American Swindle. Jim agrees; of course Rock ‘n’ Roll evolved out of The Blues. But it also evolved out of early Country music, out of Bluegrass and Folk — white genres.

It’s an important point, not because white power, but because the white details of Rock ‘n’ Roll history got left on the shelf for a bad theory. Theory is a flashlight that tells you where the goods are. Unfortunately, critical theory has bad batteries and a narrow beam.

Without belaboring Horkheimer et al., the idea in critical theory is that culture fits a metaphor of exploitation, of theft. Culture is just another expression of colonial imperialism. Cultures get invaded and assimilated into a homogeneous mass. It follows from this vision that black music got co-opted and assimilated into white music, in order to keep blacks down. That’s cultural appropriation. But, like Jim says, some of the main ingredients in Rock ‘n’ Roll were imported from Europe, through whites. Critical theory has no place for these folks.

The Europeans who brought bluegrass and folk to the United States were Scotch-Irish immigrants who settled across Appalachia. At the time, Scotland and Ireland were backwaters that had a reputation for the clan, the bar fight, and the broken accent. When they emigrated to get away from British exclusion, they brought instruments. And some fantastic music. You can hear those traditional Scotch-Irish influences still reverberating in modern Bluegrass, Folk, and Country — it’s uncanny. Fiddles. 6/8 time signatures. Twangs and bent notes. Line dancing. Poetry about poverty and misfortune.

Scotch-Irish Americans in Appalachia have always been, and unfortunately still are, largely poor. They didn’t get into singin’ about broke down Ford trucks by exploiting anyone — just like blacks didn’t get into singin’ about the blues by exploiting anyone. So, naturally, because Appalachian whites and blacks shared the same fate — and often the same holler — they mixed cultures. Then came Rock ’n’ Roll. And when kids from nice white suburbs started buying it, a few poor whites and blacks got their American Dream.

No doubt, the social exclusion of the ’50s and ’60s had its routine influence on Rock ’n’ Roll. The critical theoretic swindle story has some merit. Black musicians, who played the same tunes as whites, were not allowed to play the same stages. Black artists got squeezed out of radio rotations by racist DJs. And so on. But Bo Diddley was no slouch. He and a range of other black artists made it big. The racism in Rock ’n’ Roll history is arguably a sideshow to the main stage, where blacks and whites were mixing to everyone’s benefit.

Cultures have always sampled and remixed from each other’s stuff. Take for instance the remixes that came out of Celtic Western Europe in the 2nd century BC. Archeological digs have revealed that the Celts imported art from Greece (that’s a long trip!), and that they eventually made their own Greek-inspired art. Here again the power and exploitation thesis fails.

The Celts were poor. The Greeks were rich. The Celts were a fledgling, diffuse band of tribes. The Greeks were a militarily and culturally superior collection of city-states. Despite their differences in power, it was the poor Celts who adopted the rich Greeks’ art. They traded artifacts and traditions peacefully, and to their mutual betterment.

Cultural mixing is as old as dirt, or rather, as old as trade. It happened across powers when timid Celts met well stocked Greeks in Europe. It happened across races when dirt poor immigrants met dirt poor blacks in Appalachia. And it happened across classes when poor Rock ’n’ Roll musicians played for rich city slickers across America.

We need to think harder about where cultures come from. Cultural appropriation, the swindle story, can be and sometimes is a way that upper-class people reproduce their status. But far more often, the borrowing, imitating, trading, and selling of cultures has been a way people make and expand their communities, peacefully. It’s a beautiful thing, and we should, while remembering some sad missteps, celebrate cultural trade as a testament to a liberal society.

Rock ’n’ Roll ain’t a black or white thing. It’s a black and white thing.

Is Math or English Harder to Theory With?

By Graham Peterson

Fabio Rojas and crew got into a discussion on Twitter about whether mathematical theory in social science is more difficult than verbal theory, or as Fabio summed it up:

fixed point theorems

Everyone in the thread agreed that dense verbal theory is much harder to read than mathematical theory.  But I think they’re about the same.  (Andreas Glaeser’s opinion on Foucault is worth mentioning here [insert arms like a symphony]: “you think to yourself, ‘now this is what language can do.’”)

We have a lot of folk assumptions about the difference between “verbal” and “formal” theory in social science, and too much violence between their practitioners, but very little discussion of their actual differences or advantages.  Note quickly: both verbal and mathematical theory are “formal.”  They both aim to generalize formal structures of logic, so I’m ditching the adjective “formal” and will refer to “mathematical” theory henceforth.

Bad verbal theory suffers from the same problems bad mathematical theory does.  If you ever get mad enough at mathematics that you read Why the Professor Can’t Teach, a criticism of mathematical pedagogy and research by Morris Kline, you’ll notice that most of the problems he identifies are exactly analogous for verbal theory.  Kline laments mathematics that generalizes for the sake of generalization, and he laments the presentation of general proofs without intuition and examples.

These are, to my eye, exactly the things that make Foucault et al. extraordinarily difficult to read.  Concepts get generalized for their own sake, until the exercise becomes so meta-theoretic it is only interesting to a handful of specialists, and applicable to nothing.  It might be the case that the material world is merely a realization of the world of ideas, but I really doubt that we’re learning much from “reimagining neoliberal ontologies.”

And where are the examples?  You just know that when you’re reading Bourdieu, there’s some vignette of piano lessons dancing around in his head, while he’s drawing sweeping generalizations about cultural capital.  And he’s probably generalizing from some children’s game where one gains and loses power, while he’s talking about misrecognized exchanges of subconscious power. But without making those examples explicit, the reader cannot extrapolate to generalities in the same way Bourdieu has.

Good theorists present their ideas like recipes or step-by-step instruction manuals, not assertions of propositions and generalities.  That is, good theorists will walk you through exactly those steps they took (usually starting with a rudimentary kernel, case, or example) to arrive at a generality, rather than presenting themselves as if all their brains just trade in dancing abstractions.  We are, though, both mathematical and verbal theorists, tempted to do the opposite.

We reason inductively from one thing or another until we think we’ve found something general.  Then we turn around and assert the generality of that proposition, and try to prove it deductively.  We (sometimes) eventually present the case or example as if it’s just a convenient afterthought or demonstration, when in fact that kernel drove our logic the entire time!

If we can drop the pomp and pretense, and focus on communicating our thoughts in the way that we actually arrived at them, we will have much clearer and easier to read mathematical and verbal theory.

Also note that good mathematical and verbal theory do pretty much the same things.

Creativity in mathematical and verbal theory is metaphorical and analogical, not deductive.  That is, mathematical creativity comes from (say) writing down a telescoping equivalence into a proof to clean it up, or recognizing a dual from a different subfield.  In verbal theory, analogical creativity comes from (say) writing down an epidemiological metaphor in a new context, like crowd dynamics.
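To make the telescoping move concrete with a standard identity (my illustration, not one from the thread): the trick is to recognize that each term of a messy sum can be rewritten as a difference, so that everything but the endpoints cancels:

```latex
\sum_{k=1}^{n} \left( a_{k+1} - a_{k} \right) = a_{n+1} - a_{1}
```

Spotting that a sum has this shape is the analogical leap; the deduction afterward is trivial.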

A creative thinker transposes the formal structure of an argument to a domain where she intuits the model will help better comprehend the situation, than whatever story is currently attached.  Full stop.  There is no difference between doing so with a fixed point theorem, entropy function, language game, or model of mutually constitutive social interactions.

Or consider that Bourbaki symbols and the Greek alphabet are not always the most precise and compact language in which to present an idea.  We have intuitions because they are computationally efficient, and it turns out that in groups, intuitive Bayesians make lots of incredibly good predictions.  It is a very strange logic and practice that justifies turning a discussion of expected utility into a derivation of the expectation operator from primitives.

We have and use grammar in natural language that defines hypotheticals and probabilities all the time, “could, should, would, may, might, ought,” and we have and use grammar in natural language that defines quantities and their relations all the time, “most, more, just as much as, lots.”  For many problems, replacing these terms with mathematical symbols would be cumbersome, obfuscatory, and useless.

Neither mathematical nor verbal theory can be reduced to some historical turf war between continental social theory and economics, or to some other nonsense about professional identities and territories.  We should rise above these petty disagreements and give young theorists a better guide to which lexicon is useful in which situations, because neither natural language nor mathematics can accomplish all of the goals of theory across all domains.

The Rhetoric of Direct and Indirect Speech

By Graham Peterson

Indirect, ambiguous, vague speech is incredibly common in formal arguments, and it is incredibly ineffective at persuading anyone.

I think most of us already agree with that statement, because there are standard and good arguments against ambiguity.  It can signal that the author does not herself know exactly what she is arguing.  It can signal that the author himself is purposefully obfuscating his meaning, trying to be tricky.  It can signal that the author is overgeneralizing, without thinking hard about and looking hard at the issue in front of her.

But I want to extend the discussion, and note here two particular kinds of indirect speech, and their use in formal writing.  By indirect speech here, I mean little hinting and ambiguous comments that make inexplicit reference to a literature, an ism, a school; I mean large, categorical, ex cathedra assertions with strings of citations tacked on; I mean jargon that only loosely references classes of stylized findings and literatures.

Note that the fact that someone is being ambiguous or indirect isn’t necessarily a sign that he is an unfocused idiot.  Indirect speech is really useful, even (trigger warning) rational.  Steven Pinker points out in an article about it that it’s a primary way we avoid conflict.  By only alluding to what one wants, or is asserting, and allowing other parties to interpret one’s statement in multiple ways, one has recourse to run to the least offensive of its interpretations, and can plausibly deny that one intended the unfavorable one.

Additionally, indirect speech helps us maintain in-groups.  Sarcastic jokes are, I think, the best example of this phenomenon.  I know Janet hates opera, and she knows that I know that she hates opera.  It’s tacit and common knowledge between us, part of the mutual constitution of our friendship.  So when she says she has a date and I ask her which opera she’s going to, we both smile and chuckle, reassured that we have a common bond.  Full-blown sarcasm isn’t common in formal writing, but wink-nod comments are.

These otherwise perfectly reasonable uses of indirect speech lead to an unpersuasive mess in formal arguments.

First, the in-grouping mechanism of indirect speech.  When I base my argument on citations, jargon, and isms, instead of direct explication of the claims I am making, I convey to my reader, if she is an outsider, that she is in the company of experts and should just trust whatever ex cathedra assertions I make.  If my reader is an insider and well familiar with the common knowledge I am only alluding to, then I should ask myself why I’m arguing at all.

Whether the reader is an insider or an outsider, there is no argument, just the authority supposedly conveyed by disposition and in-group boundary keeping.

Now for the ambiguity-as-conflict-avoidance mechanism of indirect speech.  When I base my argument on diffuse citations to ginormous literatures, histories, or intellectual categories, I allow for a lot of ambiguity in interpretation.  That makes my claims unassailable, because nobody really knows exactly what I’m claiming, and I’m free to hedge, dodge, and qualify my way out of making an actual claim or demonstrating it with evidence.

People tend to accuse one another, regarding ambiguity, of “purposeful obfuscation,” but I doubt that the cynical interpretation is actually what’s going on in most cases (except for maybe a few postmodern authors who get off on playing games).  People generally want to avoid conflict with one another; intellectual hierarchies and territories are wooden and violent; and being purposefully ambiguous is a great way to avoid offending territorial babies.

So here we have, I think, a little sociology of good writing.  Bad writing uses indirect speech to reference the authority of in-groups, and it uses indirect speech to avoid crossing boundaries between in-groups and out-groups.  Let’s stop it, and just have an adult conversation about difficult topics, saying exactly what we mean.