Would You Rather Be Rich in the Past or ‘Comfortable’ Today?

By Kindred Winecoff

Scott Sumner:

In a recent post I suggested that one could argue that the entire increase in per capita income over the past 50 years was pure inflation (and hence that real GDP per capita didn’t rise at all.) But also that one could equally well argue that there has been no inflation over the past 50 years. The official government figures show real GDP/person rising slightly more than 150% since 1964, whereas the PCE deflator is up about 6-fold. …

Here’s one thought experiment. Get a department store catalog from today, and compare it to a catalog from 1964. (I recently saw Don Boudreaux do something similar at a conference.) Almost any millennial would rather shop out of the modern catalog, even with the same nominal amount of money to spend. Of course that’s just goods; there is also services, which have risen much faster in price. OK, so ask a millennial whether they’d rather live today on $100,000/year, or back in 1964 with the same nominal income. Recall the rotary phones and bulky cameras. The cars that rusted out frequently. Cars that you couldn’t count on to start on a cold morning. I recall getting cavities filled in 1964, without Novocaine. Not fun. No internet. Crappy TVs, where you have to constantly move the rabbit ears on top to get a decent picture. Lame black and white sitcoms, with 3 channels to choose from. Shorter life expectancy, even for the affluent. No Thai restaurants, sushi places or Starbucks. It’s steak and potatoes. Now against all that is the fact that someone making $100,000/year in 1964 was pretty rich, so your social standing was much higher than that income today. So it’s a close call, maybe living standards have risen for people making $100,000/year, maybe not. Zero inflation in the past 50 years may not be right, but it’s a reasonable estimate for a millennial, grounded in utility theory. In which period does $100,000 buy more happiness? We don’t know.
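As a rough sketch of the official arithmetic Sumner is pushing against, the figures in the quote (a roughly 6-fold rise in the PCE deflator and slightly more than 150% real growth per person since 1964) imply the following; the rounding and the little Python calculation are mine, not Sumner’s.

```python
# Back-of-the-envelope conversion using the figures quoted above:
# the PCE deflator is up roughly 6-fold since 1964, and real GDP per
# person is up slightly more than 150% (i.e., multiplied by ~2.5).
PCE_DEFLATOR_RATIO = 6.0   # approx. 2014 price level / 1964 price level
REAL_GROWTH_FACTOR = 2.5   # approx. real GDP per capita, 2014 / 1964

nominal_income = 100_000

# What $100,000 of 1964 money buys, restated in 2014 dollars.
in_2014_dollars = nominal_income * PCE_DEFLATOR_RATIO   # ~$600,000

# What $100,000 of 2014 money is worth, restated in 1964 dollars.
in_1964_dollars = nominal_income / PCE_DEFLATOR_RATIO   # ~$16,700

# Implied nominal growth per person: real growth times the price rise.
nominal_growth = REAL_GROWTH_FACTOR * PCE_DEFLATOR_RATIO  # ~15x

print(f"$100,000 in 1964 is about ${in_2014_dollars:,.0f} in 2014 dollars")
print(f"$100,000 in 2014 is about ${in_1964_dollars:,.0f} in 1964 dollars")
print(f"Implied nominal income growth since 1964: roughly {nominal_growth:.0f}x")
```

On the official numbers, in other words, the 1964 offer is worth about six times the 2014 one in measured purchasing power; the catalog comparison is Sumner’s way of asking whether that multiple survives contact with the actual goods and services on offer.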

I think if we really don’t know the answer to this question, then it’s only because happiness is subjective. To me it’s obvious that a $100,000/year salary is worth more today than it used to be. For one thing, in 1964 tax rates in basically every Western economy were absurdly high, so that $100,000 would really be somewhere between $10,000 and $30,000 after tax. George Harrison wasn’t exaggerating; how would you like to live in a country where your best artists and creators were forced into (or simply chose) tax exile?

But let’s leave that aside for now. In 1964 a $100,000 salary would make you an elite, but your real income would actually be much smaller than that because of all of the 2014 goods you could not purchase at any price. Sumner runs many of them down. The point is that $100,000 is still enough to live quite well in this country — even in the expensive cities — and the range of choice has exploded, with many of the modern choices now coming at very low cost.

Let’s not forget that politics was quite different in 1964 as well: segregation persisted, the Cold War was raging, and even in the U.S. the “elite” were defined as much by their pedigree as by their income. We weren’t far removed from McCarthy, and we were in the midst of a succession of assassinations of American political leaders and overt revolutionary threats in many Western societies. No birth control, no abortion, few rights for women and homosexuals in general. Being an elite in that world would likely feel very uncomfortable, and of course this blog (and essentially all media I consume) wouldn’t exist. So for me 2014 is the obvious choice.

Tyler Cowen has a more interesting question:

But here’s the catch: would you rather have net nominal 20k today or in 1964? I would opt for 1964, where you would be quite prosperous and could track the career of Miles Davis and hear the Horowitz comeback concert at Carnegie Hall. (To push along the scale a bit, $5 nominal in 1964 is clearly worth much more than $5 today nominal. Back then you might eat the world’s best piece of fish for that much.)

I’m still not sure. $20k/year back then wouldn’t be enough to make you very well off, and the marginal cost of culture consumption today has sunk almost to zero. Was Miles Davis really so much better than anyone working today? For everyone in the world who does not live in NYC, is it better to be able to watch his concerts on YouTube now, and on demand, than not to have seen them at all? Lenny Bruce was still active in 1964 but almost no one ever saw him (for both technological and political reasons). I might still take the $20k today, and I’ve lived on less than that for my entire adult life until last year, so this is an informed choice. But I agree that it’s a much more difficult decision.

It is an interesting question, mostly because it reveals what people value most. It’s a mutation of the “veil of ignorance”. So what would you choose?

14 Reasons Susan Sontag Invented Buzzfeed!

By Seth Studer

If you’re looking for a progenitor of our list-infested social media, you could do worse than return to one of the most prominent and self-conscious public intellectuals of the last half century. The Los Angeles Review of Books just published an excellent article by Jeremy Schmidt and Jacquelyn Ardam on Susan Sontag’s private hard drives, the contents of which have recently been analyzed and archived by UCLA. Nude photos have yet to circulate through shadowy digital networks (probably because Sontag herself made them readily available – Google Image, if you like), and most of the hard drives’ content is pretty mundane. But is that going to stop humanists from drawing broad socio-cultural conclusions from it?

Is the Pope Catholic?

Did Susan Sontag shop at Sephora?

Sontag, whose work is too accessible and whose analyses are too wide-ranging for serious theory-heads, has enjoyed a renaissance since her death, not as a critic but as an historical figure. She’s one of those authors now, like Marshall McLuhan or Norman Mailer, a one-time cultural institution become primary text. A period marker. You don’t take them seriously, but you take the fact of them seriously.

Sontag was also notable for her liberal use of lists in her essays.

“The archive,” meanwhile, has been an obsession in the humanities since Foucault arrived on these shores in the eighties, but in the new millennium, this obsession has turned far more empirical, more attuned to materiality, minutiae, ephemera, and marginalia. The frequently invoked but still inchoate field of “digital humanities” was founded in part to describe the work of digitizing all this…stuff. Hard drives are making this work all the more interesting, because they arrive in the archive pre-digitized. Schmidt and Ardam write:

All archival labor negotiates the twin responsibilities of preservation and access. The UCLA archivists hope to provide researchers with an opportunity to encounter the old-school, non-digital portion of the Sontag collection in something close to its original order and form, but while processing that collection they remove paper clips (problem: rust) and rubber bands (problems: degradation, stickiness, stains) from Sontag’s stacks of papers, and add triangular plastic clips, manila folders, storage boxes, and metadata. They know that “original order” is something of a fantasy: in archival theory, that phrase generally signifies the state of the collection at the moment of donation, but that state itself is often open to interpretation.

Microsoft Word docs, emails, jpegs, and MP3s add a whole slew of new decisions to this delicate balancing act. The archivist must wrangle these sorts of files into usable formats by addressing problems of outdated hardware and software, proliferating versions of documents, and the ease with which such files change and update on their own. A key tool in the War on Flux sounds a bit like a comic-book villain: Deep Freeze. Through a combination of hardware and software interventions, the Deep Freeze program preserves (at the binary level of 0’s and 1’s) a particular “desired configuration” in order to maintain the authenticity and preservation of data.

Coincidentally, I spent much of this morning delving into my own hard drive, which contains documents from five previous hard drives, stored in folders titled “Old Stuff” which themselves contain more folders from older hard drives, also titled “Old Stuff.” The “stuff” is poorly organized: drafts of dissertation chapters, half-written essays, photos, untold numbers of .jpgs from the Internet that, for reasons usually obscure now, prompted me to click “Save Image As….” Apparently Sontag’s hard drives were much the same. But Deep Freeze managed to edit the chaos down to a single IBM laptop, available for perusal by scholars and Sontag junkies. Schmidt and Ardam reflect on the end product:

Sontag is — serendipitously, it seems — an ideal subject for exploring the new horizon of the born-digital archive, for the tension between preservation and flux that the electronic archive renders visible is anticipated in Sontag’s own writing. Any Sontag lover knows that the author was an inveterate list-maker. Her journals…are filled with lists, her best-known essay, “Notes on ‘Camp’” (1964), takes the form of a list, and now we know that her computer was filled with lists as well: of movies to see, chores to do, books to re-read. In 1967, the young Sontag explains what she calls her “compulsion to make lists” in her diary. She writes that by making lists, “I perceive value, I confer value, I create value, I even create — or guarantee — existence.”

As reviewers are fond of noting, the list emerges from Sontag’s diaries as the author’s signature form. … The result of her “compulsion” not just to inventory but to reduce the world to a collection of scrutable parts, the list, Sontag’s archive makes clear, is always unstable, always ready to be added to or subtracted from. The list is a form of flux.

The lists that populate Sontag’s digital archive range from the short to the wonderfully massive. In one, Sontag — always the connoisseur — lists not her favorite drinks, but the “best” ones. The best dry white wines, the best tequilas. (She includes a note that Patrón is pronounced “with a long o.”) More tantalizing is a folder labeled “Word Hoard,” which contains three long lists of single words with occasional annotations. “Adjectives” is 162 pages, “Nouns” is 54 pages, and “Verbs” is 31 pages. Here, Sontag would seem to be a connoisseur of language. But are these words to use in her writing? Words not to use? Fun words? Bad words? New words? What do “rufous,” “rubbery,” “ineluctable,” “horny,” “hoydenish,” and “zany” have in common, other than that they populate her 162-page list of adjectives? … [T]he Sontag laptop is filled with lists of movies in the form of similar but not identical documents with labels such as “150 Films,” “200 Films,” and “250 Films.” The titles are not quite accurate. “150 Films” contains only 110 entries, while “250 Films” is a list of 209. It appears that Sontag added to, deleted from, rearranged, and saved these lists under different titles over the course of a decade.

“Faced with multiple copies of similar lists,” continue Schmidt and Ardam, “we’re tempted to read meaning into their differences: why does Sontag keep changing the place of Godard’s Passion? How should we read the mitosis of ‘250 Films’ into subcategories (films by nationality, films of ‘moral transformation’)? We know that Sontag was a cinephile; what if anything do these ever-proliferating Word documents tell us about her that we didn’t already know?” The last question hits a nerve for both academic humanists and the culture at large (Sontag’s dual audiences).

Through much of the past 15 years, literary scholarship could feel like stamp collecting. For a while, the field of Victorian literary studies resembled the tinkering, amateurish, bric-a-brac style of Victorian culture itself, a new bit of allegedly consequential ephemera in every issue of every journal. Pre-digitized archives offer a new twist on this material. Schmidt and Ardam: “The born-digital archive asks us to interpret not smudges and cross-outs but many, many copies of almost-the-same-thing.” This type of scholarship provides a strong empirical base for broader claims (the kind Sontag favored), but the base threatens to support only a single, towering column, ornate but structurally superfluous. Even good humanist scholarship – the gold standard in my own field remains Mark McGurl’s 2009 The Program Era – can begin to feel like an Apollonian gasket: it contains elaborate intellectual gyrations but never quite extends beyond its own circle. (This did not happen in Victorian studies, by the way; as usual, they remain at the methodological cutting edge of literary studies, pioneering cross-disciplinary approaches to reading, reviving and revising the best of old theories.) My least favorite sentence in any literary study is the one in which the author disclaims generalizability and discourages attaching any broader significance or application to the study. This is one reason why literary theory courses not only offer no stable definition of “literature” (as the E.O. Wilsons of the world would have us do), they frequently fail to introduce students to the many tentative or working definitions from the long history of literary criticism. (We should at least offer our students a list!)

In short, when faced with the question, “What do we do with all this…stuff?” or “What’s the point of all this?”, literary scholars all too often have little to say. The problem isn’t a lack of consensus; it’s an actual lack of answers. Increasingly, and encouragingly, one hears that a broader application of the empiricist tendency is the next horizon in literary studies. (How such an application will fit into the increasingly narrow scope of the American university is an altogether different and more vexing problem.)

Sontag’s obsession with lists resonates more directly with the culture at large. The Onion’s spin-off site ClickHole is the apotheosis of post-Facebook Internet culture. Its genius is not for parody but for distillation. The authors at ClickHole strip the substance of clickbait – attention-grabbing headlines, taxonomic quizzes, and endless lists – to the bone of its essential logic. This logic is twofold. All effective clickbait relies on the narcissism of the reader to bait the hook and on banal summaries of basic truths once the catch is secure. The structure of “8 Ways Your Life Is Like Harry Potter” would differ little from “8 Ways Your Life Isn’t Like Harry Potter.” A list, like a personality quiz, is especially effective as clickbait because it condenses a complex but recognizable reality into an index of accessible particularities. “Sontag’s lists are both summary and sprawl,” write Schmidt and Ardam, and much the same could be said of the lists endlessly churned out by Buzzfeed, which constitute both a structure of knowledge and a style of knowing to which Sontag herself made significant contributions. Her best writing offered the content of scholarly discourse in a structure and style that not only eschewed the conventions of academic prose, but encouraged reading practices in which readers actively organize, index, and codify their experience – or even their identity – vis-à-vis whatever the topic may be. Such is the power of lists. This power precedes Sontag, of course. But she was a master practitioner and aware of the list’s potential in the new century, when reading practices would become increasingly democratic and participatory (and accrue all the pitfalls and dangers of democracy and participation). If you don’t think Buzzfeed is aware of that, you aren’t giving them enough credit.

No Work Makes Jack A Malcontented Boy

By Kindred Winecoff

In “Economic Possibilities for Our Grandchildren” John Maynard Keynes wrote that by 2030 or so humans could spend most of their time pursuing leisure:

For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter – to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!

In many respects this echoed Marx nearly eighty-five years earlier, in The German Ideology:

For as soon as the distribution of labour comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. He is a hunter, a fisherman, a herdsman, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.

For contemporary treatments of similar ideas see John Quiggin and Ronald Dworkin. (Both of these are well worth reading in full.)

You may accept these goals or dismiss them. I would just like to note that we’ve basically achieved them, at the societal level. The Bureau of Labor Statistics reports that the average American spends 3.19 hours per day working. Obviously this mostly means that the distribution of working hours is highly unequal, as is the remuneration from work. And the U.S. is hardly the whole world in this respect.

Still, if you squint hard enough from a high enough perch, we might be working about as much as we should be from a Utopian perspective. Even if you tack on the 1.74 hours per day we spend on “household activities” — from food preparation to lawn care — we’re basically in the realm that Marx envisioned. We spend 2.83 hours per day watching television. Marx really was a 19th century thinker whose outlook does not map easily onto 21st century realities, but again: it’s worth knowing where we stand.
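To see how those averages line up with Keynes’s numbers, here is a minimal back-of-the-envelope tally in Python. It assumes the BLS figures are averages over all persons and all days of the week, which is how time-use averages of this kind are typically reported; the comparison is mine, not the BLS’s.

```python
# Weekly totals implied by the BLS daily averages quoted above:
# 3.19 hours working, 1.74 hours on household activities, 2.83 hours
# watching television, each averaged over all persons and all days.
DAYS_PER_WEEK = 7

work_per_day = 3.19
household_per_day = 1.74
tv_per_day = 2.83

work_per_week = work_per_day * DAYS_PER_WEEK                              # ~22.3 hours
work_plus_household = (work_per_day + household_per_day) * DAYS_PER_WEEK  # ~34.5 hours
tv_per_week = tv_per_day * DAYS_PER_WEEK                                  # ~19.8 hours

keynes_week = 15  # Keynes's "fifteen-hour week" of three-hour shifts

print(f"Paid work: about {work_per_week:.1f} hours/week (Keynes's target: {keynes_week})")
print(f"Paid work plus household activities: about {work_plus_household:.1f} hours/week")
print(f"Television: about {tv_per_week:.1f} hours/week")
```

Averaged this way, paid work comes out above Keynes’s fifteen-hour week but very close to his “three hours a day”; as noted above, the average conceals a highly unequal distribution of both hours and pay.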

Our biggest crisis remains a jobs crisis, locally and globally. People seem to want to work even if their most basic needs are met. They want to work even if it means they would have to forgo hunting in the morning or fishing in the afternoon or blogging in the evening. They seem to want to acquire and consume and improve their lives ever more. Keynes viewed this as avarice — a bit strange for him to say, given his relatively luxurious lifestyle — but maybe it isn’t. And if it isn’t, then some basic planks of Utopian political theory might need re-thinking.

A Test Designed to Provoke an Emotional Response

By Kindred Winecoff

For several years now Black Mirror has been my favorite television show despite the fact that U.S. audiences could only view it using, erm, “less-legal” methods. Apparently the show is now airing on something called the Audience Network on DirecTV and I’d encourage folks to give it a try.

Slate has a Slate-y take on the series, but here is the gist of what you need to know: each episode has a completely different cast and crew. There is no recurring plot. There are no returning characters. The writers and directors are all different from show to show as well. The only consistency is the techno-dystopian theme of each episode, which has some resonance in the age of Snowden and Facebook face-recognition algorithms.

In some ways Black Mirror’s closest analogue is The Twilight Zone, but with one key difference: there is little that is surreal or absurdist about the premise of the episodes. The show is futuristic but just barely: the worlds in the show look functionally the same as our own, except that technology is extrapolated two or three short steps beyond where it presently is. There are no phasers or teleportation devices, just slightly better artificial intelligence. In some episodes the entire narrative is possible given existing technology. The show’s name refers both to an unpowered LCD screen and to an Arcade Fire song… tangible things that presently exist.

Refreshingly, the show also refuses to be dystopian in any one particular way. The first episode involves a terrorist plot to humiliate a head of state. Another imagines one possible future of Google Glass: the ability to revisit video of every event in your life’s past… no more need for hazy memories to settle a he-said-she-said dispute. To bear the loss of a loved one, why not download a lifetime’s social network data into a replicant body? It’d be like they never left. In several cases the characters believe they have overcome part of the human condition via technology, only to realize that problems frequently require something other than a technical solution.

But that is not the fault of the technology. The show’s creator, Charlie Brooker, is an avid user of Twitter and a casual technology optimist. His chosen medium is television, not print. The takeaway from the show is not to turn off the smartphone, disconnect from Facebook, and re-learn your penmanship. The technology is never the real problem. The people are. It is a point that frequently gets lost in discussions of the relationship between technology and society. And that is why the show is such a needed intervention in the culture.


Remember the Internet?

2015, as imagined by 1995

Matthew Yglesias is the latest and most intelligent blogger to dredge up a bad ’90s-era prediction about the future of the Internet. Yglesias’s target is a 1995 Newsweek article by Clifford Stoll entitled “Why the Web Won’t Be Nirvana.” He draws attention to Stoll’s opening paragraphs, which dismiss the digital realm’s then much-hyped world-changing potential as “baloney.” Back then, Stoll wrote:

After two decades online, I’m perplexed. It’s not that I haven’t had a gas of a good time on the Internet. I’ve met great people and even caught a hacker or two. But today, I’m uneasy about this most trendy and oversold community. Visionaries see a future of telecommuting workers, interactive libraries and multimedia classrooms. They speak of electronic town meetings and virtual communities. Commerce and business will shift from offices and malls to networks and modems. And the freedom of digital networks will make government more democratic.

Baloney. Do our computer pundits lack all common sense? The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.

Consider today’s online world. The Usenet, a worldwide bulletin board, allows anyone to post messages across the nation. Your word gets out, leapfrogging editors and publishers. Every voice can be heard cheaply and instantly. The result? Every voice is heard. The cacophony more closely resembles citizens band radio, complete with handles, harassment, and anonymous threats. When most everyone shouts, few listen. How about electronic publishing? Try reading a book on disc. At best, it’s an unpleasant chore: the myopic glow of a clunky computer replaces the friendly pages of a book. And you can’t tote that laptop to the beach. Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet. Uh, sure.

Stoll’s exasperation at the notion of “books on disc” and his confidence that “no online database will replace your daily newspaper” are knee-slappers, to be sure. Of the industries that digital technology has felled, print media is the Goliath. The music and publishing industries, like the Philistines of old, are self-imploding in its wake. And if you scan through the utopian scribblings of early ’90s techies, you will find thousands of articles predicting these events. By dismissing the Internet’s industry-toppling potential, Stoll not only lacked foresight, he lacked good sense. When Nicholas Negroponte (who knows whereof he speaks) offers his thoughts on the direction of media technology, best to rebut with something more substantial than “Uh, sure.” (Of course, you’d sacrifice the knowing insouciance that makes Newsweek-brow editorials so much fun!)

But Stoll’s article draws attention to a fact that those of us laughing at him in hindsight often overlook. The Internet in 1995 wasn’t the Internet.

The Ford Model T resembles a Tesla Model S more than the World Wide Web Stoll surfed resembles the networked world we inhabit in 2013. Today, the separation between digital life and analogue life has essentially dissolved. Even the word “Internet” sounds a little quaint, and it survived several terminology purges (RIP World Wide Web, Information Superhighway, “the ’Net,” etc.). The Internet is fast becoming for us what Christianity was for medieval Europeans: not a religion or ideology but a totalizing epistemology that is almost impossible to imagine your way out of.

Today I just use my iPhone’s built-in translator.

Now think about 1995: Bill Clinton was still in his first term (his and Al Gore’s emphasis on Internet connectivity in public classrooms inspired ridicule and laughs in conservative media). Many Americans had only recently purchased their first computers (if they owned one at all). And computer ownership didn’t guarantee connectivity (my cousin’s computer seemed to exist for the sole purpose of playing Where in the World is Carmen Sandiego?). AOL CD-ROMs weren’t yet ubiquitous, much less used as coasters, frisbees, and microwave sparklers. Given the vast technologico-cultural chasm between Stoll and us, it’s actually surprising how well he describes the Internet of 2013.

Here’s a bit of Stoll’s article that Yglesias doesn’t quote:

What the Internet hucksters won’t tell you is that the Internet is one big ocean of unedited data, without any pretense of completeness. Lacking editors, reviewers or critics, the Internet has become a wasteland of unfiltered data. You don’t know what to ignore and what’s worth reading. Logged onto the World Wide Web, I hunt for the date of the Battle of Trafalgar. Hundreds of files show up, and it takes 15 minutes to unravel them—one’s a biography written by an eighth grader, the second is a computer game that doesn’t work and the third is an image of a London monument. None answers my question, and my search is periodically interrupted by messages like, “Too many connections, try again later.”

Obnoxious tone aside, this complaint remains applicable to our post-Google cyborg existence. Even if you don’t believe the Internet is “a wasteland of unfiltered data,” you can imagine why someone might feel that way. And if this qualifies as a “prediction,” then Stoll is a digital prophet.

And rhetorically, he’s nothing compared to the Internet’s early boosters.

As the title of Stoll’s article indicates, predictions about the Internet’s future were highly exaggerated. It was supposed to change and enhance all realms of human experience – a “nirvana” (very ’90s). Living room virtual reality was always just around the corner. The Internet would eliminate trips to the doctor, to school, to the library, to the post office (in the ’90s, we were very concerned with short-distance trips). The Internet would cause, or correct, the Y2K apocalypse. These predictions, like all predictions, were framed by their moment. Stoll’s article imagined the Internet solving (or failing to solve) ’90s problems.

Even in 2013, nobody can imagine reading a digital book…if their idea of a book is fixed in 1995.

But Internet technologies continued to develop over time, solving newly emerging problems, then responding to the problems created by the solutions. Speculation about the future persisted among the keyboard class, but few major developments were accurately predicted by anyone outside Cambridge, Silicon Valley, the U.S. military, or the major telecommunication companies.

Today, the predictions that get the most attention are those that dramatically underestimate digital technologies. In the embarrassingly late year of 1998, Paul Krugman famously predicted that by 2005, “the Internet’s impact on the economy [will have] been no greater than the fax machine’s.” As for social networking, he argued that “most people have nothing to say to each other!”

To be fair, Yglesias does give Stoll some credit:

…the Web hasn’t lived up to the full maximum capacity of its dreams. Relatively few people are full-time telecommuters, for example, and efforts to genuinely replace traditional teaching with online instruction have been disappointing so far. But already we’re at a point where computer networks have changed the way government works enough that the inability to execute a major IT procurement initiative correctly has been the dominant political story of the fall. The publishing and media industries have been completely transformed. Almost everyone who learns things uses the Web as a useful supplement to classroom instruction.

So basically, the publishing industry is the big trophy, the slain giant. Other industries and institutions are simply modified. The mechanics of corporate structure have changed, but the basic structure remains the same. As for education: politicians and digital engineers have promoted the Internet’s potential to dramatically reform education since the early 1990s. Yet widespread reform has yet to occur. (Not for lack of experiments, which have so far produced very mixed results. Even Sebastian Thrun admits that MOOCs aren’t working.) And the government is still the damned government, even if Obama uses websites and Chuck Grassley uses Twitter.

Does this mean Internet-based reforms will never occur? Of course not. But despite the radical change the Internet has wrought, Stoll’s pessimistic 1995 forecast appears prudent and conservative at worst, prescient at best.

Meanwhile, the lesson Yglesias draws from Stoll’s one or two premature and comically wrong predictions is troubling. “I think [Stoll’s article] serves as a useful antidote,” he writes, “to a certain genre of writing popular on the Internet today where people poke fun at excessive techno-hype from Silicon Valley types.” I’ve dabbled in that genre, but even if I wrote these posts while teaching MOOCs from my tax-free seastead, I’d still argue that Yglesias is drawing the wrong conclusions from Stoll’s piece. True, Stoll’s article is editorializing run amuck (remember back when someone was an expert on everything because they worked for a newspaper?). His tone is grating. But he was responding to the ’90s version of what Yglesias calls “excessive techno-hype.” The hype is more dangerous than the predictions: it leads to bubbles and recessions, to booms and busts. If given the choice between joining the hype or poking fun, I’ll poke fun – even at the risk of making a few ridiculous predictions.

Secession Anxiety

By Seth Studer

1. Impostors

Okay, so feeling perpetually lost and out of the loop is a symptom of grad school. My department, like so many others, has the 2008 New York Times article on “Impostor Syndrome” posted in the grad student lounge. In my case, however, grad school merely exacerbated a preexisting condition. I am always out of the loop. I am always behind. These feelings are especially acute regarding technology. Unlike most of my middle-class peers, I grew up without video games or computers; my first experience with the Internet was relatively late (boo-hoo, I know). And I never caught up. I didn’t realize Cupertino, CA was an important place until I purchased my first iPhone in 2012 and read the clock, which was set to Cupertino time (my wife had to explain Google to me). Worst of all, my ability to navigate the Internet’s black markets is severely limited. My last illegal download probably occurred sometime in 2004; I never really figured out how BitTorrent worked. That same year, I went through a significant break-up with a girl from California. I felt two steps behind the entire world.

Time heals all wounds and grad school consumes all time, so I got over the break-up and never learned BitTorrent. But imagine my horror last week when I learned that some tech guy (is he important? how would I know?!) at Y Combinator’s Startup School conference (is that important??) advocated for Silicon Valley’s secession from the Union. Secession! I scanned two full pages of headlines on GoogleNews, each heralding with alarm Silicon Valley’s intention to secede.

I slouched in my chair, despondent.

Cool computers from California were dumping me.

Y Combinator’s Startup School 2014

Balaji Srinivasan might put it differently. I’m not being dumped, I’m “opting-out.” I live in the “Paper Belt.” Borrowing language from Albert O. Hirschman, Srinivasan would say I’m “loyal” (perhaps against my own will), that I’m not utilizing my “voice” and I refuse to “exit.” Srinivasan’s speech at the 2013 Startup School, an annual conference for techie entrepreneurs, was covered in the most dramatic language: it was “brazen.” The Valley was “roused.” Srinivasan had proposed a “city-state” (Valley-state?). Practically every article covering the speech scolded Srinivasan’s (or the Valley’s) hubris. But they also emphasized his militant language: he described Godfather-style violence against industries, spoke of hit lists against American cities, and discouraged all-out war with the United States only because “they have aircraft carriers, we don’t.” Subtext: “not yet!” Is an arms race brewing? Srinivasan suggested that 3-D printers could be used to build drones. He spoke admiringly about Peter Thiel, whose investments in seasteading test the limits of U.S. sovereignty. So Thiel is the John Calhoun to Srinivasan’s Siliconfederacy. And make no mistake, Srinivasan is proposing secession. That word appears in every. single. headline.

Except that in his speech, Srinivasan never once used the words “secession” or “secede.”

Responses to the speech in tech and industry blogs were mild. The first response (apparently written as the speech ended) came from CNET’s Nick Statt, who called Srinivasan’s vision “utopian,” akin to Thiel’s. For Statt, the speech’s content was speculative. A few other tech blogs chimed in. At some point, the word “secession” was dropped, and then larger blogs and media began reporting the speech. By then, the coverage was absolutely fevered. Srinivasan had declared the intent to secede on behalf of his entire industry.

“Secession” is a tricky and troubling word in the United States. Beyond its most obvious association – the American Civil War – secession stirs imaginations and tests loyalties. For many African-Americans, “secession” is code for anti-black violence. In parts of the South and in Texas, patriotism requires fierce commitment to both the nation and the right to secede from it. In its history, South Carolina has threatened secession at least three times. New England considered secession before the War of 1812, as did New York City during the Civil War. And as much as I’d like to treat the Union Army 1861 – 1865 as the fourth branch of government, forever settling the issue, secession remains an improbable but available option to any group of malcontent Americans.

Mostly it’s all talk. But the language of secession is powerful: conservative populists (with no Canada to flee to) use it to excite supporters and agitate opponents. Political commentators are tantalized by it, gleeful that Todd Palin or Rick Perry might have a little secessionist in them. Most Americans are fascinated by secessionist movements beyond our borders: the end of the Cold War was a riot of new atlases. George Clooney couldn’t stay away from South Sudan. I’m always kind of rooting for Quebec to secede from Canada, even though I think the results would be disastrous. Break-ups are messy and fun to watch.

And this is why so many bloggers and journalists appended the word “secession” to what is essentially a TED Talk. Secession gets a reaction. It sends a chill down your spine.

2. “…the point is to change it.”

Srinivasan’s speech is not a call to secession, and the crowd is hardly raucous. Srinivasan is advocating “exit” over “voice” (terms borrowed from Hirschman), and he describes the plasticity of those strategies. Exit can take various forms. Secession is one, although Srinivasan seems ambivalent about nation-building. He emphasizes emigration. But no GoogleNews headline declared “Silicon Valley Emigrates!” (Sidebar: this critique is a substantive, not alarmist, take on the emigration issue. It introduces two other terms Srinivasan doesn’t use: expat and exurb.) Still, even his immigration/emigration language is problematic. He makes emigration sound easy. Given his biography, he surely understands it is not. He describes a society in which people “opt in” or “opt out” of whatever superior social format the startups create. If you like it, come. If you don’t, go. Hirschman’s notion of “loyalty” – especially involuntary loyalty – is left unexamined. “Voice” (change from within) is dismissed.

Once he adds “exit” to the already full lexicon of terms to describe post-analogue life, Srinivasan’s speech is merely confident speculation. Smart. Predicting the future of new technologies in public is foolish; if you must, it’s better to be broad and speculative (like Srinivasan) than narrow and specific (Paul Krugman). Srinivasan’s voice has that cocky patter common among tech industry males, a patter that grows more assured in close proximity to Cupertino. But his tone is cautious. He is generous with the parameters of “opting in,” allowing for degrees: someone as digitally illiterate as me can “opt in,” partially. On the one hand, I have no idea how to pirate Sherlock. On the other, I would literally incinerate a $100 banknote on the first day of every month rather than pay for cable television. Consequently, I pay slightly less than $100 each month to Amazon, Hulu, and Netflix for content. According to Srinivasan, that means something.

The best moment in the speech comes at the beginning, when Srinivasan unfavorably compares the U.S. government to Microsoft. The comparison is more apt than he lets on: as a software company, Microsoft still commands an enormous market share. In developing nations, its share of the smartphone market is a threat to Apple. Whatever sexy, streamlined product Silicon Valley rolls out, Microsoft will accommodate it or produce a crappier version (at a profit). Much like the dinosaurs in Jurassic Park, Microsoft finds a way. Heck, I’m using MS Word on my refurbished MacBook right now. Quantity, ubiquity, monopoly, saturation: the virtues that made Bill Gates rich are the same virtues that made America a global superpower.

The Independent Liberaltarian Tax Shelter of Silica

Srinivasan’s conception of American power is skewed. His “hit list” (LA, NYC, Boston, DC) overstates reality: China has altered Hollywood’s business model far more than Cupertino has. The U.S. dollar and the U.S. government aren’t realistic targets. Yes, newspapers are dead (thanks for that, btw!). Yes, higher ed is on shaky ground (but demand is still high, and universities aren’t newspapers). His hit list also focuses disproportionately on media, culture, and government, hardly the only sources of U.S. power. Where’s the exit from agribusiness? Big energy? The pharmaceutical industry? Can I 3-D print my own car? Sure, I can screw Columbia Pictures into charging Netflix less than they’d charge Regal Cinemas for White House Down. Viva la sécession! So how can I screw Monsanto? I don’t want to join a hyper-organic CSA, I want to do to Monsanto what Netflix did to Blockbuster! Where’s the start-up for that?

I’m sure it’s coming.

Thiel’s seasteading has always reminded me of George Pullman’s well-intended experiment with a totally corporate community. The politics differ, but both projects begin with the way things ought to be, rather than the way things are. Srinivasan isn’t as radical as Thiel, but both rely too much on “obsolescence” as an operative concept. Obsolescence in software and obsolescence in government are two different things. I doubt whether obsolescence is even applicable to societies or cultures. And whatever your political grievances against the United States, “voice” is surely preferable to “exit.”

Whenever my thoughts or temper turn radical, I remind myself of Benjamin Disraeli’s haunting declaration: England cannot begin again. I’ll happily accept new tools and new programs, but they must accommodate rather than abandon what is. Most problems and conflicts in the world are embedded in social and cultural institutions that only change incrementally. Srinivasan describes small nations like Estonia that innovate, and argues for more; he doesn’t mention the many small nations whose institutions are totally dysfunctional. Meanwhile, nations like China and the United States are not obsolete by any measure. Their preexisting governments, policies, and laws can accommodate gradual, stable change. Technological innovation must be part of that. Industrialism ended the Atlantic slave trade and made abolition possible – over time. But changing a whole culture is like building a medieval cathedral: you pass the work down from generation to generation, enduring the pace. I mean, it’s been 150 years and the South still goes on about secession.