The Politics of Bitcoin: Expanded Bibliography with Live Links

Production constraints and editorial guidelines required The Politics of Bitcoin, in both its print and electronic versions, to include only the base URLs of online materials referenced in the book, and even in the electronic version these are not live links. In addition, space constraints meant that some work valuable to me in composing the book had to be cut. What follows is a fuller bibliography of the works referenced in the book, along with works I would have liked to reference, complete with working URLs to online materials.



Trump, Clinton, and the Electoral Politics of Bitcoin

My new book, The Politics of Bitcoin, is not directly about electoral politics, but rather about the political and political-economic theories that inform the development of Bitcoin and its underlying blockchain software. My argument does not require that there be direct connections between promoting Bitcoin and supporting one candidate or party or another.

Rather, what concerns me about Bitcoin is how it contributes to the spread of deeply right-wing ideas in economics and political philosophy without those right-wing associations ever being made explicit. Call it “moving the Overton window to the right” (although I find the concept of the “Overton window” troubling, not least for its own origins on the political right), especially along some axes that may not even be altogether legible to many in the general public. So many people have heard of Bitcoin and the blockchain as technologies that promote “freedom” and “democratization,” and resist interference by “central authorities”; many fewer understand what those words mean in relation to Bitcoin and the blockchain, where they are used almost identically to the way extremists like Alex Jones and the John Birch Society use them.

Nevertheless, these foundational politics do at times intersect with ordinary electoral politics. Though this isn’t really what The Politics of Bitcoin is about, when people on social media saw the title they quickly presumed that it was, and some of their comments prompted me to reflect a bit on how the politics of Bitcoin and the blockchain are intersecting with the current US presidential election.

* * *

First, the GOP. A Bitcoin supporter responded to some positive comments about the book by others on Twitter by writing:

This comment strikes me as symptomatic in several interesting ways. First, it tries to manage the narrative—defining the critique I’m offering, despite the fact that the tweet’s writer admits to not having read the book—by suggesting the book claims something it does not: that right-wing ideologues like Trump directly promote Bitcoin, or vice versa. Second, it offers the very familiar story, promulgated by Google and others, that we need to be very worried now about a superpowerful AI, a story which is itself a product of what thinkers like Dale Carrico and I consider an already profoundly conservative discourse, for reasons I won’t go into here.

The Trump comment is particularly interesting. There is certainly a fair amount of support for Trump among Bitcoin enthusiasts, though I’m not aware of any polling that would allow us to break that down into numbers. But it is pretty funny to be told that I shouldn’t be worried about Trump right now, because I am very worried, and I think anyone with even a remote interest in politics and—to put a fine point on it—the fate of democracy itself should be worried about Trump. And to the degree that Bitcoin helps to spread the right-wing economic ideology that my book is really about, I do think that there are connections between Bitcoin and Trump. Of course Bitcoin didn’t cause Trump—but the kinds of false, angry, other-targeting ideologies on which the Trump phenomenon depends can readily be found in the very online communities that create and promote the frightening range of right-wing political action we see everywhere today. We should be very worried about Trump, and we should be worried about how Bitcoin and other parts of online discourse feed the hate and studied ignorance that make so many people support him.

This site might be a parody, but I don’t think it is.

It’s also fascinating that as we get down to the wire, Trump is sounding more and more like his ardent supporter Alex Jones, propagating the same falsehoods about “global financial powers” that we see in Bitcoin discourse (while being himself, not unironically, an incredibly wealthy person who made most of his money by cheating the system). In an October 13 speech in West Palm Beach, Florida, Trump stated that “Hillary Clinton meets in secret with international banks to plot the destruction of U.S. sovereignty in order to enrich these global financial powers, her special interest friends, and her donors.”

* * *

So that’s the ostensibly mainstream right. What about the mainstream left? Here the story is even more interesting. Daniel Latorre, a Twitter friend and civic technologist, tweeted a pair of responses, the second of which links to an excellent piece Latorre wrote in 2015 called “Why Our Tech Talk Needs A Values Talk.”

Dan pointed me to video of the “Connectivity” session at the Clinton Global Initiative (CGI) 2016 conference (yes, the annual conference sponsored by the famous foundation), where around minute 32 two speakers appear talking about the blockchain. The first is Jamie Smith, Global Chief Communications Officer of the Bitfury Group, “your leading full service Blockchain technology company,” who gives a very brief introduction that is full of some very serious imploring of the audience, quite a few buzzwords that don’t really seem to go together, and graphics like this one:

[Graphic: “blockchain transformation,” from Smith’s CGI presentation]

It is really an insult to the intelligence of the audience of a charitable organization to distribute venture-capital promotional materials like these as if they mean something very concrete and beneficent—to say nothing of the fact that the putative top benefit of the blockchain, distributed control of a verifiable ledger that all users can examine, has not even been mooted for phones of any sort, let alone inexpensive phones, so that whatever Smith is advertising here is at best a derivative of blockchain technology.

Like so many Bitcoin and blockchain promoters (and, to be fair, digital utopians and salespeople everywhere), Smith, too, engages in some serious management of the narrative via rhetorical sleight-of-hand: “the missing piece of the internet,” “the blockchain is the most transformational technology since the internet,” “without going through a trusted emissary.” Words like these are deployed to mystify, or to mislead, or both, but not to explain.

Managing the narrative: Smith says, “I can see why you think that [Bitcoin isn’t promising] because the coverage has not been great”: that is, because the coverage has in part accurately focused on the most popular uses of Bitcoin in Dark Web markets for illegal products like the long-shuttered Silk Road, and on the almost shocking frequency with which individuals lose the money they put into Bitcoin—which would be shocking even if one of the major advertised benefits of Bitcoin weren’t its supposedly superior safety compared with other forms of payment.

It’s worth dwelling on one statement Smith makes: “the Bitcoin Blockchain has never been hacked.”

Really? Remember that her talk is focused on the blockchain, not Bitcoin per se. But are blockchains unhackable? Does the CGI audience notice that the vital word in that construction is not “blockchain” but “Bitcoin”? Other blockchains most certainly have been hacked—most famously TheDAO, the first “autonomous” “smart contract” project built on a blockchain, which was hacked no more than four months ago.

And of course, “hack” is used in a particular technical sense in the talk, since, as I discuss in the book, putting money into Bitcoin is one of the riskiest things a person can do, both because of Bitcoin’s own wild volatility and because of the ease with which Bitcoin exchanges can be, and repeatedly have been, hacked—by some estimates up to a third of all exchanges—with millions of dollars vanishing into thin air, or more often into the pockets of scam artists.
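It is worth being precise about what the narrow claim actually refers to. Here is a minimal sketch, in Python, of the one property such statements invoke; the block contents are entirely invented, and this is not Bitcoin’s actual data format. Each block commits to the hash of its predecessor, so rewriting any historical record breaks every link after it. Nothing about that property protects the exchanges and wallets where the actual thefts occur.

```python
# Minimal hash-chain sketch (illustrative only, not Bitcoin's real format):
# each block commits to the hash of the previous block, so editing any
# historical record invalidates every block that follows it.
import hashlib
import json

def block_hash(block):
    # Hash a canonical serialization of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64
    for height, data in enumerate(records):
        block = {"height": height, "prev_hash": prev, "data": data}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify(chain):
    # Re-derive every link; any edit upstream breaks a downstream comparison.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice pays bob 1", "bob pays carol 1"])
assert verify(chain)
chain[0]["data"] = "alice pays mallory 1"   # rewrite history...
assert not verify(chain)                    # ...and every later link breaks
```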

This is part of how political ideology gets promulgated—as opposed to actual political work getting done. Wildly contradictory sentiments are offered with outsize passion, imploring audiences to take action and support whatever scheme the speaker is suggesting, but not actually to research the question for themselves, not to ask whether what the speaker is promising actually makes sense.

* * *

Yet Smith is only the first speaker. At the end of her talk she introduces arguably the star of the blockchain panel, Peruvian economist Hernando de Soto.

Hernando de Soto.

Hernando de Soto, one of the world’s biggest blockchain promoters.

Hernando de Soto, one of the chief architects of the actual economic plans critics like Philip Mirowski call neoliberalism. And this is core, right-wing neoliberalism, meaning direct involvement with the most poisonous and influential figures and institutions of neoliberalism—the Mont Pelerin Society, Friedrich Hayek, Milton Friedman, and many others—not simply the varieties of “outer shell” neoliberalism that do not always even know their own name. This is far-right, world-dominating economics.

Hernando de Soto. Vocal opponent of the highest-profile left economist of the last decade, Thomas Piketty. Author of two books: The Other Path: The Economic Answer to Terrorism (1989), which argues that the solution to the problems that created Peru’s Shining Path lay in entrepreneurship and deregulation, and The Mystery of Capital (2000), described as “an elaborate smokescreen to hide the uglier truth” that corporations and wealthy individuals “run [developing] countries for the maximum extractive benefit of the west,” a truth de Soto’s “solutions” may exacerbate much more than ameliorate.

Hernando de Soto. Who talks glowingly about “reglobalizing the world.” Who lumps together ISIS and progressive anti-globalization protestors (though he is careful not to say they are the same—after he lumps them together in the first place). Who “spins the Arab Spring not as a populist opposition to dictators (most of whom are backed by the lynchpin of capitalism, the United States), but a scrappy revolt of entrepreneurs against state interference in commerce.” Who received the 2004 Milton Friedman Prize for Advancing Liberty. Whose work in Peru was “the first and most successful outcome” (Mitchell 2009, 396) of the work of the Atlas Economic Research Foundation, not just ideologically but historically directly connected to Hayek, the Cato Institute, and the Mont Pelerin Society.

Being promoted at the charitable foundation of the Democratic candidate for President.

Under the name of blockchain.

Without anybody standing up and saying, “What is the ‘Friedrich Hayek of Latin America’ doing speaking for the Democratic candidate for president? And why is the Democratic Party promoting his work, without even noting who this person is and where his ideas come from?”

The product de Soto says he is making is one that uses the blockchain—the “public blockchain,” he and Bitfury call it, although the only current candidate for that is Bitcoin, and he does not explain whether or how the service he describes could run on the Bitcoin blockchain, or who would host the “public blockchain” he talks about. The product is one that will help to fulfill de Soto’s lifelong plan to record property rights for the poorest people in the world. (See “Hillary Clinton and the Blockchain” for what reads an awful lot like a sales presentation of this and other blockchain projects the Clinton campaign has under consideration.)
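For what it is worth, perhaps the most widely discussed way to put such records on the Bitcoin blockchain is to anchor a hash of a document in a transaction, typically in an OP_RETURN output. A minimal sketch in Python (the deed text is invented, and publish_op_return() is a hypothetical helper, not a real library call) shows how little of a “property registry” the chain itself would actually hold:

```python
# Sketch of "property records on the public blockchain" as usually proposed:
# the chain stores only a fingerprint (hash) of the record, never the record
# itself. The deed and publish_op_return() are invented for illustration.
import hashlib

title_deed = b"Parcel 114-B, Lima: registered to Maria Q., 2016-10-21"

# A 32-byte SHA-256 digest fits easily in an OP_RETURN output, which under
# the default relay policy of the time carried roughly 80 bytes of data.
fingerprint = hashlib.sha256(title_deed).digest()

# Hypothetical broadcast step (not a real API):
# txid = publish_op_return(fingerprint)

# The chain can attest that these exact bytes existed by some date; it cannot
# say who the owner is, whether the deed is valid, or compel anyone to honor it.
print(fingerprint.hex())
```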

Recording property rights for the poorest sounds noble, unless you are familiar with political science and political economy, in which the focus on property rights is precisely the hallmark of rightist politics going all the way back to what is now called the “classical liberalism” associated with Locke (a term now also used to describe what is essentially right libertarianism, though the relationship between these doctrines is profoundly fraught). Then it sounds like another version of the neoliberal, neo-colonial extractive development plans that have, in the opinion of many anti-globalization activists, been the cause of significant destruction to lives and property the world over, and have contributed significantly to the very poverty they claim to be alleviating (Gravois 2005, Johnson 2016, Mitchell 2009).

Here’s a bit on de Soto’s work from a piece by journalists Mark Ames and Yasha Levine (2013):

De Soto’s pitch essentially comes down to this: Give the poor masses a legal “stake” in whatever meager property they live in, and that will “unleash” their inner entrepreneurial spirit and all the national “hidden capital” lying dormant beneath their shanty floors. De Soto claimed that if the poor living in Lima’s vast shantytowns were given legal title ownership over their shacks, they could then use that legal title as collateral to take out microfinance loans, which would then be used to launch their micro-entrepreneurial careers. Newly-created property holders would also have a “stake” in the ruling political and economic system. It’s the sort of cant that makes perfect sense to the Davos set (where de Soto is a star) but that has absolutely zero relevance to problems of entrenched poverty around the world.

To be clear, de Soto can speak, and has long spoken, to audiences from the center-left—the “left neoliberals,” the Tony Blairs and Bill and Hillary Clintons of the world. Bill Clinton himself has called de Soto “the world’s greatest living economist”; yet there he also is, the recipient of equally fulsome praise from figures like George H.W. Bush and Ronald Reagan. I don’t argue that Bitcoin is unique, or uniquely destructive; on the contrary, part of the point is how easily and how transparently it fits into the rightward shift we see in so many places today.

Still, when you listen to de Soto’s speech, it’s remarkable to hear him say things like “Western imperialism did a good job” or ask whether “blockchain can save globalization”—and just as remarkable that nobody in the audience raises objections to this sort of thing. You expect that on the right, not the left.

But, welcome to blockchain.

So yes, the politics of blockchain should make you worried about Trump, and the people who support Trump, and the rightward shift of all electoral politics today.

Works Cited



“Neoliberalism” Has Two Meanings

The word “neoliberalism” comes up frequently in discussions on and of digital media and politics. Use of the term is frequently derided by actors across the political spectrum, especially but not only by those at whom it has been directed. (Nobody wants to be called a neoliberal, and everyone always denies it, much as everyone denies being a racist or a misogynist: it is today an analytical term applied by those who oppose what it names, although it has been used for self-identification in the past.) Sometimes the derision indicates genuine disagreement, but even more frequently it is part of an outright denial that there is any such thing as “neoliberalism,” or a claim that the meaning of the term is so fuzzy as to make its application pointless.

There are many causes for this, but one is fairly straightforward to address once it’s identified: neoliberalism has two meanings. Of course it has many more than two meanings, but it has two important, current, distinct, somewhat related meanings, and they get invoked in close enough proximity to each other as to sometimes cause serious confusion. (The correct title for this post should really be “‘Neoliberalism’ Has (at Least) Two Meanings,” but the simpler version sounds better.) The existence of these two meanings may even explain some of the denials that the term means anything at all.

Meaning 1: “Neoliberal” as a modifier of “liberal” in the largely recent political sense of US/UK party politics, where the opposite is “conservative.” In this sense, the term is typically applied to people even more than political programs or dogma. Examples of “neoliberals” in this sense: Tony Blair, Bill Clinton, (arguably) Barack Obama, (arguably) Hillary Clinton.

This version of “neoliberal” is meant to identify a tendency among the political left to accommodate policies, especially economic policies, associated with the right, while publicly proclaiming identification with the left. Sometimes this is called “left neoliberalism.”

The best recent piece I know on this version of neoliberalism is Corey Robin’s “The First Neoliberals,” Jacobin (April 28, 2016), which includes pointers to the (brief) time when the term was introduced by those who described themselves that way:

[Neoliberalism is] the name that a small group of journalists, intellectuals, and politicians on the Left gave to themselves in the late 1970s in order to register their distance from the traditional liberalism of the New Deal and the Great Society.

The original neoliberals included, among others, Michael Kinsley, Charles Peters, James Fallows, Nicholas Lemann, Bill Bradley, Bruce Babbitt, Gary Hart, and Paul Tsongas. Sometimes called “Atari Democrats,” these were the men — and they were almost all men — who helped to remake American liberalism into neoliberalism, culminating in the election of Bill Clinton in 1992.

These were the men who made Jonathan Chait what he is today. Chait, after all, would recoil in horror at the policies and programs of mid-century liberals like Walter Reuther or John Kenneth Galbraith or even Arthur Schlesinger, who claimed that “class conflict is essential if freedom is to be preserved, because it is the only barrier against class domination.” We know this because he so resolutely opposes the more tepid versions of that liberalism that we see in the Sanders campaign.

It’s precisely the distance between that lost world of twentieth century American labor-liberalism and contemporary liberals like Chait that the phrase “neoliberalism” is meant, in part, to register.

We can see that distance first declared, and declared most clearly, in Charles Peters’s famous “A Neoliberal’s Manifesto,” which Tim Barker reminded me of last night. Peters was the founder and editor of the Washington Monthly, and in many ways the eminence grise of the neoliberal movement.

It’s important to say that, while this usage of the term may well be the one that is frequently applied in social media, and it’s certainly the one that gets mentioned most often in the context of electoral politics (see, for example, Nina Illingworth’s hilarious “Neoliberal Wall of Shame”), it isn’t really the one that scholars tend to use.

This usage came up recently in the controversy surrounding the #WeAreTheLeft publicity campaign, in which critics from the left, with whom I generally agree in this regard, criticized the organizers of that campaign as neoliberals: see Jeff Kunzler, “Dear #WeAreTheLeft, You Are Not the Left: The Rot of Liberal White Supremacy,” Medium (July 13, 2016), and Meghan Murphy, “#WeAreTheLeft: The Day Identity Politics Killed Identity Politics,” Feminist Current (July 14, 2016). Not surprisingly, this critique was met with the characteristic denial that the word means anything.

Meaning 2: “Neoliberal” as a modifier of “liberal” in the economic sense of the word “liberal,” as used for example in “liberal trade policies.” Also understood as a modifier of the liberalism associated with philosophers like Locke and Mill, which is itself frequently taken to be largely economic in nature. (The source I like best on the relationship between economic liberalism and rightist political programs is Ishay Landa’s The Apprentice’s Sorcerer: Liberal Tradition and Fascism, Studies in Critical Social Science, 2012.)

This is a movement of the political right, not of the left; its opposite would be something like “protectionism” or “planned economies” or even “socialism,” although in caricatured senses of those terms. The lineage here begins with Hayek and von Mises and runs through the Mont Pelerin Society and Chicago School economics. Philip Mirowski is the go-to theorist of this movement. Examples: Hayek, Mises, self-identified right-wing “libertarians” like the Koch brothers, Murray Rothbard, Lew Rockwell, and Ron and Rand Paul, and also hard-right politicians like Reagan and Thatcher, Scott Walker and Ted Cruz. These figures are better understood not through “neoliberal” as a term of personal identification (that is, “Ted Cruz is a neoliberal” doesn’t really mean much), but as advocates of neoliberal policies or providers of foundational theory for them. In contrast to the first meaning, this is sometimes referred to as “right neoliberalism.”

This is what scholars like David Harvey, Wendy Brown, Aihwa Ong, Will Davies, and Philip Mirowski mean when they talk about neoliberalism, even if the details of their usages of the term differ slightly. It’s the usage most often employed by scholars across the board. It’s the meaning the Wikipedia entry on neoliberalism currently invokes, barely mentioning the other.

When Daniel Allington, Sarah Brouillette, and I wrote a piece called “Neoliberal Tools (and Archives): A Political History of the Digital Humanities” in the Los Angeles Review of Books in May, it was this meaning we had in mind.

Neoliberalism in this sense is often understood, not entirely inaccurately, as free-market fundamentalism. But as Mirowski in particular explains it (Davies is also very good on this), neoliberalism has just as much to do with taking the reins of state power so as to favor commercial interests while publicly disparaging the idea of governmental power (or at least of democratic control of state power). Although this is most clearly explained in his book Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown (Verso, 2013), the summary he provides in “The Thirteen Commandments of Neoliberalism,” The Utopian (June 19, 2013), explains a lot:

Although many secondhand purveyors of ideas on the right might wish to crow that “market freedom” promotes their own brand of religious righteousness, or maybe even the converse, it nonetheless debases comprehension to conflate the two by disparaging both as “fundamentalism”—a sneer unfortunately becoming commonplace on the left. It seems very neat and tidy to assert that neoliberals operate in a modus operandi on a par with religious fundamentalists: just slam The Road to Serfdom (or if you are really Low-to-No Church, Atlas Shrugged) on the table along with the King James Bible, and then profess to have unmediated personal access to the original true meaning of the only (two) book(s) you’ll ever need to read in your lifetime. Counterpoising morally confused evangelicals with the reality-based community may seem tempting to some; but it dulls serious thought. It may sometimes feel that a certain market-inflected personalized version of Salvation has become more prevalent in Western societies, but that turns out to be very far removed from the actual content of the neoliberal program.

Neoliberalism does not impart any dose of Old Time Religion. Not only is there no ur-text of neoliberalism; the neoliberals have not themselves opted to retreat into obscurantism, however much it may seem that some of their fellow travelers may have done so. You won’t often catch them wondering, “What Would Hayek Do?” Instead they developed an intricately linked set of overlapping propositions over time — from Ludwig Erhard’s “social market economy” to Herbert Giersch’s cosmopolitan individualism, from Milton Friedman’s “monetarism” to the rational-expectations hypothesis, from Hayek’s “spontaneous order” to James Buchanan’s constitutional order, from Gary Becker’s “human capital” to Steven Levitt’s “freakonomics,” from the Heartland Institute’s climate denialism to the American Enterprise Institute’s geo-engineering project, and, most appositely, from Hayek’s “socialist calculation controversy” to Chicago’s efficient-markets hypothesis. Along the way they have lightly sloughed off many prior classical liberal doctrines — for instance, opposition to corporate monopoly power as politically debilitating, or skepticism over strong intellectual property, or disparaging finance as an intrinsic source of macroeconomic disturbance — without coming clean on their reversals.

George Monbiot’s “Neoliberalism—The Ideology at the Root of All Our Problems,” The Guardian (April 15, 2016), offers an excellent if slightly less scholarly primer on the history of the term and the best-known instances of neoliberal policies, though he doesn’t include the crucial Mirowski/Davies insight about neoliberalism’s capture of state power:

As Naomi Klein documents in The Shock Doctrine, neoliberal theorists advocated the use of crises to impose unpopular policies while people were distracted: for example, in the aftermath of Pinochet’s coup, the Iraq war and Hurricane Katrina, which Friedman described as “an opportunity to radically reform the educational system” in New Orleans.

Where neoliberal policies cannot be imposed domestically, they are imposed internationally, through trade treaties incorporating “investor-state dispute settlement”: offshore tribunals in which corporations can press for the removal of social and environmental protections. When parliaments have voted to restrict sales of cigarettes, protect water supplies from mining companies, freeze energy bills or prevent pharmaceutical firms from ripping off the state, corporations have sued, often successfully. Democracy is reduced to theatre.

Monbiot also points out that this usage too comes from the promoters of the doctrine themselves: “In 1951, Friedman was happy to describe himself as a neoliberal. But soon after that, the term began to disappear. Stranger still, even as the ideology became crisper and the movement more coherent, the lost name was not replaced by any common alternative.” Mirowski and his colleagues explain this history in much more detail.

It’s important to note that, as Monbiot suggests and as Manuela Cadelli, President of the Magistrates’ Union of Belgium, says outright, “Neoliberalism Is a Form of Fascism” (French original). If one sees the corporate-state nexus as a critical component of fascism as a political-economic system, the connection is not at all hard to see (Landa’s book is the key source on this).


* * *

Is there a relationship between the two meanings? Arguably, the most obvious relationship is that the Mont Pelerin Society’s long-term plan to turn the entire country (and frankly the entire world) toward a rightist solidifying of corporate power can’t help but entail the capitulation of the left toward those goals. We certainly read and hear much less of the Clintons and Blair bashing democratic governance as an ideal, even though their actions have tended toward the Mont Pelerin program. No doubt, these are prongs of the same movement at some level, but they have very different profiles and effects in the world at large.

In a recent piece in Counterpunch, “The Time is Now: To Defeat Both Trump and Clintonian Neoliberalism” (July 19, 2016), Mark Lewis Taylor writes: “Trumpian authoritarianism and Clintonian neoliberalism are actually co-partners in a joint system of rule. Trump’s authoritarianism is often a hidden bitter fruit of Clintonian neoliberalism. Social movements for democracy must fight them both together.”

It is interesting to note that the term “neoconservative” as it is typically invoked (heavy reliance on military power to pursue what are largely economic objectives) is not, properly speaking, in opposition to either meaning of “neoliberal”; figures like G.W. Bush and Cheney easily fall under the second definition of neoliberal as well as the typical definition of neoconservative. Tony Blair looks a lot like a neoliberal in the first sense and also a neocon in the same Bush-Cheney sense, although perhaps slightly less self-starting (maybe).

Nothing I’m saying here is new. Most—though unfortunately not all—academic studies of neoliberalism note this issue, often using the right/left terminology. But critiques of the use of the word often appear to blur the two meanings, using the fact that some figures fall into one group or the other as a means of disqualifying use of the term altogether. And as Mirowski among others says, the argument that “neoliberalism does not exist” appears to do important work in solidifying the Mont Pelerin program.

When I write, I almost always intend the second meaning, but I recognize that I haven’t always been as clear about that as I might have been, even if I’ve been quoting Mirowski in the process. I plan to try to distinguish my uses of these meanings in the future, and I can’t help thinking it would be useful if more people did.


Code Is Not Speech

Brief version

Advocates understand the idea that “code is speech” to create an impenetrable legal shield around anything built of programming code. In doing so they misunderstand, or misrepresent, free speech law (and rights law in general, which rarely creates such impenetrable shields), the principles that underlie that law, and the ways those principles should and might apply to code. The idea that government cannot regulate things because they are made of code cannot be right. That principle not only lacks support in most theories of freedom of speech, but is actively rebutted in the very case law that advocates claim to be marshaling in favor of their position. Further, in promoting this position, advocates misrepresent—in a manner it is hard not to see as willful—the very nature of the programming code they care so much about.

In an excellent piece in MIT Technology Review, law professor Neil Richards goes to the foundations of this question, and wisely calls the view that “code is speech” a “fallacy,” a “fantasy,” a “mistake,” and “wrong.” As a general take on the question I think this is more correct than not, although there are details that need to be gone into at some length. Code can and does have speech-like aspects. But in general, code is much closer to action than it is to speech. The demand that governments not regulate code becomes a demand that governments not regulate action—that is, that governments not regulate at all. That would be less troubling if there were not widespread, repeated, and effective demands for that proposition by the world’s most powerful moneyed interests. Further, the major target of government regulation—corporations—are also the major users of code in the world today. Those corporations, including Apple, whose filings in the #AppleVsFBI case are the most recent instance prompting these discussions, have a deep vested interest in blunting the ability of governments to regulate.

While its relation to speech may certainly be an important part of many judicial and legislative actions regarding code, there is no general principle equating code and speech that can or should be relied on to structure those decisions. The cyberlibertarian understanding of “code is speech” contributes to a profoundly conservative assault on the rights of citizens, by depriving the state of the power to regulate and legislate against the corporations that exist only at the state’s pleasure in the first place. This is why “code is speech” has been so powerfully advocated for decades among crypto-anarchists and cypherpunks. Yet at least these groups are, for the most part, explicit about their desire to shrink governmental power and expand the power of capital. Today the view that “code is speech” is far more widespread, but it is no less noxious than the explicit crypto-anarchist doctrine. Yet civil rights were not created for corporations. They are for individual citizens. They are supposed to protect us against abuses of power, not license them. The fact that Apple is today trying to sell products that openly display their rebuke to government oversight should frighten anyone to the left of Murray Rothbard. That today’s “privacy activists” and “free speech advocates” promote Apple’s actions as a realization of civil rights is an incredibly clear sign of the power of cyberlibertarianism to turn well-understood principles and rights against themselves, claiming to stand for the “little guy” while in fact doing the opposite.

Full version

“Code Is Speech” is one of many articles of faith that make up the cypherpunk and/or crypto-anarchist creed. The acceptance and repetition of those articles of faith among so many, even those who claim not to be on board with the overall cypherpunk ethos, is one way of cashing out the notion of cyberlibertarianism. As with the other cypherpunk articles of faith, a profoundly ambiguous slogan is taken to be a black-and-white statement of principle with a simple and clear meaning. The problem is that it is no such thing.

That simple meaning, as I infer it, would go something like this:

  • The First Amendment to the US Constitution absolutely prohibits the US Government from regulating anything that can be said to be made of code: “Congress shall make no law.” Code is inherently the same kind of thing as political speech, and so the government may not legislate against or even regulate it. In particular, governments cannot legislate against the running of code.

This is how many advocates appear to understand the principle (e.g., the Electronic Frontier Foundation’s Executive Director in Time, EFF on its own blog, and a prominent “digital rights” activist/computer scientist). When I refer to “code is speech” in what follows, this is what I mean. By writing “code is not speech” I don’t mean to imply, as I make clear below, that code never has any speech-like features or should never be seen in that context by courts; I mean only that the above proposition is far more wrong than it is right. It is also deeply misleading and ultimately very destructive to democracy, and that should be no surprise: those who have been pushing this perspective the longest, and who arguably developed it, make no secret of their contempt for democratic governance.

The most recent invocation of “code is speech,” and a particularly telling one, is found in Apple’s February 25 motion to vacate the court order in the #AppleVsFBI case:

Under well-settled law, computer code is treated as speech within the meaning of the First Amendment. See, e.g., Universal City Studios, Inc. v. Corley, 273 F.3d 429, 449 (2d Cir. 2001); Junger v. Daley, 209 F.3d 481, 485 (6th Cir. 2000); 321 Studios v. Metro Goldwyn Mayer Studios, Inc., 307 F. Supp. 2d 1085, 1099–1100 (N.D. Cal. 2004); United States v. Elcom Ltd., 203 F. Supp. 2d 1111, 1126 (N.D. Cal. 2002); Bernstein v. Dep’t of State, 922 F. Supp. 1426, 1436 (N.D. Cal. 1996).

The Supreme Court has made clear that where, as here, the government seeks to compel speech, such action triggers First Amendment protections. (section B1; page 32)

The problems with this statement are surprisingly numerous given how brief it is. The case law is by no means “settled,” for any number of reasons; to the degree that these cases address specific issues under the “code is speech” penumbra at all, those issues are not the ones Apple’s argument requires; and finally, courts have absolutely not made clear what Apple alleges they have, even in the narrow sense Apple claims—to the contrary, they have tended to hold the opposite position. In its filing Apple takes the general principle that “code is speech,” applies it in a new and legally unprecedented fashion, and then Apple and its supporters mock those who find any problems with the argument. This is typical of the imprecise and black-and-white way that digital enthusiasts think about “code is speech” (and other rights questions, for that matter). But it is both misleading and destructive.

I’ll explain why in four parts: 1) the law is not settled in any sense, but represents a patchwork of cases heard in various courts taking up different aspects of a complex issue, several of them remarkably inconclusive, and none of them taking head-on the issue identified above as the main understanding of “code is speech”; 2) government can and does, in fact, regulate speech; 3) in general, it is clear that across the board, code is not speech in the same sense that ordinary speech (or even expressive works of art and other media) is speech, because the primary purpose of code is to take action, and action, speaking generally, is exactly what government can and should regulate; 4) the specific argument Apple makes is entirely novel, demonstrating the deep ambiguity of the phrase “code is speech,” and rests on much shakier legal ground than do some of the other claims under that heading. This overreaching reinterpretation is typical of the ways digital evangelists try to turn legal principles against themselves to their own advantage. Finally, I’ll briefly explain why “code is speech” diminishes civil rights rather than strengthening them.

i. The Law Is Not Settled

First, and simplest, despite what Apple and EFF and other advocates say, the law about the relationship of code and speech is not settled. It is not true that the cases that have been decided so far add up to a clear articulation of principle, because they approach small parts of the problem from disparate angles; the most central elements of that principle have never been addressed head-on. The issue raises fundamental questions about the nature of the First Amendment in its application to a new form of technology. Such questions can only approach being “settled” if and when the Supreme Court issues a ruling on a case, or at least leaves a lower court ruling in place by denying certiorari on a petition for appeal. That has never happened in any of these cases. Only two cases that address aspects of this question at all have reached the Supreme Court: Brown and Sorrell. Neither of these cases takes the question head-on, and for good reason. Apple doesn’t even include them in its litany of cases. (See Appendix 1 for summaries of what each of the cases says.)

The lower court cases, some of which get closer to the general question of code and speech, cannot possibly be said to have settled the law. For example, Bernstein v. DOJ—the case that even EFF claims “established” the view that “code is speech,” and arguably the case that took up the general question of code and its relation to speech most directly—was vacated during the appeals process when the government decided not to enforce the relevant regulations due to changing facts: it is no longer considered valid precedent at all. Further, Apple refers to a lower court ruling in Bernstein v. Dept. of State in 1996 (as does EFF), but the later appellate rulings do not fully support the reasoning the lower court offers, and raise serious questions about the core claims advocates extract from the cases (see Appendix 2). Finally, the cases all address different aspects of the relationship of code to speech, and several of them, to the extent they try to extract a general principle from the issue, actually argue against what “code is speech” advocates want the cases to say. A more accurate assessment of the legal landscape in the US would be: no court has ever issued a general opinion that could lead to the principle that “code is speech.” In Appendix 1 I step through each of the cases Apple cites, as well as the other relevant cases, and show in detail that they do not touch the core question, and that they do not settle many of the open legal questions, let alone the core question of the relationship of code to speech.

The relationship of code and speech needs to be examined thoroughly, in detail, with experts on all aspects of that relationship weighing in—not exclusively tech industry lawyers, engineers, corporate lobbyists and advocates with a vested and one-sided interest in magnifying their own power and the power of those they represent. The blanket belief that “code is speech” leads to rank overextensions of rights talk like the view that “data is speech” (Bambauer 2014), or that Google’s search engine results deserve First Amendment protection (Volokh and Falk 2012, ably dissected in Grimmelmann 2014 and Pasquale 2012a, 2012b); it also fuels the truly disturbing First Amendment shield Google has tried to create around the EU “right to be forgotten” ruling. It is possible and desirable to think carefully about what the principles of freedom of the press and freedom of speech are supposed to mean and how those mesh with new technologies; Richards (2016) and Tutt (2012) give good overviews of some of the issues there, and the great First Amendment scholar Jack Balkin has been thinking about these questions for several decades (see e.g. Balkin 2004). But there is no “settled law” whatsoever about what that relationship should be.

ii. Government Can and Does Regulate Speech

Even if code were speech across the board—which it is not—it simply is not the case that the First Amendment means, in US law, or for that matter, freedom of speech laws elsewhere, that “if it’s speech, the government can’t pass laws about it.” Freedom of speech is usually taken as a prohibition on censorship or “prior restraint” on bona fide speech, of which the core exemplar is explicit political speech. Richards is particularly good on this topic. Yet what is so often overlooked in these discussions is that the First Amendment does not create a blanket prohibition on censorship; rather, actually settled case law has taken it for over half a century (including the period when most of the significant First Amendment case law has been developed) to mean that, in various contexts, different tests must be applied to determine whether or not a given law does or does not violate the First Amendment. These are typically framed in terms of levels of scrutiny, where scrutiny refers to the burden required for the government to overcome First Amendment protections. Scrutiny tests are used not just for the First Amendment, but in many circumstances where fundamental rights are implicated.

For example: it’s long been settled that “content-neutral” prohibitions on speech—that is, laws that prohibit any speech, regardless of content, in certain venues or at certain times—encounter a lower level of scrutiny than would censorship of political editorials. In general, for the government to censor a political editorial, it must meet the test of “strict scrutiny,” meaning that if the government can demonstrate a “compelling government interest” in censoring the content (and meet a few other criteria), it may be found constitutional. In general, for the government to issue content-neutral speech restrictions—for example, when a local government issues noise limits for certain periods of time, or declares certain parts of the city off-limits for protest—the standard is much lower, “intermediate scrutiny.” These “scrutiny” tests are all over First Amendment law and other rights law; they have become the standard way of mediating between rights interests and government power since at least the 1960s, in part due to considerations arising from the Equal Protection Clause of the 14th Amendment (for a more detailed account see Siegel 2006). Simply saying that something is covered by the First Amendment does not mean that government is unable to regulate it. For kinds of speech that are less central, lower tests apply—“intermediate scrutiny,” and in the least central instances, the “rational basis test.” I’ve added more information about scrutiny tests, including the fact that they are a routine part of legal education taught in introductory Constitutional Law classes (Appendix 5).

One doesn’t have to look far to see the court system making this abundantly clear, even in the very cases Apple and others cite as if they support their position. This topic is addressed with particular force in the lower court ruling in Universal Studios v. Corley, in a part of the decision that was not overturned by subsequent rulings of the Appeals Court:

All modes of expression are covered by the First Amendment in the sense that the constitutionality of their “regulation must be determined by reference to First Amendment doctrine and analysis.” Regulation of different categories of expression, however, is subject to varying levels of judicial scrutiny. Thus, to say that a particular form of expression is “protected” by the First Amendment means that the constitutionality of any regulation of it must be measured by reference to the First Amendment. In some circumstances, however, the phrase connotes also that the standard for measurement is the most exacting level available. (Universal Studios v Reimerdes at IIIa [14])

In both the Universal Studios v. Reimerdes and Universal Studios v. Corley decisions the judges go to great lengths to indicate the levels of scrutiny that various kinds of speech regulation require. They repeatedly reject the opinion that simply labeling something “speech” means government must keep its hands off. Further, it’s clear, as Neil Richards says, that the general trend of these tests is to put political speech by individuals at the core, demanding the highest level of scrutiny to justify legislation, with other forms of speech seen in relation to it.

Apple’s filing in #AppleVsFBI overlooks this fact entirely—Apple writes as if the fact that “code is speech” simply blocks all action by the government. It doesn’t. Even if a court were to apply the strict scrutiny test to a case like this one, it’s entirely plausible that compelling Apple to “speak” by writing some computer code, under a legal warrant, in the service of the investigation of a crime in which many people were killed and many more were injured, would properly be viewed as serving a “compelling government interest” through a “narrowly tailored measure.” Although it is fashionable to dismiss everything the government says in this case and others like it, it’s notable that the Department of Justice makes exactly this point in its own filing (Appendix 4).

iii. In General, Code Is More Properly Viewed as Action than as Speech

This is truly the heart of the matter. Advocates love to gloss over the fact that the cases mentioned here look at one or another aspect of code, but actively reject the wholesale equation of code and speech. The reasons for this are obvious. Code clearly does have some speech-like qualities. In certain limited contexts, code can be used to express ideas between people, whether through the archetypal example of “Perl poetry” or the more prosaic argument that publication of Daniel Bernstein’s Snuffle code was meant to convey the idea of his encryption algorithm to other programmers. This is what advocates love to harp on, and some at times talk as if the fact that code can have expressive features itself triggers full First Amendment protections for any and all code. This is, simply, false.
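To be concrete about the kernel of truth here, consider a toy example of my own devising (not Bernstein’s code, and drawn from no real case): a few lines of valid Python whose entire point is the idea they convey to a human reader, not anything useful they compute.

```python
# Code functioning expressively, in the spirit of "Perl poetry": this runs,
# but its purpose is communicative. Reading it conveys a claim about
# surveillance and chilling effects; executing it merely filters a list.
speech = ["dissent", "satire", "reporting", "dissent"]

def surveilled(utterances):
    """What survives when every word is watched."""
    return [u for u in utterances if u != "dissent"]

print(surveilled(speech))   # ['satire', 'reporting']
```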

The digital revolution is characterized by fallacious arguments that take the form “completely different and exactly the same”: a phenomenon is exactly the same as some existing phenomenon X, so critical objections to it are invalid; and the phenomenon is completely different from X, which is why consumers would want it and developers will build and sell it. This is particularly true with regard to code. Code is one of the most explosively new phenomena society has ever encountered, particularly in the past 50 years. It has literally transformed enormous parts of the world. It is deeply connected to existing phenomena (especially formal logic and mathematics), but it is different from all of them. Code is why computers exist. Code is one of the main reasons that computers are remarkable, one of the main reasons we have a “digital revolution.” That is why it is incredibly disingenuous and self-serving that the most vigorous proponents of the transformative power of the digital—and in this sense of code—are the very same people telling us that code is so much like “language” (no doubt because it is largely made of language and language-like symbols) that the law must treat it as if it were language.

This is most easily seen when we think about the main use to which code is put. The reason we have and pay so much attention to code is because it is executed. Execution is not primarily a form of communication, not even between humans and machines. Execution is the running of instructions: it is the direct carrying out of actions. It is adding together two numbers, or multiplying ten of them, or looping back to perform an operation again, or incrementing a value. This is what code does. Lots of code does not even look like language at all—it looks, if anything, like giant arrays of numbers that mean very little to anyone but the most highly-trained programmer. All of it executes, or at least can be executed. That is what it is for. That is what it does. That’s what makes it different from lots of other things, like most language and expressive media, all of whose primary function is to convey thoughts and ideas and feelings between persons.
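The point is easy to exhibit. In the sketch below (the account balance is of course invented), the very same string of characters is first displayed, which is communication, and then executed, which is action: it changes a state of affairs rather than conveying an idea about one.

```python
# One artifact, two uses. As text, the program can be read, quoted, and
# published: the speech-like aspect. Passed to exec(), it acts on the
# state of the running system: the aspect governments regulate.
program = "balance = balance - 100"

print(program)   # expression: conveys the idea of the instruction

balance = 500
exec(program)    # execution: actually carries the instruction out
print(balance)   # 400: an effect, not a statement about one
```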

Code can and often does serve these expressive functions. But this fact has been used, often very cynically, by digital advocates to push the legal system into misapprehending expression as the primary purpose of code, which it is not. Then, decisions based in part or in whole on these expressive functions are taken as “proof” of some sort that code cannot be regulated. This is cynical because the advocates know that what they mean by code is not the freedom to express ideas with code, but the freedom to run it. Thus the courts hear “code is speech” as a doctrine about the fact that code can be used to express ideas between people, and that it therefore can’t be restricted on First Amendment grounds. Engineers and the EFF and others hear these decisions as resulting in a dictum that the government cannot restrict the running of programs because running programs is like speaking. This is just, as near as one can be about legal matters, false on its face. And it results in abhorrent doctrine: it says that as long as a corporate actor takes action using programming code, the government cannot restrict it because of the First Amendment.

It’s important to note that two of the cases most often cited in this context, Bernstein and Universal Studios, both make this point. Bernstein explicitly restricts itself to the publication of code because code in execution is so different from ordinary forms of expression, and the dissent in Bernstein does not even accept this (Appendix 2); the court in Universal Studios considers at length “the functional aspects of code”—that is, its use in execution—and considers them to raise different questions from its expressive characteristics (Appendix 3).

iv. Apple’s Argument Expands the Idea of “Code Is Speech” in an Unprecedented and Antidemocratic Manner

Apple claims that “code is speech” is settled law; it isn’t. It then claims that “code is speech” prohibits the government from compelling Apple to write code, because that would be compelling it to speak. But that assertion runs up against something that actually is settled law: the government can compel corporations to speak, and treats them much differently from individuals in doing so (see “Compelled Speech”). In its filings and press releases, Apple refers to itself as if it were a natural person. Natural persons have many more First Amendment (and other Bill of Rights) protections than do corporations, although the boundaries between the two are more porous and murky than they should be (see Greenwood’s essays, and Pollman 2011). Many lawyers and legal scholars (and even dissenting Supreme Court justices) feel that the leaps of logic that come to fruition in Citizens United and Hobby Lobby—that corporations are people, that money spent (by corporations) is speech, and that corporations can have religious beliefs—depend on truly untenable extensions of constitutional rights to corporations. But even there, the Court has not ruled that corporations have exactly the same rights under the First Amendment that natural persons do.

Further, Apple nowhere mentions that the question of corporate compelled speech is one that the Supreme Court has adjudicated, largely against the position it takes. Corporations are artificial entities that only exist at the pleasure and license of the government. In the very act of incorporation, corporations agree to participate in certain aspects of law and regulation that natural persons do not have to.

Corporations’ speech can be both compelled and restrained in ways that individuals’ speech cannot. For example, corporations are not allowed to promulgate false advertising. This differs from how false speech by individuals is handled, through after-the-fact suits for libel and slander: the FTC is allowed to censor advertisements it deems to be false, along with other tools at its disposal. In addition, the Court has long ruled that the FDA and other regulatory bodies can demand that products be labeled for potential harm to human life (e.g., poison labels on pesticides, warning labels on pharmaceuticals and tobacco products), for informational purposes related only generally to health (nutrition information on food), and even for purposes of general engineering safety and operability (such as the engineering labels that Apple itself includes inside its products and in the literature surrounding them—in other words, Apple is already being “compelled” to speak millions of times each day, much more publicly than it would have been in #AppleVsFBI, and does not complain about it).

Apple’s public portrayal of itself as a natural person whose rights to speak are being infringed by the government—and the way advocates unquestioningly repeat this representation—shows the danger of accepting “code is speech” as a general principle. Its effect is not at all what it appears to be. That makes sense: the speech of individuals is already protected at a very profound level by First Amendment jurisprudence, and if and when it appears that an individual is “speaking” through code—as at least the majority opinion in Bernstein found—courts have adequate resources to confront that issue. Precisely because “code is speech” is so vague on the one hand, and so badly grounded in actual free speech doctrine on the other, it leads to objectionable restatements of legal principle that are easily exploited by those who have the strongest vested interest in controlling how governments regulate power.

v. “Code Is Speech” Diminishes Civil Rights

Richards, Goodman, and others (Boston Globe, Fein and Gifford) have done good work in nailing this down, so I’m not going to belabor it, except to make a relatively obvious point that is repeatedly echoed in the case law. Most code is action. A huge amount of code is promulgated by corporations, not individuals. The effect of embracing “code is speech” is to say that governments cannot regulate what corporations do. That might seem like hyperbole, but it is 100% on board with the Silicon Valley view of the world, the overt anarcho-capitalism that many of its leaders embrace, and the covert cyberlibertarianism that so many more accept without fully understanding its consequences. It is profoundly anti-democratic. This is part of what makes it so confounding that so many see Apple as some kind of civil rights actor, especially when its avowed mission is to sell products that block the serving of legal, targeted warrants, and when it already makes such outrageous statements regarding the corporate taxes it clearly would owe if not for the existing successful capture of regulatory and legislative bodies that enable the kinds of corporate inversions and other tax dodges we see on display in the Panama leaks, among many other places.

The most tireless legal scholar on this point is Hofstra professor Daniel J. H. Greenwood, whose works, cited in the bibliography below, I cannot recommend enough. Greenwood (1998, 2005, 2013; Greenfield, Greenwood and Jaffe 2007) has long argued that “nothing in the structure or language of the bill of rights suggests that the traditional rights of American citizens apply to corporations” (Greenwood 2013, 14). Coates (2015), Miller (2011), Pollman (2011), and quite a few others have worked on it as well. To most scholars who don’t have a vested financial interest in the success of one company or another (e.g. those who don’t work directly for corporations or for corporate-funded think tanks), the encroachment of corporations into rights language has been one of the signal failures of US democracy. This is not to say corporations never should have any rights, or even that the notion of “artificial person” is entirely bankrupt (it seems to do important work, for example, in the rights of both corporations and natural persons to act as equal parties in contracts and to use the civil courts to adjudicate breaches of contract), but that the general expansion of corporate personhood and identification of corporations as the locus of constitutional rights is among the most significant dangers facing democratic governance today.

Consider this truly jarring statement regarding #AppleVsFBI from EFF’s Executive Director, Cindy Cohn:

The Supreme Court has rejected requirements that people put “Live Free or Die” on their license plates or sign loyalty oaths, and it has said that the government cannot compel a private parade to include views that organizers disagree with. That the signature and code in the Apple case are implemented via technology and computer languages rather than English makes no difference. For nearly 20 years in cases pioneered by EFF, the courts have recognized that writing computer code is protected by the First Amendment.

EFF is mostly staffed by lawyers. Cohn is an attorney who has a long pre-EFF history of working for civil and human rights—and actually worked on Bernstein, which EFF persists in mischaracterizing in several critical ways. Yet rather than making clear that she is advancing a truly novel argument about a corporation being compelled to speak—or, to be much more honest, to take action—she purposely blurs the lines between code as action and code as speech, and between individuals and corporations. She writes that “the FBI should not force Apple to violate its beliefs,” but the only Supreme Court decision that even suggests that corporations have beliefs is the horribly right-wing 2014 Hobby Lobby decision, which nobody outside of far-right ideologues should endorse, and which itself depends on the fact that Hobby Lobby is a family-owned private corporation, not a public company like Apple. It is fine to endorse this view, I suppose, but to frame it in terms of loyalty oaths is really dirty pool. This is right-wing politicking of the highest order, demanding that corporations be extended the full panoply of rights that the framers, and almost all non-technology and non-right-wing thinkers, have always thought apply only to natural persons. That it can somehow be mounted in terms of “human rights” and “freedom” is really shocking. As a principle, “code is speech” does not represent a natural extension of rights, but rather a significant curtailing of rights, because it puts ordinary actions outside the penumbra of legal regulation. Hopefully, should the matter ever be fully adjudicated by the Supreme Court, sanity will prevail (which is obviously asking a lot), and this will be made as clear as it should be.

[Image: “code is speech.” Source: ShutterStock/Mclek via canmua.net]

Bibliography

Appendices

APPENDIX 1: The Cases Do Not Add Up to “Settled Law”

US Court Cases Cited By Apple in Its #AppleVsFBI Motions

  • Universal City Studios, Inc. v. Corley, 273 F.3d 429, 449 (2d Cir. 2001); this case is sometimes referred to as Universal Studios v Reimerdes; the Corley name got attached only on the 2001 appeal. This case is different from the encryption cases, because it asks whether the First Amendment prohibits the government from regulating the distribution of a software program designed to circumvent legal copyright protections (under the DMCA). Unlike some of the other cases, though, the decision does not ultimately turn on the First Amendment question. Although advocates use it as if it does, in fact the language in this string of cases, especially on appeal, rebuts both the legal and factual arguments about “code is speech” much more than it supports them. Links: Wikipedia; decisions: Corley (appeal), Reimerdes (lower court).
  • Junger v. Daley, 209 F.3d 481, 485 (6th Cir. 2000); this case builds largely on the Bernstein rulings that were eventually overruled and then vacated. Further, like so many of the cases mentioned here, language in the decision rebuts rather than supports the expansive reading of “code is speech.” This case is also about the publication (and execution?) of encryption software. Links: Wikipedia; case archive.
  • 321 Studios v. Metro Goldwyn Mayer Studios, Inc., 307 F. Supp. 2d 1085, 1099–1100 (N.D. Cal. 2004); another case (like Universal Studios) about the DMCA, and another case that rebuts rather than supports the general “code is speech” equivalence: as Wikipedia puts it, the court “did not agree that enforcing the DMCA in this case would regulate computer code on the basis of content. The court held that only the functional element of the computer code was barred, and so the DMCA did not suppress the code based on its content. As such, the court applied an intermediate scrutiny standard in evaluating the restriction of speech in this case.” And it’s important to add: 321 Studios, the litigant who made the First Amendment argument, lost the case: “The court held that both of DVD Copy Plus and DVD-X Copy violated the DMCA and that the DMCA was not unconstitutional. The court enjoined 321 Studios from manufacturing, distributing, or otherwise trafficking in any type of DVD circumvention software.” Links: Wikipedia.
  • United States v. Elcom Ltd., 203 F. Supp. 2d 1111, 1126 (N.D. Cal. 2002); this case may be the biggest stretch of all those listed. The case was a criminal prosecution under the DMCA of software developer Dmitry Sklyarov and his employer, ElcomSoft; the charges against Sklyarov were dropped and ElcomSoft was acquitted by the jury, so no appeal was possible. There is no appellate decision to refer to—only a pretrial district court ruling, which (as the DoJ quotation in Appendix 4 below shows) rebuts rather than supports the expansive “code is speech” reading—and it is not at all clear that the case has anything to do with the First Amendment. Links: Wikipedia.
  • Bernstein v. Dep’t of State, 922 F. Supp. 1426, 1436 (N.D. Cal. 1996): this case is more properly referred to as Bernstein v. U.S. Dept. of Justice, 176 F.3d 1132 (9th Cir. 1999), because “the Ninth Circuit ordered that this case be reheard by the en banc court, and withdrew the three-judge panel opinion.” It’s no accident that Apple cites the 1996 decision, as do advocates like EFF, because that decision says something about “code is speech” that is superseded in the 1999 appeals decision overruling the 1996 decision (see Appendix 2). Further, this entire stream of cases was vacated in 2003 and was never fully tested in the courts; it has no binding force: “On October 15, 2003, almost nine years after Bernstein first brought the case, the judge dismissed it and asked Bernstein to come back when the government made a ‘concrete threat.’” Although Bernstein is the case “code is speech” advocates cite most routinely as if it supports their position (it does not), it is the least like settled law of them all. Links: Wikipedia; EFF archive; Bernstein’s archive.

Relevant US Supreme Court Cases Not Cited By Apple in Its #AppleVsFBI Motions

  • Sorrell v. IMS Health Inc., No. 10-779, 131 S. Ct. 2653 (2011). Apple does not cite this case, which is odd, since it is the only one to have advanced to the Supreme Court, the only one to have turned directly on First Amendment questions, and the only one that could plausibly be described as “settled law.” Of course this case is not directly about code: it is about whether governments can prevent corporations from selling data they have collected. Like Citizens United, the majority opinion was written by the right wing of the Court, and along with that case is frequently described by everyone to their left as a truly disturbing extension of First Amendment rights to corporations. Links: Wikipedia; decision.
  • Brown v. Entertainment Merchants Ass’n, No. 08-1448, 564 U.S. 786 (2011). The other Supreme Court case that appears to touch on code as speech. In Brown, the Supreme Court ruled 7-2 that video games deserve the same protections as any other form of cultural expression. Andrew Tutt summarizes the holding: “As a threshold matter, the Court had to decide whether video games were speech. Rather than reach beyond video games to software generally, the Court zeroed in on video games and held that they were speech because they communicated ideas through familiar literary devices. The Court reasoned that video games were speech because they expressed ideas in familiar ways: ‘Like the protected books, plays, and movies that preceded them, video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world).’” This sidesteps rather than addresses the main code-as-speech question: it is hard to argue that cultural products made with computer code are any different from other cultural products, so they deserve identical protection. The case is clearly about things built with code, rather than code itself; Tutt is particularly good on the fundamental differences between the two. Links: Wikipedia; decision.
  • Citizens United v. Federal Election Commission, No. 08-205, 558 U.S. 310 (2010). The case in which the Supreme Court famously ruled that laws constraining campaign-related expenditures, even when those expenditures were made by corporations, violated the First Amendment. Relevant to this case because even the ACLU sided with the majority view here, which is typically summarized as “money is speech.” Citizens United extends, and to some extent depends upon, the expansion of First Amendment rights to entities other than natural persons (see e.g. Park 2014 for a catalog of some of the other relevant cases), but even some reasonable First Amendment advocates who agree with the basic thrust of the decision hold that the Court did much more damage than necessary to the Constitutional fabric through “opportunistic overreach”: “In Citizens United, the Court was presented with a narrow question about the constitutionality of campaign finance rules as applied to a nonprofit’s on-demand video, but it transformed the case into an opportunity to rule with a broad brush, putting essentially all future regulation of campaign finance in conspicuous jeopardy” (Tribe 2015, 476-7). Links: Wikipedia; decision.
  • Burwell v. Hobby Lobby, 573 U.S. ___ (2014). In some ways even more than Citizens United, this is the case that should disturb civil liberties advocates with regard to the “code is speech” claim. In Hobby Lobby the Court decided that corporations—although, in this case, a very specific kind of corporation that is closely held by a family—can have the kind of religious beliefs that the First Amendment was intended to protect under the free exercise clause. Almost all commentators, even some who support Citizens United, feel that in Hobby Lobby the Court went far beyond the question put to it in the case. As one prominent legal commentator puts it, the “unfounded claims” made by the majority in Hobby Lobby “disregarded the fundamental feature of state corporate law: separation of ownership from the entity”: “Far from being ‘quite beside the point,’ legal separateness is the point of creating a corporation” (Garrett 2014, 145-6). In her dissent, Justice Ruth Bader Ginsburg wrote: “the exercise of religion is characteristic of natural persons, not artificial legal entities” (Hobby Lobby at 2794, Ginsburg, J., dissenting). It’s notable how much the “code is speech” defense mounted by Apple portrays corporations as natural persons, endowed with the kinds of beliefs, attitudes, and rights that many outside the far right think attach only to actual human beings. Links: Wikipedia; slip opinion.

APPENDIX 2: Bernstein Explicitly Restricts Itself to the Publication and Expressive Features of Code

The part of a lower court ruling EFF and others love to cite, but that is superseded by appeals:

This court can find no meaningful difference between computer language, particularly high-level languages as defined above, and German or French….Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it…
-Judge Patel, April 15, 1996

This seems mistaken for many reasons, among them that it announces an entirely new principle—that music, mathematical equations, and code are all simply “language”—which is obviously false. It also relies on the completely false equivalence between “programming languages” and “[human] languages,” another characteristic digital fallacy that requires separate treatment; if you want some sense of why this is wrong, ask a linguist.

From the much more limited appeals ruling by Betty Fletcher:

Cryptographers use source code to express their scientific ideas in much the same way that mathematicians use equations or economists use graphs. Of course, both mathematical equations and graphs are used in other fields for many purposes, not all of which are expressive. But mathematicians and economists have adopted these modes of expression in order to facilitate the precise and rigorous expression of complex scientific ideas. Similarly, the undisputed record here makes it clear that cryptographers utilize source code in the same fashion. (Fletcher 1999 at 4233)

Judge Nelson’s dissent is even stronger; I think it is correct, and I hope that any future rulings, especially by the Supreme Court, follow its reasoning. Nelson writes that he is

inevitably led to conclude that encryption source code is more like conduct than speech. Encryption source code is a building tool. Academics and computer programmers can convey this source code to each other in order to reveal the encryption machine they have built. But, the ultimate purpose of encryption code is, as its name suggests, to perform the function of encrypting messages. Thus, while encryption source code may occasionally be used in an expressive manner, it is inherently a functional device. (Nelson, dissent, 4245-6)

APPENDIX 3: Both Universal Studios Rulings Say Unambiguously that Code Can Be Regulated

Universal Studios v Reimerdes:

These considerations suggest that the DMCA as applied here is content neutral, a view that draws support also from City of Renton v. Playtime Theatres, Inc. The Supreme Court there upheld against a First Amendment challenge a zoning ordinance that prohibited adult movie theaters within 1,000 feet of a residential, church or park zone or within one mile of a school. Recognizing that the ordinance did “not appear to fit neatly into either the ‘content-based’ or the ‘content-neutral’ category,” it found dispositive the fact that the ordinance was justified without reference to the content of the regulated speech in that the concern of the municipality had been with the secondary effects of the presence of adult theaters, not with the particular content of the speech that takes place in them. As Congress’ concerns in enacting the anti-trafficking provision of the DMCA were to suppress copyright piracy and infringement and to promote the availability of copyrighted works in digital form, and not to regulate the expression of ideas that might be inherent in particular anti-circumvention devices or technology, this provision of the statute properly is viewed as content neutral. (at 21)

And further,

This analysis finds substantial support in the principal case relied upon by defendants, Junger v. Daley. The plaintiff in that case challenged on First Amendment grounds an Export Administration regulation that barred the export of computer encryption software, arguing that the software was expressive and that the regulation therefore was unconstitutional. The Sixth Circuit acknowledged the expressive nature of computer code, holding that it therefore was within the scope of the First Amendment. But it recognized also that computer code is functional as well and said that “[t]he functional capabilities of source code, particularly those of encryption source code, should be considered when analyzing the governmental interest in regulating the exchange of this form of speech.” Indeed, it went on to indicate that the pertinent standard of review was that established in United States v. O’Brien, the seminal speech-versus-conduct decision. Thus, rather than holding the challenged regulation unconstitutional on the theory that the expressive aspect of source code immunized it from regulation, the court remanded the case to the district court to determine whether the O’Brien standard was met in view of the functional aspect of code. (at 24)

APPENDIX 4: DoJ Response to Apple’s #AppleVsFBI Filing

On the difference between corporate & individual speech, & quoting two of the exact cases Apple cites to show that they do not support its argument:

Apple’s claim is particularly weak because it does not involve a person being compelled to speak publicly, but a for-profit corporation being asked to modify commercial software that will be seen only by Apple. There is reason to doubt that functional programming is even entitled to traditional speech protections. See, e.g., Universal City Studios, Inc. v. Corley, 273 F.3d 429, 454 (2d Cir. 2001) (recognizing that source code’s “functional capability is not speech within the meaning of the First Amendment”). “[T]hat [programming] occurs at some level through expression does not elevate all such conduct to the highest levels of First Amendment protection. Doing so would turn centuries of our law and legal tradition on its head, eviscerating the carefully crafted balance between free speech and permissible government regulation.” United States v. Elcom Ltd., 203 F. Supp. 2d 1111, 1128-29 (N.D. Cal. 2002). (US DoJ 2016, p32)

On the question of which scrutiny standard should apply:

Even if, despite the above, the Order placed some burden on Apple’s ability to market itself as hostile to government searches, that would not establish a First Amendment violation because the Order “promotes a substantial government interest that would [otherwise] be achieved less effectively.” Rumsfeld, 547 U.S. at 67. There is no question that searching a terrorist’s phone—for which a neutral magistrate has found probable cause—is a compelling government interest. See Branzburg v. Hayes, 408 U.S. 665, 700 (1972) (recognizing that “the investigation of a crime” and “securing the safety” of citizens are “fundamental” interests for First Amendment purposes). As set forth above, the FBI cannot search Farook’s iPhone without Apple’s assistance, and Apple has offered no less speech-burdensome manner for providing that assistance.

For all of these reasons, Apple’s First Amendment claim must fail. (US DoJ 2016, p34)

APPENDIX 5: Scrutiny Tests in Constitutional Jurisprudence

One of several principles involved in #AppleVsFBI that, unlike “code is speech,” deserves the label “settled law” is the application of so-called “scrutiny tests” to cases involving possible governmental infringement of fundamental civil rights. While scrutiny is most familiarly applied in First Amendment cases, it is used across the board in a wide variety of rights cases, and is frequently associated with the Equal Protection Clause of the 14th Amendment (though its connection to that clause is not entirely clear; see Siegel 2006 for a thorough analysis).

To show how basic scrutiny tests are to US rights law, here’s a characteristic passage from a Con Law textbook I was able to find online:

The determination that “speech” is involved is just the beginning. It means that the case will be decided under the First Amendment. However, it does not guarantee the outcome. The right to speak is not absolute. A society in which the government was powerless to restrain citizens from speaking at any time or place, on any subject, however loudly they pleased, would be an insufferable place to live. The First Amendment does not strip the government of power to regulate speech; it prohibits the government from “abridging freedom of speech.” Deciding when a restriction “abridges freedom of speech” is what First Amendment jurisprudence is about; this determination calls for complex value judgments. (Kanovitz 2010, 45; my emphasis)

These value judgments are made through the use of a well-established concept that one never encounters in pro-“code is speech” evangelism: “scrutiny.” Scrutiny is also part of what Con Law students learn in their first year. It refers to the kinds of tests courts must apply to determine whether a law or regulation is allowable. The closer a speech act is to the core case—core political speech, such as an editorial endorsing a political candidate or advocating the passage of a law—the higher the level of scrutiny that the courts must apply. This highest level is called “strict scrutiny”; in these cases, the government must show a “compelling governmental interest” in the goal of the regulation, and that the regulation is “narrowly tailored” to accomplish this goal. Even in cases of core political speech, regulations that meet the “strict scrutiny” tests have passed and will continue to pass Supreme Court muster. The most familiar example is the content-based prohibition on political speech within a certain distance of polling places on election day. Remember, that is an example of the most important kind of speech on anyone’s philosophy, and yet courts have routinely ruled that, within these very limited contexts, actual prohibition of speech is fully legal.

Here’s material from a Con Law resource at the University of Missouri-Kansas that goes into some detail about the varying application of scrutiny tests with regard to the Equal Protection Clause:

Legislation frequently involves making classifications that either advantage or disadvantage one group of persons, but not another.  States allow 20-year-olds to drive, but don’t let 12-year-olds drive.  Indigent single parents receive government financial aid that is denied to millionaires.  Obviously, the Equal Protection Clause cannot mean that government is obligated to treat all persons exactly the same–only, at most, that it is obligated to treat people the same if they are “similarly circumstanced.”

Over recent decades, the Supreme Court has developed a three-tiered approach to analysis under the Equal Protection Clause.

Most classifications, as the Railway Express and Kotch cases illustrate, are subject only to rational basis review.  Railway Express upholds a New York City ordinance prohibiting advertising on commercial vehicles–unless the advertisement concerns the vehicle owner’s own business.  The ordinance, aimed at reducing distractions to drivers, was underinclusive (it applied to some, but not all, distracting vehicles), but the Court said the classification was rationally related to a legitimate end. Kotch was a tougher case, with the Court voting 5 to 4 to uphold a Louisiana law that effectively prevented anyone but friends and relatives of existing riverboat pilots from becoming a pilot.  The Court suggested that Louisiana’s system might serve the legitimate purpose of promoting “morale and esprit de corps” on the river.  The Court continues to apply an extremely lax standard to most legislative classifications.  In Federal Communications Commission v Beach (1993), the Court went so far as to say that economic regulations satisfy the equal protection requirement if “there is any conceivable state of facts that could provide a rational basis for the classification.”  Justice Stevens, concurring, objected to the Court’s test, arguing that it is “tantamount to no review at all.”

Classifications involving suspect classifications such as race, however, are subject to closer scrutiny. A rationale for this closer scrutiny was suggested by the Court in a famous footnote in the 1938 case of Carolene Products v. United States. Usually, strict scrutiny will result in invalidation of the challenged classification–but not always, as illustrated by Korematsu v. United States, in which the Court upholds a military exclusion order directed at Japanese-Americans during World War II. Loving v Virginia produces a more typical result when racial classifications are involved: a unanimous Supreme Court strikes down Virginia’s miscegenation law.

The Court also applies strict scrutiny to classifications burdening certain fundamental rights. Skinner v Oklahoma considers an Oklahoma law requiring the sterilization of persons convicted of three or more felonies involving moral turpitude (“three strikes and you’re snipped”). In Justice Douglas’s opinion invalidating the law we see the origins of the higher-tier analysis that the Court applies to rights of a “fundamental nature” such as marriage and procreation. Skinner thus casts doubt on the continuing validity of the oft-quoted dictum of Justice Holmes in a 1927 case (Buck v Bell) considering the forced sterilization of certain mental incompetents: “Three generations of imbeciles is enough.”

The Court applies a middle-tier scrutiny (a standard that tends to produce less predictable results than strict scrutiny or rational basis scrutiny) to gender and illegitimacy classifications.

There is nothing to the claim that the fundamental rights listed in the Bill of Rights or the UDHR simply block governments from all legislation and regulation. Among other things, such categorical principles would make it impossible to adjudicate cases where rights and responsibilities come into conflict—which, again contrary to the rhetoric of crypto-anarchists and other black-and-white thinkers, happens more often than not. Wikipedia links: Strict Scrutiny; Intermediate Scrutiny
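
For readers who find structure easier to see in code than in case law, here is a schematic, purely mnemonic summary of the three-tier framework described above. This is my own illustration, in Python, not legal doctrine, and it cannot capture the “complex value judgments” the Kanovitz passage emphasizes:

    # A mnemonic summary of the three-tier scrutiny framework sketched above.
    # Illustrative only: real scrutiny analysis turns on judgment, not lookup.
    SCRUTINY_TIERS = {
        "strict": {
            "triggered_by": ["suspect classifications (e.g., race)",
                             "core political speech",
                             "fundamental rights (e.g., marriage, procreation)"],
            "government_must_show": "compelling interest + narrow tailoring",
            "usual_result": "law struck down (but not always: see Korematsu)",
        },
        "intermediate": {
            "triggered_by": ["gender", "illegitimacy",
                             "content-neutral burdens on expressive conduct"],
            "government_must_show": "substantial interest + closely related means",
            "usual_result": "less predictable",
        },
        "rational_basis": {
            "triggered_by": ["most economic and social classifications"],
            "government_must_show": "any conceivable rational basis",
            "usual_result": "law upheld",
        },
    }

The point of the table is one the courts themselves make: which tier applies is itself a contested legal question, which is exactly why the blanket claim that “code is speech” settles anything is so misleading.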

Posted in cyberlibertarianism, rhetoric of computation | Tagged , , , , , , , , , , , , , , , , , , | 5 Responses

Are “Backdoors” Real or Virtual? The Logical Flaw in #AppleVsFBI

I’ve been working for quite a while on a longer piece about the argument that “backdoors make us less secure,” an article of faith among cryptographers, hackers, and computer scientists that is adhered to with such condescension, vehemence (and at times, venom) that I can’t help but want to subject it to the closest scrutiny (I’ve previously written a bit about these issues with regard to the financial technology communications system Symphony and full-scale secrecy systems like Tor).

Leaving the more general question for a later time, I’ve noticed something in the discussion of Apple’s refusal to comply with a Federal court order regarding an iPhone used by alleged San Bernardino mass shooter Syed Rizwan Farook that puzzles me a bit, and that ties to the cypherpunk and cyberlibertarian ideologies that seem to me everywhere visible in these debates (and that I and a few others think make these debates much weaker than they should be).

Namely: when we speak of “backdoors” that “make us less secure,” is the point that a) the creation of an actual backdoor will make systems less secure, because that backdoor could be discovered by opponents–or, probably more to the point, released by an untrustworthy actor inside the vendor itself?

Or is the point b) that a system in which it is possible to create a backdoor is already inherently backdoored, whether or not the backdoor has been created—that is, it is virtual and not actual backdoors (or at least in addition to actual backdoors) that make systems vulnerable?

The idea of a “backdoor” is metaphorical. What it means in any particular system and in any given instance may be similar to or different from other backdoors. So we are talking at a level of very general principle; but then again the notion that “backdoors make us less secure” is asserted at just this level.

As a general principle, the point of the “backdoors make us less secure” argument seems to be that if you build a system with a “backdoor” in it, then anyone will be able to find it, not just the people whom you want to have access to it.

Leaving aside this general issue, the situation in #AppleVsFBI is not that one. Nobody is talking about modifying the released system software on existing phones. Nobody is talking about creating a new version of iOS that includes a new “backdoor” for the Feds to use: they are instead talking about Apple building a tool to modify the software on one particular phone. That tool will not directly affect any other existing phone in any way whatsoever.

Here is how Apple describes the situation in its Feb 16, 2016 “Letter to Our Customers.” Apple claims that creating the new version of its software the Court order requires would itself be a general backdoor, even though the Order specifically requires that it be usable only on the specific iPhone:

The FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession.

The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.

The government suggests this tool could only be used once, on one phone. But that’s simply not true. Once created, the technique could be used over and over again, on any number of devices.

[Photo from Portland #AppleVsFBI protest. Image source: Mike Bivens on Twitter]

While the FBI asks to be given the tool to do this itself, their request and the judge’s order clearly allow Apple to do this entire operation within its own physical location (see the second part of paragraph 3 of the judge’s order and footnote 4 of the FBI’s motion).

So Apple’s argument is, if we allow our own technicians to develop a technique to hack one iPhone, and that hack never gets outside of Apple’s internal security, the very fact that it exists would mean that a “backdoor” has been introduced into the iOS ecosystem and all users would be put at risk—not because anyone actually got access to the modified OS, but merely because the creation of the technique—this is the specific word Apple uses—would inherently weaken all the existing iPhones that have not been touched in any way.

The problem with this argument is subtle until you see it.

Apple itself admits that the FBI is not asking Apple to introduce backdoored software into all its phones—the traditional meaning of “backdoor,” as the “backdoors make us less secure” argument would have it. It is, instead, asking Apple to develop a technique that could allow Apple and Apple alone to hack into the iOS software as it currently exists (or more accurately, to replace the OS on a given phone).

But either iOS is hackable through this method or it isn’t. There is plenty of reason to think that Apple has at least determined whether it is technically possible or not, and it is notable that Apple does not claim in its letter that what the FBI wants is technically impossible.

Let’s call the new version of iOS “hOS.”
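
What would hOS actually do? Here is a toy sketch—every name is hypothetical, and this is nothing like Apple’s actual code—of the three changes the February 2016 order asked for, as publicly reported: disable the auto-erase-after-ten-failures feature, accept passcodes submitted electronically rather than typed by hand, and insert no extra delay between attempts, with the tool tied to the single device named in the order:

    # Toy model of what "hOS" would change; purely illustrative, not Apple's code.
    from dataclasses import dataclass

    TARGET_DEVICE_ID = "EXAMPLE-ID-0001"  # hypothetical stand-in for the one phone

    @dataclass
    class MockDevice:
        """A stand-in for a locked phone; not any real API."""
        unique_id: str
        secret_passcode: str
        auto_erase_enabled: bool = True
        failed_attempts: int = 0

        def try_passcode(self, guess: str) -> bool:
            if guess == self.secret_passcode:
                return True
            self.failed_attempts += 1
            if self.auto_erase_enabled and self.failed_attempts >= 10:
                raise RuntimeError("device wiped")  # the feature change (1) disables
            return False

    def hos_unlock(device: MockDevice, guesses):
        if device.unique_id != TARGET_DEVICE_ID:  # runs on one device only,
            raise RuntimeError("wrong device")    # per the order's provision
        device.auto_erase_enabled = False         # change (1): no auto-wipe
        for guess in guesses:                     # change (2): electronic guessing
            if device.try_passcode(guess):        # change (3): no delay inserted
                return guess
        return None

Note that nothing in this sketch touches any phone other than the one named in the order—which is exactly what makes Apple’s “backdoor” language worth scrutinizing.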

Apple’s argument is: the mere creation of hOS creates a backdoor in all iPhones, even though nobody has access to hOS itself outside of Apple (and possibly the FBI).

Because remember, Apple has not argued that either its own in-house security or the FBI’s internal security is poor enough that the specific instance of hOS would get loose.

The problem is that either hOS is possible or it isn’t.

And if hOS is possible, it’s already possible: nothing about actually creating it makes it any more or less possible.

Therefore Apple’s master argument fails. Apple says that if hOS is possible, its products have weakened security, and everyone suffers.

Yet Apple has already admitted that hOS is possible. Nobody’s written it yet (as far as we know), but someone could—presumably, even someone with the requisite technical skill outside of Apple, like the at least moderately lunatic John McAfee.

If simply admitting that hOS is possible somehow tips off hackers to the insecurity of iOS and makes hacking it suddenly much more likely, then Apple’s own “Letter to Our Customers” inadvertently does just this.

My own view is that the “backdoors make us less secure” argument is far less determinative and airtight than its loudest advocates want to claim, for reasons I hope to explore later, in addition to the ones offered here. But in this case, Apple has already admitted that it can create hOS.

Its only remaining argument is that its own internal security is so weak that it can’t guarantee that, once hOS is written for one phone, its own employees won’t figure out ways to apply it to other phones and sell that capability. But if this is true, by its own admission, such a breach is already possible. Maybe actually writing it adds a tiny amount more risk, but it’s hard to see how much, especially since, for all the reasons we wouldn’t know whether someone inside Apple had “stolen” hOS after it was written, we also don’t know whether someone has already written it. The “backdoor” in this case is virtual, not real: it is the mere possibility of its existence that constitutes the danger Apple points at—a possibility that Apple itself seems to acknowledge openly in its letter, thereby itself creating the very “backdoor” it claims to be defying the court order in order to forestall.

Ironically, the case itself gives some evidence against all of this. If the simple fact that iOS is virtually hackable meant it was actually hacked, the FBI would not need to go to so much trouble to get Apple to hack it. Further, this is one of the places where encryptionists want to have it both ways, suggesting that the FBI (or the NSA) can hack it but is working through the courts for some conspiratorial reason or other—which, if true, makes this entire conversation moot, although the encryptionists never want to admit that. I see no reason to accept this contorted logic. It seems to me much more plausible that writing hOS is hard, that it is best done by those with direct expertise in iOS, and that Apple’s security is perfectly adequate to make sure that others outside Apple don’t learn to spread and copy hOS and apply it to other phones. And note that if this is wrong, we are already in the bad place Apple claims that writing hOS would put us in, because Apple can’t trust its own security enough to make sure that employees aren’t selling its secrets to hackers—or that some of its own employees might be hackers, which is probably the case.

A final note on a related point: few seem to have noticed that along the way, it’s been made clear that Apple routinely provides access to encrypted backups of iPhone data in its iCloud service (see, e.g., discussion here). If it is the case that hOS constitutes a backdoor, then it is even more clearly the case that the existing ability to decrypt any iCloud backup of a phone counts as a backdoor. Yet nobody is screaming about this existing backdoor; they are instead arguing that the creation of this new backdoor, despite being targeted to a single device, would produce a devastating loss in security—despite the fact that it is still less pervasive than what currently exists in iCloud, which is not, as far as I know, even being proposed for conversion to a non-decryptable system. None of the dangers we are told result from the “weakening of encryption” associated with “backdoors” appear to have materialized; iCloud backups remain a relatively secure and private service, although of course some technologists recommend against using them. Even the fact that Apple’s push toward total encryption seems to have been initiated in part by prior breaches of iCloud, especially the hacking of celebrity accounts, does not appear to have necessitated a version of iCloud that Apple itself can’t access; we’ve had far fewer breaches of the system, as far as my scanning of the news feeds tells me, in the years since.
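
The distinction at issue here is easy to see in miniature. In the sketch below—my own illustration using the Python cryptography package, not a description of Apple’s actual systems—the provider-held-key model (which is how iCloud backups are generally described) lets the provider comply with a lawful warrant, while the device-held-key model leaves it with nothing to hand over:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Provider-held-key model (roughly how iCloud backups are described):
    # the provider retains a copy of the key, so it can decrypt under warrant.
    provider_key = Fernet.generate_key()
    backup = Fernet(provider_key).encrypt(b"user backup data")
    recovered = Fernet(provider_key).decrypt(backup)  # provider can comply

    # Device-held-key (end-to-end) model, as in the newer iMessage design:
    # the key never leaves the device, so the provider cannot comply at all.
    device_key = Fernet.generate_key()  # exists only on the device
    message = Fernet(device_key).encrypt(b"message content")
    # The provider stores `message` but holds no key; without `device_key`,
    # decryption is computationally infeasible for provider and state alike.

iCloud backups have sat squarely in the first model for years, without the catastrophes the “backdoors” rhetoric predicts.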

So the argument (attributed here to a Berkman Center founder) that “if Apple says yes to the U.S. government, it will make it harder to say no in countries with very different values” is an odd one to make, since we are already in that situation across the board: it is only Apple’s creation of a system to which law enforcement might not be able to get access that changes things.

So what? As I’ve often said, an iterative approach to these matters—one consistent with other Silicon Valley practices, as opposed to the blanket “no backdoors” dictum—makes a lot of sense, especially given that certain schemes are being developed and sold, such as the new, absolutely undecryptable (at least given current technologies and resources) iMessage system, about which law enforcement is in my opinion very justifiably concerned. These systems advertise themselves as putting communication channels outside and above all legal, targeted investigation—whether for regulation or for law enforcement—and are supported by people whose hatred for the US Government translates quickly into a hatred of all government, which licenses building a system that no government, no matter how “good,” could penetrate. One sees marks of that hatred in Apple’s own recent public discourse, especially when the same Tim Cook who speaks so strongly about #AppleVsFBI says that claims the company should pay more taxes than it does are “total political crap.”

This is cyberlibertarianism in action: the unacknowledged importation of far-right concepts and themes—in this case the idea that “government” and “evil” are absolute synonyms—into the discourse of digital technologists who do not see themselves as aligned with the far right. One can only support systems like iMessage if one believes that the very idea of government is offensive—not just our government, but any government whatsoever. Despite the many ways in which such a view is contradictory and incoherent, it remains a widespread cri de coeur among many on the far right, and they do their best to spread that message widely. If you think government should not exist (a view I find largely incoherent, but we need to talk about that on its own terms), we should have that political discussion. We should not have companies build tools whose partly-stated reason for being is to disable vital functions of government without quite saying so.

Posted in "hacking", cyberlibertarianism, surveillance | Tagged , , , , , , , , , , | 1 Response

The Volkswagen Scandal: The DMCA Is Not the Problem and Open Source Is Not the Solution

tl;dr

The solution to the VW scandal is to empower regulators and make sure they have access to any and all parts of the systems they oversee. The solution is not to open up those systems to everyone. There is no “right to repair,” at least for individuals. Whether or not it deserves to be called a “freedom,” the “freedom to tinker” is not a fundamental freedom. The suggestion that auto manufacturers be forced to open these systems is wrongheaded at best and disingenuous at worst. We have every reason to think that opening up those systems would make matters worse, not better.

full post

Volkswagen’s manipulation of the software that runs the emissions control devices in its cars has rightly produced outrage, concern, and condemnation. Yet buried in the responses have been two very different lessons—lessons that may at first sound very similar, but on closer examination are as different as night and day. Some writers have not been careful to distinguish them, but this is a huge mistake, as they end up embodying two entirely different philosophies that in most important ways contradict each other.
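
Before turning to those lessons, it is worth recalling what the cheat reportedly was. The sketch below is schematic and the thresholds are my own invention, not VW’s parameters: the engine software inferred that an emissions test was under way and enabled full emissions controls only then:

    # Schematic of the reported "defeat device" logic; invented thresholds,
    # purely illustrative, bearing no relation to VW's actual code.
    def emissions_mode(speed_kmh: float, steering_angle_deg: float) -> str:
        # On a dynamometer test the drive wheels turn while the steering
        # wheel stays essentially fixed -- a signature road driving lacks.
        on_test_stand = speed_kmh > 0 and abs(steering_angle_deg) < 1.0
        if on_test_stand:
            return "full NOx controls"  # clean enough to pass the test cycle
        return "road calibration"       # reportedly far dirtier on the road

With that mechanism in mind, back to the two responses.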

The two philosophies will be familiar to my readers:

  • The cyberlibertarian, cypherpunk, FOSS perspective: we can prevent future VWs by mandating that all such software be available for inspection and even modification by users;
  • The democratic response: regulators like the EPA should have access to proprietary code like that in VW vehicles.

These two responses may sound similar, but they are radically different. One suggests that the wisdom of the crowd will result in regulations and laws being followed. The other puts trust in bodies specifically chartered and licensed to enforce regulations and laws.

One says—and this is everywhere in the discussions of the topic—that the fault lies with the Digital Millennium Copyright Act (DMCA) and with EPA’s resistance to the proposed grant of an exemption allowing users to access automobile software, an exemption for which the Electronic Frontier Foundation was a principal advocate. It is hard to find an account of this story that does not excoriate EPA for opposing that exemption, and even blame that opposition for the long time it took to uncover VW’s cheating. Contained within that sentiment is the view that regulators can’t do their jobs and that ordinary citizen security researchers can and will do the work better; this typical cyberlibertarian contempt, and narcissistic lust for power, is visible in much of the discourse adopting this view. This, by far, has been the dominant response to the story. The only real challenge to this narrative has come from the estimable Tom Slee, whose “Volkswagen, IoT, the NSA and Open Source Software: A Quick Note” is, along with Jürgen Geuter’s “Der VW-Skandal zeigt unser Vertrauensproblem mit Software” (approximately: “The VW Scandal Shows Our Trust Problem with Software”), the best thing I’ve read on the whole scandal; I am in strong agreement with both.

The other says that certain social functions are assigned to government for good reason. From this perspective, one might want to look at the massive defunding of regulatory agencies and the political rejection of regulation—engineered not just by the right in general but by the digital technology industries themselves—as a huge part of the problem. Critically, from this perspective, the DMCA just has nothing to do with this issue at all. Regulators can and do look inside products that are covered by trade secrecy and other intellectual property agreements. They have to.

These aren’t just abstract differences. They embody fundamentally different ways of seeing the world, and different visions of how we want the world to be organized.

I think the first view is incoherent. It says, on the one hand, that we should not trust manufacturers like Volkswagen to follow the law. We shouldn’t trust them because people, when they have self-interest at heart, will pursue that self-interest even when the rules tell them not to. But then it says we should trust an even larger group of people—many of whom are no less self-interested, and who have fewer formal accountability obligations—to follow the law.

If anything, history shows the opposite. The more we make it optional to follow the law—and to be honest, the notion of an “optional law” is about as oxymoronic as they come, but it is at the core of much cyberlibertarian thought—the more we put law into the hands of those not specifically entrusted to follow it, the more unethical behavior we will have. Not less. That’s why we have laws in the first place: because without them, people will engage even more in the behavior we are trying to curtail.

Now consider the current case. What the cyberlibertarians want, even demand, is for everyone to have the power to read and modify the emissions software in their cars.

They claim that this will eliminate wrongdoing. In my opinion, and there is a lot of history to back this up, it will encourage and even inspire wrongdoing.

This is where the cyberlibertarian claim turns into pure fantasy, of a sort that underlies much of their thinking in general. Modifying cars has a significant history. You don’t need to go far to find it.

Show me the history of car owners modifying their cars to meet emissions and safety standards when they don’t otherwise meet them.

[Image: VW Super Beetle (1972). Source: conceptcarz.com]

Because what I’ll show you is that the overwhelming majority of car modifications, insofar as they deal with regulatory standards at all, are performed to bypass standards like those, and many others.

You don’t have to read far in the automotive world to see how deeply car owners want to bypass those standards, in the name of performance and speed. You’d have to read much farther and much deeper to find evidence of automobile owners selflessly investigating whether or not their vehicles are meeting mandates.

Not only that: we don’t have to look far to find this pattern directly regarding diesel Volkswagens. A recent story on the VW scandal at the automotive-interest site The Truth About Cars notes that

the aftermarket community has released modifications for the DPF and Adblue SCR systems long before there was any talk of reduced power and economy coming from a potential fix for the emissions scandal. They looked to gain more power and better fuel economy by modifying or deleting the DPF system.

These “aftermarket tuner” kits that modify or delete the DPF and Adblue SCR systems have to be marketed “as off-road only as they violate federal emissions laws.” These are the selfless, regulation-focused folks we should rely on to protect our environment? Seriously?

In fact, EPA has already studied the specific question of software modification to emissions systems (which is part of what makes me wonder whether those who have excoriated EPA’s response have actually read the letter):

Based on the information EPA has obtained in the context of enforcement activities, the majority of modifications to engine software are being performed to increase power and/or boost fuel economy. These kinds of modifications will often increase emissions from a vehicle engine, which would violate section 203(a) of the CAA (Clean Air Act), commonly known as the “tampering prohibition.” (2)

It is beyond ironic that this scandal has been taken to demonstrate that “we need to open up the Internet of Things,” or that “DMCA exemptions could have revealed Volkswagen fraud,” or that the scandal makes clear the “dangers of secret code.” I would argue that the lesson is entirely different: people will cheat. Making it easy for people to cheat means they will cheat more. Regulators need access to the code that runs things, but precisely because people will cheat, ordinary people should not have that access. They should not have access to the code that runs medical devices, to the code that runs self-driving cars, to the code that runs airplanes, or to the code that controls security systems in our houses.

Rather than showing that EPA was wrong to oppose the DMCA exemption and that people like Eben Moglen are right about opening up proprietary software, we would do better to observe what Moglen himself said about the elevator that the New York Times writes about in its paean to him and his work. That story begins with, and frames itself around, a discussion of elevator safety. Here is the elevator anecdote in its entirety, from a 2010 talk by Moglen:

In the hotel in which I was staying here, a lovely establishment, but which I shall not name for reasons that will be apparent in a moment, there was an accident last week in which an elevator cable parted and an elevator containing guests in the hotel plummeted from the second story into the basement. When you check in at the hotel you merely see a sign that says “We are sorry that this elevator is not working. And we are apologetic about any inconvenience it may cause.” I know that the accident occurred because a gentleman I met in the course of my journey from New York to Edinburgh earlier this week was the employer of the two people who were in the car. And in casual conversation waiting for a delayed airplane the matter came out. I have not, I admit, looked into the question of elevator safety regulation in the municipality. But in every city in the world where buildings are tall (and they have been tall here in proportion to the environment for longer than they have in most parts of the world) elevators safety is a regulated matter, and there are periodic inspections and people who are independent engineers, working at least in theory for the common good, are supposed to conduct such tasks as would allow them to predict statistically that there is a very low likelihood of a fatal accident until the next regular inspection.

While it is taken as an argument for user access to the code that runs elevators, it is actually anything but. It is an argument for regulators having access to that code, period. I do not want the hackers in my building to have access to the elevator code, and neither should you. I do not want them to have access to the code in voting machines.

Moglen made this remarkable statement in the New York Times article:

If Volkswagen knew that every customer who buys a vehicle would have a right to read the source code of all the software in the vehicle, they would never even consider the cheat, because the certainty of getting caught would terrify them.

I don’t know about terror, but I would be distinctly concerned, as EPA is, that this “right” would mean a regime of emissions cheating by individuals that would not only far outflank what Volkswagen has apparently done, but, by dint of its being realized in a thousand different schemes for software alteration, would make those modifications virtually impossible to check. What is particularly striking is that this reasoning, which builds on obvious, well-understood facts, could be jettisoned in favor of an idealistic and obviously false view of human political conduct for which virtually no evidence can be generated.

In fact, to the degree that we have evidence, we know that the opposite is true: Linux, Android, and many other open source projects are routinely attacked by hackers, while the Apple iPhone operating system—contained in its famous “walled garden”—continues to be one of the safest software environments around. (Reports have indicated that up to 97% of all mobile malware is found on the open source Android system.) Contrast this with “jailbroken” iOS software, which is pretty much the best way to ensure that your iPhone gets malware. (In fact, just this week we have the first-ever report of malware on iPhones that aren’t jailbroken.) Really? Opening things up protects us? Who’s zooming who?

Not only that: we have plenty of evidence that even in small, isolated cases that are critical to security and that thousands of coders care deeply about (I am specifically thinking of OpenSSL, as Geuter discusses in his article), open source still does not produce secure products—certainly no more secure than closed source does.

All of this should really raise questions about the motivations behind the demand that security software be opened up: I think, just as in this case, that a selfless desire to improve the world for everyone is at best what motivates some of those involved in this question. Just as prominent—perhaps more prominent—is an egotistical drive to control, and to deny the legitimacy of any authority but oneself. That attitude is exactly the one that leads to regulation-flouting modifications and the production of malware, not to combating them. Like everything else in the world of the digital, it relies on an extremely individualistic, libertarian conception of the self and its relationship to society.

One additional thing that I find a bit dispiriting about this is that one of the best books to come out in recent years about digital politics, Frank Pasquale’s Black Box Society, is specifically focused on the question of technological and particularly algorithmic “black boxes.” Pasquale specifically argues that regulators must be given not just the power (some of which they already have, much of which—for example in the case of algorithms used by Facebook, Google, and Acxiom—they do not) but also the capability (which means resources) to see into these algorithmic systems that affect all of us. Pasquale makes a long and detailed argument and an impassioned plea for a “Federal Search Commission,” parallel to the FDA and EPA, that would be able to see into important technologies whether or not they are protected by trade secrets. Pasquale has been suggesting this for a very long time. He is among the most prominent legal theorists addressing these issues. How is it that when an event occurs that should cause at least some well-informed commentators to show how it validates that thought, virtually nobody does? And worse: the New York Times actually writes a story saying that the scandal validates the work of Eben Moglen, who might well be thought of as the political opposite of Pasquale—despite the fact that Moglen’s apparent version of this story makes very little sense and contradicts his own analysis of similar situations.

That is part of why cyberlibertarianism must be understood as an ideology, not an overt political program. Like all ideologies, it twists issues into parodies of themselves in order to advance its agenda, even when the facts point in exactly the opposite direction.

Postscript

In a response to this piece on Twitter, Jürgen Geuter said that he read me to be saying that closed source is more secure than open source. I don’t mean to say that; I mean to make only the more modest claim that open source is not inherently more secure than closed source. As for which is more secure, I am not sure that question has an answer on the open/closed axis, but I really don’t know. In fact I take that to be part of the lesson of Geuter’s excellent recent piece in Wired Germany that I link to above. The number of people who can accurately evaluate any project for its total security profile is somewhere between “very small” and “nonexistent.” The Android vs. iOS example I use is meant only to show that open source projects are not inherently more secure; there is much more to that story than open vs. closed, and of course iOS is itself at least partly based on the free software Unix operating system BSD, as are other Apple operating systems. But it is fair to say that Apple’s “walled garden” has long been a target of ire from the developer community, and yet has been one of the most secure platforms available—at least so far. I do draw from this fact the conclusion that the demands by FOSS advocates that all systems be opened because it will make them more secure are at best unfounded and at worst dishonest—dishonest because a significant number of people in those communities want that access not to increase security but specifically to learn how to defeat it more easily. And I do think, whether it makes the systems as a totality less secure or not, that exposing the complete internals of systems to everyone gives attackers an informational advantage no matter how you slice it.

Posted in "hacking", cyberlibertarianism, materality of computation | Tagged , , , , , , , , , , , , , , , , , | 3 Responses

Encryption and Responsibility: A Note on Symphony

Typically, those of us concerned about the widespread use of encryption and anonymization technologies like Tor are depicted by crypto advocates as “anti-encryption” or “freedom haters” or “mind-murdering censors” or worse. Despite the level of detail these people can bring to technological matters, they often portray the political options as very stark: either “encryption” or “no encryption.” Like so many other things today, it can be like arguing with the proverbial wall to get our opponents to see that we do not want “no encryption.” Not all encryption and anonymization schemes are the same. We don’t want encryption not to be used. We want encryption (and anonymization) schemes that make sense. We want them to be used responsibly.

Finally we have a fairly clear case at which we can point to make this plain. Over the past year, a coalition of investment banks has been working on a comprehensive secure communications package called Symphony. The makers of Symphony state that the software provides “a platform for communities of financial services professionals to communicate securely and efficiently using compliant standards and end-to-end encryption.”

Let me be as clear as possible: I think this is a good idea. It is necessary. It is appropriate.

But the Symphony marketing literature included some statements that made people like me worry, for just the reason I worry about services like Tor. It looked as if the service was structured not just to protect the integrity of bank communications, but to hide them from regulators. The marketing language distinguished Symphony from previous communication tools for the financial industry that were “limited in reach and effectiveness by strict regulatory compliance,” while Symphony would “prevent government spying” and would “guarantee that data deletion is permanent.” These promises go well beyond encryption per se.

This caused the New York Department of Financial Services (NYDFS), one of the major regulators of the financial industry, and Senator Elizabeth Warren, one of the leading consumer advocates in the US, to raise concerns about Symphony. NYDFS has to be able, in the proper legal context, to see any and all communications in which the banks it regulates engage. They do not and should not need warrants to see those communications. Even the “legal fiction” of corporations being persons does not go so far as to grant them, qua corporation, the full protection of the Bill of Rights. Regulators can, do, and must examine corporate communications according to the regulatory rules in place, not under criminal or even civil warrant. The companies exist according to certain rules with which they agree to comply, including regulatory oversight. That is the law. It is a good law. In many cases, the application of this law is the only thing that has uncovered major misdealing in the financial industry, including, as both Warren and NYDFS point out, the Libor price-fixing scandal. If anything, “freedom” as I understand it requires much more thorough and rigorous oversight of the financial industry, not less. Among other things, banks are not, in general, allowed to delete any of the data generated in the course of doing business, in order that regulators can backtrack through their actions to ensure compliance.

Despite Symphony appearing to advertise itself specifically as a way to bypass regulatory oversight, at least one well-known crypto advocate attacked Warren for daring to question any part of the Symphony system, as if regulatory oversight of corporations were an affront to “freedom,” and as if the use of encryption were such an absolute right, even for corporations, that the integrity of as widely praised a consumer advocate as Warren could be called into question for saying anything that even smacked of concern about encryption.

Well, now we have a resolution to this story, one that I hope gives clarity to what “people like me” want, and why encryption is something we should be concerned about while at the same time not wanting to eliminate it. On Monday, NYDFS announced a settlement agreement with Symphony and four of the banks sponsoring it. The agreement allows the project to move forward almost as originally proposed, with the following provisos, relating specifically to what concerned both NYDFS and Elizabeth Warren:

  • Symphony will retain for seven years a copy of all e-communications sent through its platforms to or from the four banks;
  • The four banks will store duplicate copies of the decryption keys for their messages with independent custodians (i.e., custodians not controlled by the banks).

Among the many interesting things about this development is that the second point constitutes a form of key escrow. Key escrow is one of the technologies that crypto advocates frequently dismiss as destructive of security; one of the most prominent and reasonable crypto advocates, Matthew Green of Johns Hopkins University, is no fan of key escrow. I have so far found the arguments against it unpersuasive, in part because they take such a big-picture view of the world that they suggest there might be one giant escrow authority holding all the keys to everything. Here, although the details haven’t been made public, Symphony and the banks appear to have agreed to create an escrow authority specific to their software platform. Perhaps that will introduce vulnerabilities into their system; perhaps not. We have a good test case from which to observe. Observing from the outside, it is hard to believe that Symphony and the banks would have agreed so quickly to something that the numerous cryptography experts on Symphony’s (and the banks’) payrolls thought made them vulnerable. If this works, as I suspect it will, we have a model that might be applicable elsewhere.
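
To make the escrow mechanics concrete, here is a minimal sketch, in Python, of the general kind of arrangement the settlement describes. Everything here is hypothetical: Symphony’s actual design has not been published, and the sketch simply assumes the widely used pyca/cryptography library. The idea it illustrates is that each message is encrypted with its own symmetric key, and a duplicate of that key, wrapped with the independent custodian’s public key, is archived alongside the ciphertext; a regulator with lawful authority can then read a given message through the custodian, while outside attackers still face ordinary end-to-end encryption.

```python
# Minimal sketch of per-message key escrow (a hypothetical design, not
# Symphony's published protocol). Assumes the pyca/cryptography package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The independent custodian generates this key pair and keeps the
# private half; the bank sees only the public half.
custodian_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
custodian_public = custodian_private.public_key()

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

def bank_send(plaintext: bytes):
    """Encrypt one message and produce the escrowed duplicate key."""
    message_key = Fernet.generate_key()                # fresh symmetric key
    ciphertext = Fernet(message_key).encrypt(plaintext)
    escrowed_key = custodian_public.encrypt(message_key, OAEP)
    # The bank archives (ciphertext, escrowed_key) for seven years; the
    # escrowed copy is useless without the custodian's private key.
    return ciphertext, escrowed_key

def regulator_read(ciphertext: bytes, escrowed_key: bytes) -> bytes:
    """With lawful authority, the custodian unwraps the duplicate key."""
    message_key = custodian_private.decrypt(escrowed_key, OAEP)
    return Fernet(message_key).decrypt(ciphertext)

ct, ek = bank_send(b"rate submission: 0.58%")
assert regulator_read(ct, ek) == b"rate submission: 0.58%"
```

Note that nothing in an arrangement like this requires one giant escrow authority holding all the keys to everything: the custodian key pair above is specific to a single platform and a single set of regulated institutions, which is exactly the narrower arrangement the settlement appears to contemplate.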

This agreement sounds like exactly what I hope for. Encryption is widely used to secure communications in an appropriate fashion. But it is not deployed so as to put the powerful, especially corporations, above the law.

One thing this shows is that all encryption and anonymization schemes are not the same. Responsible encryption schemes are not just welcome; they are necessary. But irresponsible encryption schemes really do threaten fundamental political principles, especially including the rule of law. Despite the fact that many crypto advocates appear strongly to endorse it, I remain very concerned about Apple’s iMessage encryption, which is designed to make the service of all warrants impossible, and which the New York Attorney General and others have claimed has blocked a variety of fully legal warrants in the few months since it became available (Matthew Green has posted some comprehensive discussions of the iMessage system, though I think he gives too much credence to the excessively paranoid skepticism crypto advocates typically direct toward all statements made by law enforcement officers). Many crypto advocates have promoted this system for reasons that I find incomprehensible within our system of constitutional governance. All encryption is not the same. Symphony may be a responsible encryption scheme, while iMessage may not be.

It is hard for me not to wonder whether NYDFS and others have noticed the part of Tor’s promotional materials where they boast that “business executives use Tor.” The arguments for using Tor inside businesses have never made much sense to me, since most businesses are required, contractually and/or legally, to be aware of and to record all relevant communications that take place under their name.

Thus, when what must be a Luddite and technophobic company called IBM recommended in August that businesses block Tor, one of Tor’s original developers weighed in on the discussion on the Tor-Talk list, not to take IBM’s concerns seriously, but instead to point out reasons why “your company would have a reason for you to use Tor.”

This is exactly one of the main things that has worried me about Tor all along. Most of the people involved with the Tor projects have become political advocates, pressing hard for one side of a debate that should be nuanced and whose other side should be taken very seriously. (Aside to certain people who have asked: when I use the term “politics” like this I mean it in the sense used by political scientists and other academics: matters that affect the arrangements of power that structure society and social institutions. I do not mean “politics” in the sense of being a Republican or Democrat, although the kinds of politics I’m talking about certainly can have consequences for these more formal party politics.) Personally, I hope that NYDFS decides to look into the use of Tor by the same banks that use Symphony; for exactly the reason that Symphony has to be configured so as to fit into sensible regulatory requirements, Tor, which cannot (as far as I know) be configured in this way, or at least does not come that way out of the box (i.e., in the Tor Browser Bundle), should be blocked by banks, and by all corporations subject to regulatory oversight. So should all tools that enable unrecordable, undecryptable electronic transactions (a category that goes far beyond “communication”). The fact that spoken-word conversations not held on the phone are not recordable (but also not encrypted) does not somehow entail that we should, let alone that we must, proliferate tools that expand this capability over distance and time. Nobody who loves “freedom” should want corporations to conduct their business outside the law.

Posted in cyberlibertarianism, materiality of computation, privacy, surveillance

Right Reaction and the Digital Humanities

A while back, I had an encounter that struck me at the time, and continues to strike me, as perfectly emblematic of the Digital Humanities as an ideological formation. It included a kind of brutal incivility that I associate with much of the politics that persists very near the “nice” surface of DH (of which one needs no more shocking example than the recent, deeply personal, and brutally mocking responses to a perfectly reasonable and non-personal piece by Adeline Koh, written by two people I had thought were her close friends and colleagues). I try to avoid such directly uncivil tactics if I can, and so I have deliberately let a significant amount of time pass so as to remove as much as possible the appearance of personal attack in writing this up. I have also omitted a significant amount of information so as to (hopefully) obscure the identities, institutional affiliations, and even professional specializations of the persons involved, including avoiding all pronouns that would reveal the gender of any of the speakers, as I am much less interested in criticizing one individual than in showing how this person’s conduct represents a significant part of the psychology and politics that drives parts of DH.

The bare bones of the story are as follows. I am the co-leader of a “Digital Humanities and Digital Studies Working Group” (DHDS) at Virginia Commonwealth University (VCU), where I teach in the English Department. The group usually proceeds by reading and discussing texts, although sometimes we look at projects, consider projects by group participants, and have visits from outside speakers. Recently, a DHDS meeting was scheduled with a group of speakers who had been invited to campus for other reasons. These speakers included fairly well-known members of the DH community. One of them, to whom I’ll refer as A, occupies a position of some significant seniority and power in that community. (The other speakers, who don’t play much of a role in what follows, I’ll refer to as B and C.) Nevertheless, I had never met or talked to A, nor read anything by A, prior to this meeting, in part because A has published very little about DH.

The meeting was attended by the group of speakers, a few faculty members from VCU, and a half-dozen PhD students from the MATX program in which I am a core faculty member, all of whom I had worked with, or was currently working with, in some form or another.

The meeting began with the convener who had organized this event asking me to speak about a symposium we held at VCU a few years ago called “Critical Approaches to Digital Humanities,” about my own experience in DH, and about the overall course of discussions we had had to that point in the DHDS working group. I spoke for just over 5 minutes. I gave a brief overview describing how I came to the views I hold and how the symposium came into being. My main focus was my own experience: I mentioned that I was one of the first two people (along with Rita Raley of UCSB) hired as a “digital humanist” in the country, and that despite being employed as a “Digital Humanities professor” since 2003, and despite a large number of projects and publications, my name does not occur in any of the typical journals, conferences, lists, organizations, etc. I described my view, familiar to those who know my work or me, that DH is at least as profitably seen as a politics as it is as a method, and that as a politics its function has been to unseat other sites of authority in English departments and to establish sites of power alternative to existing ones, in no small part to keep what I broadly call “cultural studies of the digital” out of English departments, and generally to work against cultural studies and theoretical approaches while not labeling itself as such. I discussed how frequently I am published in forums devoted to debating the purpose of DH, but that as far as DH “proper” goes, the unacceptability of my work to that community has been a signal and defining part of my career—despite my continuing to be employed as a professor of DH. Needless to say, it was clear that none of A, B, or C had ever heard of me or read anything I’ve written, which is fine: for just the reasons that make me so skeptical of DH as an enterprise, the main part of my work is not the sort of stuff that interests DHers, although it does seem to be of significant interest to those who see studying the digital per se as important, which I am of course glad about.

B and C responded first to what I’d said, speaking for a while and saying positive things about the concerns I’d raised.

Then A started talking, with a notably hostile tone, which I found remarkable in itself given that A was in part my guest and that I’d said nothing whatsoever directed at or about A (it’s also worth noting that A is not in English). “I have to take issue with what David has said,” A said. “DH is not a monolith.” I hadn’t described it as a monolith (I had said it is profitably viewed as a politics as well as a method), and as usual the point of this familiar claim wasn’t clear (“not a monolith” suggests that my critique is valid for some parts of DH but that there are others of which it isn’t true; yet A went on immediately, as do almost all of those who use the “not a monolith” response, to dispute every allegation I’d made across the board), except that I was very wrong. Yet the disrespect and hostility emanating from A were palpable. So was A’s complete dismissal of my own reports of my experience and, perhaps even more stunningly, of my own work as a scholar, with which A was clearly entirely unfamiliar, but whose quality A had already assessed based on my brief story. I saw my students looking at me with jaws agape—they had heard my skepticism and critique of DH many times, of course, and a couple of them had seen something of what was on display here, but as several of them said to me later, they had never seen it in action as a political force, where the excess of emotion and the brute point of the emotion (in some sense, to shut me down or disable my line of critique without engaging it directly) were so readily visible.

Some of the other points A made that I took notes on at the time: “I don’t accept analyses based on power,” apparently meaning that any analysis of any situation that looks at political power is inherently invalid, a claim I found not only remarkable coming from a humanist of any discipline, but also one that we typically hear only from the political right, which is to say, the party that benefits from its alignment with power, an alignment it often tries to downplay even as it profits from it.

“The grants awarded by ODH [the Office of Digital Humanities at the National Endowment for the Humanities] are not uniform” (I had pointed out that ODH exclusively or near-exclusively funds tools-and-archives, a point that I am not alone in making and that I wrote up in a blog post with detailed analysis of one year’s awards). Interestingly, either B or C chimed in at this point to say that actually they agreed that the awards were remarkably uniform in their focus on tools and archives, the point I was making.

To this I responded, “yes they are, and I’ve done a statistical analysis that shows it. There have never been any grants awarded for critical study.”

A replied: “they aren’t uniform, and it is their prerogative to decide what to fund. And as we just saw [referring to a single recently-published article on big data] statistics aren’t reliable.” (I really struggled not to laugh at this point: a DHer committed to quantitative analysis so angry at me as to argue that statistics as a whole are not valid? But it happened; there are even witnesses.) I tried to point out that we were not dealing with sampling (the usual target of the charge that statistics are unreliable) but with an analysis of the complete list of all ODH grants for a single year, and a briefer examination of all the grants for other years, in which virtually no grants are devoted to “theory” or cultural studies as such, or to critical analysis of the digital. A waved this off by hand, with a pronounced sneer. (Interestingly, this was one point where either B or C interjected a bit in my favor, opining that there is a uniformity to the grants along the lines I suggested and that they are unprecedented, but A was unmoved.)

I asked: “is it the prerogative of funding agencies to provide unprecedented amounts of funding [A shakes head vehemently no, to disagree that they are unprecedented] for projects not requested by the units themselves?” A replied: “they aren’t unprecedented.” I insisted that they are and asked for the precedent to which A was referring, and A rejected my question as inappropriate without giving any actual response.

I said that despite the general truth that DH is not a “monolith,” there is a “core agenda” or view of “narrow DH” that is widely understood in the field, often referred to as “tools and archives.” I referred to the Burdick et al Digital_Humanities volume as a recent example that explicitly sets out this single-focus agenda for DH. A interrupted again, angrily, dismissing my comments, insisting that “that volume has been widely discredited” and that the “narrow” view of DH was incorrect.

Toward the end of the conversation one of the more telling exchanges occurred. I had noted that “a main point of DH has been to reject the role of theory in English Departments, and it has been successful.” A replied quite a bit later, as if the comment had struck some kind of nerve: “the one thing I agree with David about is that DH is opposed to theory,” making it clear that A took this to be a very good thing.

One dynamic that is worth pausing over: B and C are both relatively well-known members of the DH community. Not only were they visibly shocked by A’s conduct, but they several times made comments in which they tried to “heal the breach” by granting that certain parts of my critique were probably right, and several times explicitly endorsed some of my specific comments. Yet anyone sitting in that room, no doubt including B and C themselves, walked away seeing the conflict between A and me as the thing that was happening, as the main political event. To me, B and C stand for all those perfectly well-meaning DHers who are not themselves directly invested in its poisonous political agenda. I do not resent the fact that B and C could not repair the event more fully. But I think they are emblematic of the role played by all those in the DH community who don’t understand or endorse or take seriously what I have tried for years to explicate as its politics. They are, broadly speaking, ineffective, and as such, end up adding gravitas to the power of those with an agenda. Their level of conviction and commitment, especially politically speaking, is far shallower than that of those who really do care. My impression, which may be self-serving, was that B and C were actually more inclined to take my statements seriously because of the wide-ranging and inexact vitriol of A’s performance; at some level I hope that the level of attack that greets those of us who dare to try to locate the politics of DH might inspire others of reasonable mind to dare the same inquiry.

Then in the evening we had a series of talks by the guests. It will surprise no one to learn that A’s paper (composed, I am 99% sure, prior to the events of the day) explicitly and at length endorsed exclusively the tools-and-archives, “narrow” definition of DH that A had strenuously attacked me six hours earlier for suggesting was the core of DH. A seemed not to recognize at all that this contradicted what A had so vehemently stated hours before. It even sounded as if DH were a monolith after all, which I found a bit shocking.

I let this post sit for quite a while, though I took notes at the time for the purposes of writing it up. What I found remarkable about the encounter was the way that, as I have seen many times, any critique of DH in general receives what I take to be a typically rightist form of reaction. First, hostility and belittling of the target; then, absolute rejection of anything the target says, typically without even having heard what that was; then, an assertion of positive principles that, more often than not, actually endorses the substance of the critique, but with the added affirmation that what is done is correct. This is the same pattern I encounter when I criticize Tor, or Bitcoin, or cyberlibertarianism. I am an idiot; I am wrong for saying these things tend to the right; I don’t understand what the right is; actually, the right is correct, and these things should tend to the right–and despite this being my original thesis, I am completely wrong. I see that as part of the rightward tilt that is endemic to digital technology, absent careful and vigilant attention to one’s political means and ends. “The digital” is strongly aligned with power. Power and capital in our society are inextricably linked, and in many ways identical. Strongly identifying with “the digital” almost always entails a strong identification with power. That identification works particularly well, as do all reactionary ideological formations, by burying itself under a facade of neutrality. “I reject political analyses,” this position says, while enjoying and strongly occupying the position of power it currently inhabits. Much like Wikipedia editors or GamerGate trolls, this simultaneous embrace and disavowal of power is key to the maintenance of rightist political formations.

Posted in cyberlibertarianism, digital humanities, rhetoric of computation, what are computers for

Crowdforcing: When What I “Share” Is Yours

Among the many default, background, often unexamined assumptions of the digital revolution is that sharing is good. A major part of the digital revolution in rhetoric is to repurpose existing language in ways that advantage the promoters of one scheme or another. It is no surprise, then, that while to earlier generations sharing may well have been, in more or less uncomplicated ways, good, the rhetorical revolution works to dissuade us from asking whether the ethics associated with the earlier terminology still apply, telling us instead that if we call it sharing, it must be good.

This is fecund ground for critics of the digital, and rightly so. Despite being called “the sharing economy”—a phrase almost as literally oxymoronic as “giant shrimp,” “living dead” or “civil war”—the companies associated with that practice have very little to do with what we have until now called “sharing.” As a rule, they have much more in common with digital-sharecropping operations like Facebook than their promotional materials suggest, charging rent on the labor and property of individuals while the centralized providers make enormous profits on volume (and often enough by offloading inherent employer costs onto workers, while making it virtually impossible for them to act as organized labor). Of course there is a continuum; there are “sharing economy” phenomena that are not particularly associated with the extraction of profit from a previously unexploited resource, but there are many others that are specifically designed to do just that.

One phenomenon that has so far flown under the radar in discussions of peer-to-peer production and the sharing economy, but that demands recognition on its own, is one for which I think an apt name would be crowdforcing. Crowdforcing in the sense I am using it refers to practices in which one or more persons decides for one or more others that their resources will be shared, without those others’ consent or even, perhaps more worryingly, their knowledge. While this process has analogs, and has even itself occurred, prior to the digital revolution and the widespread use of computational tools, it has positively exploded thanks to them, and thus in the digital age may well constitute a difference in kind as well as in degree.

Once we conceptualize it this way, crowdforcing can be found with remarkable frequency in current digital practice. Consider the following recent events:

  • In a recent triumph of genetic science, a company called DeCode Genetics has created a database with the complete DNA sequences for all 320,000 citizens of Iceland. Slightly less noted in the news coverage of the story is that DeCode collected actual tissue samples from only about 10,000 people, and then used statistical techniques to extrapolate the remaining 310,000 sequences. This is not population-level data: it is the full genetic sequence of each individual. As the MIT Technology Review reported, the study raises “complex medical and ethical issues.” For example, “DeCode’s data can now identify about 2,000 people with the gene mutation across Iceland’s population, and [DeCode founder and CEO Kári] Stefánsson said that the company has been in negotiations with health authorities about whether to alert them.” Gísli Pálsson, an anthropologist at the University of Iceland, is reported as saying that “This is beyond informed consent. People are not even in the studies, they haven’t submitted any consent or even a sample.” While there are unique aspects to Iceland’s population that make it particularly useful for a study like this, scientists have no doubt that “This technique can be applied to any population,” according to Myles Axton, chief editor of Nature Genetics, which published some of DeCode’s findings. And while news coverage of the story has dwelt on the complex medical-ethics issues relating to informing people who may not want to know of their risk for certain genetic diseases, this reasoning can and must be applied much more widely: in general, thanks to big data analytics, when I give data to a company with my overt consent, I am often sharing a great deal more data about others to whom I am connected, without even their knowledge, let alone any kind of consent. We can see this in the US on popular sites like 23andMe and Ancestry.com, where the explicit goals often include generating specific inferential data about people who are not using the product. The US itself is in the process of assembling a genetic database that may mimic the inferential capacities of the Icelandic one. Genetic information is one of the better-regulated parts of the data sciences (though regulation even in this domain remains inadequate), and yet even that regulation seems premised on an impoverished vision of what is possible with this data; where do we look for constraints on this sort of data analysis in general? (A toy sketch of the kind of inference at issue appears just after this list.)
  • Sharing pictures of your minor children on Facebook is already an interesting enough issue. Obviously, you have the parental right to decide whether or not to post photos of your minor children, but parents likely do not understand all the ramifications of such sharing for themselves, let alone for their children, not least since none of us know what Facebook and the data it harvests will be like in 10 or 20 years. Yet an even more pressing issue arises when people share pictures on Facebook and elsewhere of other peoples’ minor children, without the consent or even knowledge of those parents. Facebook makes it easy to tag photos with the names of people who don’t belong to it. The refrain we hear ad nauseam—“if you’re concerned about Facebook, don’t use it”—is false in many ways, among which the most critical may be that those most concerned about Facebook, who have therefore chosen not to use it, may thereby have virtually no control over not just the “shadow profile” Facebook reportedly maintains for everyone in the countries where it operates, but even over what appears to be ordinary sharing data that can be used by all the data brokers and other social-analytics providers. Thus while you may make a positive, overt decision not to share about yourself, and even less about the minor children of whom you have legal guardianship, others can and routinely do decide you are going to anyway.
  • So-called “Sharing Economy” companies like Uber, Lyft, and particularly in this case AirBnB insist on drawing focus to the populations that look most sympathetic from the companies’ point of view: first, the individual service providers (individuals who earn extra money by occasionally driving a car for Uber, or who rent out their apartments when out of the city for a few weeks), and second, the individual users (those buying Uber car rides or renting AirBnB properties). They work hard to draw attention away from themselves as companies (except when they are hoping to attract investor attention), and even more strongly away from their impact on the parts of the social world that are affected by their services—in so far as these are mentioned at all, it is typically with derisive words like “incumbent,” and in contexts where we are told that the world would beyond question be a better place if these “incumbents” would just go away. One does not need to look hard on peers.org, an astroturf industry lobbying group disguised as a grassroots quasi-socialist advocacy organization, to find paean after paean to the benefits brought to individuals by these giant corporate middlemen. (More objectively, much of the “sharing economy” looks like yet another, particularly broad, case of the time-honored rightist practice of privatizing profits while “socializing” losses.) One has to look much harder—in fact, one will look in vain—to find accounts of the neighbors of AirBnB hosts whose zoned-residential buildings have turned into unregulated temporary lodging facilities, with all of the attendant problems these bring. One has to look even harder to find thoughtful economic analyses of the longer-term effects of housing properties routinely taking in additional short-term income: it does not take much more than common sense to realize that the additional income flowing in will eventually get added to the value of the properties themselves, in time pricing the current occupants out of the properties in which they live. The impact of these changes may be most pronounced on those who have played no role whatsoever in the decision to rent out units on AirBnB—in fact, in the case of condominiums, the community may have explicitly ruled out such uses for exactly this reason, and yet, absent intensive policing by the condo board, may find its explicit rules and contracts being violated in any number of ways. And condo boards are among the entities with the most power to resist these involuntary “innovations” on established guidelines; others have no idea they are happening. As Tom Slee’s persistent research has shown, AirBnB has a disturbing relationship with what appear to be a variety of secretive corporate bodies that have essentially turned zoned-residential properties into full-time commercial units, which not only violates laws that were enacted for good reason, but also promises to radically alter property values for residents using the properties as the law currently permits.
  • Related to the sharing of genetic information is the (currently) much broader category of information that we now call the quantified self. In the largest sense we could include in this category any kind of GTD or to-do list, calorie and diet trackers, health monitors, exercise and fitness trackers, monitoring devices such as FitBit, and even glucose monitors, with many more to come. On the one hand, there are certainly mild crowdforcing effects of collecting this data on oneself, just as there are crowdforcing effects of SAT prep programs (if they work as advertised, which is debatable) and steroid usage in sports. But when these data are collected and aggregated—whether by companies like FitBit or even by user communities—they start to impact all of us, in ways that only seem minor today. When these data get accessed by brokers to insurers, or by insurers themselves, they provide ample opportunities for those of us who choose not to use these methods at all to be directly impacted by other people’s use of them: from health insurers starting to charge us “premiums” (i.e., deny us “discounts”) if we refuse to give them access to our own data, to inferences made about us when the data they do have is run through big-data correlations with the richer data provided by QS participants, and so on.
  • The concerns raised by practitioners and scholars like Frank Pasquale, Cathy O’Neil, Kate Crawford, Danielle Citron, Julia Angwin, and others about the uses of so-called “big data analytics” resonate at least to some extent with the issue of crowdforcing. Big data providers create statistical aggregates of human behavior based on collecting what are thought to be representative samples of various populations. They cannot create these aggregates without the submission of data from the representative group, and those willing to submit their data may not themselves understand the purposes to which it is being put. A simple example is found in the so-called “wellness programs” increasingly attached to health insurance. In the most common formulation, insurers offer a “discount” to those customers willing to submit to a variety of tests and data-collection routines, and to participate in certain forms of proactive health activity (such as using a gym). Especially in the first two cases, it looks to the user as if the insurer is incentivizing them to take tests that may find markers of diseases that are easily treated in their early stages and much less easily treated if caught later. Regardless of whether these techniques work, which is debatable, the insurers have at least one other motive in pushing these programs: to collect data on their customers and create statistical aggregates that affect not just those who submit to the testing, but their entire base of insured customers, and even those they do not currently insure. The differential pricing that insurers call “discounts” or sometimes “bonuses” (but rarely “penalties,” which, speaking financially, they also are: it is another characteristic of the kinds of marketing rhetoric swamping everything in the digital world that literally the same practice can appear noxious if called a “penalty” but welcome if called a “discount”) seems entirely at odds with the central premise of insurance, which is that risk is distributed across large groups regardless of their particular risk profiles. Even in the age of Obamacare, which discourages insurers from discriminating based on pre-existing conditions, and in which large employers are required by law to provide insurance at the same cost to all their employees, these “discounts” allow insurers to do just the opposite, and suggest a wide range of follow-on practices that will discriminate with an even finer-toothed comb. If customers understood that the “bonuses” exist to craft discriminatory data profiles at least as much as to promote customer wellness, perhaps they would resist these “incentives” more strenuously. As it is, not only do those being crowdforced by these policies have very little access to information that makes clear the purpose of this data collection, but those contributing to it have very little idea of what they are doing to other people or even to themselves. And this, of course, is one of the deep problems with social media in general, most notably exemplified by the data brokerage industry.
  • As a final example, consider the proposed and so-far unsuccessful launch of Google Glass. One of the most maddening parts of the debate over Google Glass was that proponents would focus almost exclusively on the benefits Glass might afford them, while dismissing what critics kept talking about: the impact Glass has on others—on those who choose not to use Glass. Critics said: Glass puts all public (and much private) space under constant recorded surveillance, both by the individuals wearing Glass and, perhaps even more worryingly, by Google itself when it stores, even temporarily, that data on its servers. What was so frustrating about this debate was the refusal of Google and its supporters to see the critics’ point that the rights of people who choose not to use Glass were not up for the taking by those who did want to use it; Google’s explicit insistence that I have to allow my children to be recorded on video, and Google to store that video, simply by dint of their being alive wherever any person who happens to use Glass might be, was nothing short of remarkable. It was not hard to find Glassholes overtly insisting that their rights (and therefore Google’s rights) trump those of everyone else. This controversy was a particularly strong demonstration that the “if you don’t like it, don’t use it” mantra is false. I think we all have to look at Google’s failure (so far) to create a market for Glass as a real success of mass public engagement in rejecting the ability of a huge for-profit corporation to dictate terms to everyone. It is even a success of the free market, in the sense that Google’s market research clearly showed that this product was not going to be met with significant consumer demand. But it is partly the visibility of Glass that allowed this to happen; too much of what I talk about here is not immediately visible in the way Glass was.
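
To make the DeCode example above concrete, here is a toy sketch, in Python, of the kind of inference at issue. It is emphatically not DeCode’s actual pipeline, which combines Iceland’s genealogical records with chip and sequencing data; it shows only the elementary Mendelian fact that samples given with consent fix the genotype probabilities of relatives who never gave one.

```python
# Toy illustration: one person's consented sample constrains what is
# knowable about relatives who never consented. Elementary Mendelian
# inference only -- not DeCode's actual imputation method.
from collections import Counter
from itertools import product

def child_genotype_dist(parent1: str, parent2: str) -> dict:
    """Probability distribution over a child's genotype at one
    biallelic locus, given both parents' genotypes (e.g. 'Aa')."""
    outcomes = Counter(
        "".join(sorted(a + b))          # normalize 'aA' to 'Aa'
        for a, b in product(parent1, parent2)
    )
    total = sum(outcomes.values())
    return {g: n / total for g, n in outcomes.items()}

# Two people who consented to sampling...
print(child_genotype_dist("Aa", "Aa"))  # {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}
# ...have thereby put on file a 25% risk estimate for a child who was
# never sampled; with other parental genotypes, the child's genotype is
# fully determined without any sample at all:
print(child_genotype_dist("AA", "aa"))  # {'Aa': 1.0}
```

Run across hundreds of thousands of genealogically linked people, inference of this general kind is what allows roughly 10,000 actual samples to stand in for 320,000 genomes; the ethics of consent have simply not caught up with the arithmetic.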

To some extent, crowdforcing is a variation on the relatively wide class of phenomena that economists refer to as externalities. Externalities are, in general, costs or benefits that affect a party who played no direct role in the transaction that produced them. These are usually put into two main classes. “Negative” externalities occur when costs are imposed on uninvolved parties; the classic examples usually have to do with environmental pollution by for-profit companies, the cleanup and health costs of which are “paid” by individuals who may have no connection whatsoever with the company. “Positive” externalities occur when someone else’s actions with which I’m uninvolved benefit me regardless: the simplest textbook example is something like neighbors improving their houses, which may raise property values even for homeowners who have done no work on their properties at all. Externalities clearly occur with particular frequency in times of technological and market change; there were no doubt quite a few people who would have preferred to use horses for transportation even after so many people were driving cars that horses could no longer be allowed on the roadways. While these kinds of externalities might be in some way homologous with some of the crowdforcing examples that are economic in focus (such as the impact of AirBnB and Uber on the economic conditions of the properties they “share”), they strike me as not capturing so well the data-centric aspects of much current sharing. Collecting blood samples from those individuals who were tested certainly allowed researchers in the past to determine normal and abnormal levels of the various components of human blood, but it did not make it possible to infer much (if anything) about my blood without my being tested. In fact, in the US, Fifth Amendment protections against self-incrimination extend to the collection of such personal data because it is considered unique to each human body: that is, it is currently impermissible to collect DNA samples from everyone unless a proper warrant has been issued. How do we square this right with the ability to infer everyone’s DNA without most of us ever submitting to collection?

Crowdforcing effects also overlap with phenomena researchers refer to by names like “neighborhood effects” and “social contagion.” In each of these, what some people do ends up affecting what many other people do, in a way that goes well beyond the ordinary majoritarian aspects of democratic culture. That is, we know that only one candidate will win an election, and that those who did not vote for that candidate will therefore be (temporarily) forced to acknowledge the political rule of people with whom they don’t agree. But this happens in the open, with the knowledge and even the formal consent of all those involved, even if that consent is not always completely understood.

Externalities produced by economic transactions often look something like crowdforcing. For example, when people with means routinely hire tutors and coaches for their children for standardized tests, they end up skewing the results even more in their favor, thus impacting those without means in ways they frequently do not understand and may not be aware of. This can happen in all sorts of markets, even in cultural markets (fashion, beauty, privilege, skills, experience). But it is only the advent of society-wide digital data collection and analysis techniques that makes it so easy to sell your neighbor out without their knowledge and consent, and to have what is sold be so central to their lifeworld.

Dealing with this problem requires, first of all, conceptualizing it as a problem. That’s all I’ve tried to do here: suggest the shape of a problem that, while not entirely new, comes into stark relief and becomes widespread due to the availability of exactly the tools that are routinely promoted as “crowdsourcing” and “collective intelligence” and “networks.” As always, this is by no means to deny the many positive effects these tools and methods can have; it is to suggest that we are currently overly committed to finding those positive effects and not to exploring or dwelling on the negative effects, as profound as they may be. As the examples I’ve presented here show, the potential for crowdforcing effects on the whole population is massive, disturbing, and only increasing in scope.

In a time when so much cultural energy is devoted to the self, to maximizing, promoting, decorating, and sharing it, it has become hard to think with anything like the scrutiny required about how our actions impact others. From an ethical perspective, this is typically the most important question we can ask: arguably it is the foundation of ethics itself. Despite the rhetoric of sharing, we are doing our best to turn away from examining how our actions impact others. Our world could do with a lot more, rather than less, of that kind of thinking.

Posted in "social media", cyberlibertarianism, materality of computation, rhetoric of computation, what are computers for | Tagged , , , , , , , , , , , , , , , , , , , , , , , , | 9 Responses

Tor, Technocracy, Democracy

As important as the technical issues regarding Tor are, at least as important—probably more important—is the political worldview that Tor promotes (as do other projects like it). While it is useful and relevant to talk about formations that capture large parts of the Tor community, like “geek culture” and “cypherpunks” and libertarianism and anarchism, one of the most salient political frames in which to see Tor is also one that is almost universally applicable across these communities: Tor is technocratic. Technocracy is a term used by political scientists and technology scholars to describe the view that political problems have technological solutions, and that those technological solutions constitute a kind of politics that transcends what are wrongly characterized as “traditional” left-right politics.

In a terrific recent article describing technocracy and its prevalence in contemporary digital culture, the philosophers of technology Evan Selinger and Jathan Sadowski write:

Unlike force wielding, iron-fisted dictators, technocrats derive their authority from a seemingly softer form of power: scientific and engineering prestige. No matter where technocrats are found, they attempt to legitimize their hold over others by offering innovative proposals untainted by troubling subjective biases and interests. Through rhetorical appeals to optimization and objectivity, technocrats depict their favored approaches to social control as pragmatic alternatives to grossly inefficient political mechanisms. Indeed, technocrats regularly conceive of their interventions in duty-bound terms, as a responsibility to help citizens and society overcome vast political frictions.

Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans, Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands.

While these suggestions typically frame themselves in terms of the words we use to describe core political values—most often, values associated with democracy—they actually offer very little discussion adequate to the rich traditions of political thought that articulated those values to begin with. That is, technocratic power understands technology as an area of precise expertise, in which one must demonstrate a significant level of knowledge and skill as a prerequisite even to contributing to the project at all. Yet technocrats typically tolerate no such characterization of law or politics: these are trivial matters not even up for debate, and in so far as they are up for debate, they are matters for which the same technical skills qualify participants. This is why it is no surprise that among the 30 or 40 individuals listed by the project as “Core Tor People,” the vast majority are developers or technology researchers, and the few for whom politics is even part of their ambit approach it almost exclusively as technologists. The actual legal specialists, no more than a handful, tend to be dedicated advocates for the particular view of society Tor propagates. In other words, there is very little room in Tor for discussion of its politics, for whether the project actually does embody widely-shared political values: this is taken as given.

“Freedom Is Slavery,” a flag of the United Technocracies of Man, a totalitarian oligarchic state in the Ad Astra Per Astera alternate history by RvBOMally

This would be fine if Tor really were “purely” technological—although just what a “purely” technological project might be is by no means clear in our world—but Tor is, by anyone’s account, deeply political, so much so that the developers themselves must turn to political principles to explain why the project exists at all. Consider, for example, the Tor Project blog post written by lead developer Roger Dingledine that describes the “possible upcoming attempts to disable the Tor network” discussed by Yasha Levine and Paul Carr on Pando. Dingledine writes:

The Tor network provides a safe haven from surveillance, censorship, and computer network exploitation for millions of people who live in repressive regimes, including human rights activists in countries such as Iran, Syria, and Russia.

And further:

Attempts to disable the Tor network would interfere with all of these users, not just ones disliked by the attacker.

Why would that be bad? Because “every person has the right to privacy. This right is a foundation of a democratic society.”

This appears to be an extremely clear statement. It is not a technological argument: it is a political argument. It was generated by Dingledine of his own volition; it is meant to be a—possibly the—basic argument that justifies Tor. Tor is connected to a fundamental human right, the “right to privacy,” which is a “foundation” of a “democratic society.” Dingledine is certainly right that we should not do things that threaten such democratic foundations. At the same time, Dingledine seems not to recognize that terms like “repressive regime” are inherently and deeply political, and that “surveillance” and “censorship” and “exploitation” name political activities whose definitions vary according to legal regime and even political point of view. Clearly, many users of Tor consider any observation by any government, for any reason, to be “exploitation” by a “repressive regime,” which is consistent for the many members of the community who profess some variety of anarchism or anarcho-capitalism, but not for those with other political views, such as those who think that there are circumstances under which laws need to be enforced.

What is especially concerning about this argument is that it mischaracterizes the nature of the legal guarantees of human rights. In a democracy, it is not actually up to individuals on their own to decide how and where human rights should be enforced or protected, and then to create autonomous zones wherein those rights are protected in the terms they see fit. Instead, in a democracy, citizens work together to have laws and regulations enacted that realize their interpretation of rights. Agitating for a “right to privacy” amendment to the Constitution would be appropriate political action for privacy in a democracy. Even certain forms of (limited) civil disobedience are an important part of democracy. But creating a tool that you claim protects privacy according to your own definition of the term, overtly resisting any attempt to discuss what it means to say that it “protects privacy,” and then insisting that everyone use it and that nobody, especially those lacking the coding skills to be insiders, complain about it, because of its connection to fundamental rights, is profoundly antidemocratic. Like all technocratic claims, it challenges what actually is a fundamental precept of democracy, one that few across the political spectrum would dispute: that open discussion of every issue affecting us is required in order for political power to be properly administered.

It doesn’t take much to show that Dingledine’s statement about the political foundations of Tor can’t bear the weight he places on it. I commented on the Tor Project blog, pointing out that he is using “right to privacy” in a different way from what that term means outside of the context of Tor: “the ‘right to privacy’ does not mean what you assert it means here, at all, even in those jurisdictions that (unlike the US) have that right enshrined in law or constitution.” Dingledine responded:

Live in the world you want to live in. (Think of it as a corollary to ‘be the change you want to see in the world’.)

We’re not talking about any particular legal regime here. We’re talking about basic human rights that humans worldwide have, regardless of particular laws or interpretations of laws.

I guess other people can say that it isn’t true — that privacy isn’t a universal human right — but we’re going to keep saying that it is.

This is technocratic two-stepping of a very typical and deeply worrying sort. First, Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration of Human Rights of 1948 is very clear that states, and bodies like the UN to which states belong, are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign that role to themselves, and that members of democratic polities have no choice but to accept their having that role.

We don’t have to look very hard to see the problems with that. Many in the US would assert that the right to bear arms means that individuals can own guns (or even more powerful weapons). More than a few construe this as a human or even a natural right. Many would say “the citizen’s right to bear arms is a foundation of a democratic society.” Yet many would not. Another democracy, the UK, does not allow citizens to bear arms. Tor, notably, is the home of many hidden services that sell weapons. Is it for the Tor developers to decide what is and what is not a fundamental human right, and how states should recognize such rights, and to distribute weapons in the UK despite its explicit, democratically-enacted legal prohibition of them? (At this point, it is only the existence of legal services beyond Tor’s control that makes this difficult, but that has little to do with Tor’s operation: if it were up to Tor, the UK legal prohibition on weapons would be overwritten by technocratic fiat.)

We should note as well that once we venture into the terrain of natural rights and natural law, we are deep in the thick of politics. It simply is not the case that all political thinkers, let alone all citizens, are going to agree about the origin of rights, and even fewer would agree that natural rights lead to a natural law that transcends the power of popular sovereignty to protect. Dingledine’s appeal to natural law is not politically neutral: it takes a side in a central, ages-old debate about the origin of rights and the nature of the bodies that guarantee them.

That’s fine, except when we remember that we are asked to endorse Tor precisely because it instantiates a politics so fundamental that everyone, or virtually everyone, would agree with it. Otherwise, Tor is a political animal, and the public should accede to its development no more than it does to any other proposed innovation or law: it must be subject to exactly the same tests as everything else. Yet this is exactly what Tor claims to be above, in many different ways.

Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t.

Tor, like all other technocratic solutions (or solutionist technologies), is profoundly political. Rather than claiming to be above politics, it should invite vigorous political discussion of its functions and purpose (as at least the Tor Project’s outgoing Executive Director, Andrew Lewman, has recently stated, though there have yet to be many signs that the Tor community, let alone the core group of “Tor People,” agrees with this). Rather than a staff composed entirely of technologists, any project with the potential to intervene so directly in so many vital areas of human conduct should include at least as many people with political and legal expertise as technologists. It should be able to articulate its benefits and drawbacks fully in the operational political language of the countries in which it operates. It should be able to acknowledge that an actual foundation of democratic polities is the need to make accommodations and compromises between people whose political convictions differ. It needs to make clear that it is a political project, and that like all political projects, it exists subject to the will of the citizenry, to whom it reports, and which can decide whether or not the project should continue. Otherwise, it disparages the very democratic ground on which many of its promoters claim to operate.

This, in the end, is one reason that Pando’s coverage of Tor is so important, and a reason the hostility that coverage has received strikes me as seriously unfortunate. I think many in Tor know much less about politics than they think they do. If they did, they might wonder as I do why it is that organizations like Radio Free Asia and the Broadcasting Board of Governors have been such persistent supporters of the project. These organizations are not in the business of supporting technology for technology’s sake, or science for the sake of “pure science.” Rather, they promote a particular view of “media freedom” that is designed to advance the values of the US and some of its allies. These organizations have strong ties to the intelligence community. Anyone with a solid knowledge of political history will know that RFA and BBG only fund projects that advance their own interests, and that those interests are those of the US at its most hegemonic, at its most willing to push its way inside other sovereign states. Many view them as distributors of propaganda, pure and simple.

You don’t have to look hard to find this information: Wikipedia itself notes that Catharin Dalpino of the centrist Brookings Institution think tank (i.e., no wild-eyed radical) says of Radio Free Asia: “It doesn’t sound like reporting about what’s going on in a country. Often, it reads like a textbook on democracy, which is fine, but even to an American it’s rather propagandistic.” It is no stretch to see the “media freedom” agenda of these organizations and the “internet freedom” agenda surrounding Tor as more alike than different. Further, Tor is arguably a much more powerful tool than media broadcasts, despite how powerful those themselves are. This is not to say that it is absolutely wrong for the US to promote its values this way, or that everything about Radio Free Europe and Radio Free Asia was and is bad. It is to say that these are profoundly political projects, and democracy demands that the citizenry and its elected representatives, not technocrats, decide whether to pursue them.

We are often told that Tor is just trying to do good, trying to inspire respect for human decency and human rights, and that its community is attacked only because it is “an easy target.” Yet the contrary story is much more rarely told: that Tor encourages a technocratic dismissal of democratic values, and promotes serious and seriously uninformed anti-government hostility. Further, despite the claims of its advocates that Tor is meant to protect “activists” against human rights abuses (as the Tor community construes these), the fact remains that to many observers, Tor is just as lucidly seen as a tool that promotes and encourages human rights abuses of the very worst kind: child pornography, child exploitation, all the crimes and suffering that go along with worldwide distribution of illegal drugs, assassination for hire, and much more. The Tor community dismisses these worries as “FUD” (or more poetically, as the “Four Horsemen of the Info-Apocalypse”), but evidence that they are real is very hard for the objective observer to overlook (even lists on the open web of the most widely-used hidden services reveal very few that are not involved in circumventing laws that many may consider not only reasonable but important). The “use case” for encrypted messaging such as OTR (Off-The-Record messaging) is far easier to understand in a political sense than is the one for the hidden services that sell drugs and weapons, promote rape porn, and so on. It is beyond ironic that a tool whose most salient uses may be the most serious affronts to human rights should be promoted as if its contributions to human rights are so obvious as to be beyond question. Does Tor do “good”? No doubt. But it also enables some very bad things, at least as I personally evaluate “good” and “bad.” You can’t say that on the one hand the good it enables accrues to Tor’s benefit, while the bad it enables is just an unavoidable cost of doing business. With very limited exceptions (e.g., speech itself, and even there the balance is contested) we don’t treat cultural phenomena that way. The only name for striking the right balance between those poles is politics, and it is entirely possible that the political balance Tor strikes is one that, were it better understood, few people would assent to. Making decisions about matters like this, not the expanded and putative “right to privacy,” is the foundation of democracy. Unless Tor learns not just to accommodate but to encourage such discussions, it will remain a project based on technocracy, not democracy, and therefore one that those of us concerned about the fate of democracy must view with significant concern.

Posted in "hacking", cyberlibertarianism, privacy, rhetoric of computation, what are computers for | Tagged , , , , , , , , , , , , , , , , , , , , | Leave a comment