The Terribly Thin Conception of Ethics in Digital Technology

Thanks in part to ongoing revelations about Facebook, the discussion of the need for deep thinking about ethics in the fields of engineering, computer science, and the commercial businesses built out of them is louder today than it has been for some time. In the Boston Globe, Yonatan Zunger wrote about an “ethics crisis” in computer science. In The New York Times, Natasha Singer wrote about “tech’s ethical ‘dark side.’”

Chris Gilliard wrote an excellent article in the April 9, 2018 Chronicle of Higher Education, focusing specifically on education technology, titled “How Ed Tech is Exploiting Students.” Since students are particularly affected by academic programs like computer science and electrical engineering, one might imagine and hope that teachers of these subjects would be particularly sensitive to ethical concerns. (Full disclosure: I consider Chris a good friend; he and I have collaborated in the past and intend to do so in the future, and I read an early draft of his Chronicle piece and provided comments on it.)

[Image: robot teacher fooled students (source: YouTube)]

Gilliard’s concerns, expressed repeatedly in the article, have to do with 1) what “informed consent” means in the context of education technology; 2) the fact that participating in certain technology projects entails that students are, often unwittingly, contributing their labor to projects that benefit someone else—that is, they are working for free; and 3) the fact that the privacy implications of many ed-tech projects are not at all clear to the students:

Predictive analytics, plagiarism-detection software, facial-recognition technology, chatbots — all the things we talk about lately when we talk about ed tech — are built, maintained, and improved by extracting the work of the people who use the technology: the students. In many cases, student labor is rendered invisible (and uncompensated), and student consent is not taken into account. In other words, students often provide the raw material by which ed tech is developed, improved, and instituted, but their agency is for the most part not an issue.

Gilliard gives a couple of examples of ed-tech projects that concern him along these lines. One of them is a project by Prof. Ashok Goel of the Georgia Institute of Technology.

Ashok K. Goel, a professor at the Georgia Institute of Technology, used IBM’s “Watson” technology to test a chatbot teaching assistant on his students for a semester. He told them to email “Jill” with any questions but did not tell them that Jill was a bot.

Gilliard summarizes his concerns about this and other projects as focusing on:

how we think about labor and how we think about consent. Students must be given the choice to participate, and must be fully informed that they are part of an experiment or that their work will be used to improve corporate products.

In an April 11 letter to the Chronicle, Goel objected to being included in Gilliard’s article. Yet rather than rebutting Gilliard’s critique, Goel’s response affirms both its substance and its spirit. In other words, despite claiming to honor the ethical concerns Gilliard raises, Goel seems not to understand them, and to use his lack of understanding as a rebuttal. This reflects, I think, the incredibly thin understanding of ethics that permeates the world of digital technology, especially but not at all only in education technology.

Here are the substantive parts of Goel’s response:

In this project, we collect questions posed by students and answers given by human teaching assistants on the discussion forum of an online course in artificial intelligence. We use this data exclusively for partially automating the task of answering questions in subsequent offerings of the course both to reduce teaching load and to provide prompt answers to student questions anytime anywhere. We deliberately elected not to inform the students in advance the first time we deployed Jill Watson as a virtual teaching assistant because she is also an experiment in constructing human-level AI and we wanted to determine if actual students could detect Jill’s true identity in a live setting. (They could not!)

In subsequent offerings of the AI class over the last two years, we have informed the students at the start of the course that one or more of the teaching assistants are a reincarnation of Jill operating under a pseudonym and revealed the identity of the virtual teaching assistant(s) at the end of the course. The response of the several hundred students who have interacted with Jill’s various reincarnations over two years has been overwhelmingly positive.

In what follows I am going to assume that Goel raised all the issues he wanted to in his letter. It’s possible that he didn’t; the Chronicle maintains a tight word limit on letters. But it is clear that the issues raised in the letter are the primary ones Goel saw in Gilliard’s article and that he thinks his project raises.

In almost every way, the response affirms the claims Gilliard makes rather than refuting them. First, Gilliard’s article clearly referred to “a semester,” which can only be the first time the chatbot was used, and Goel indicates, without explanation or justification, that he “deliberately elected not to inform the students in advance” about the project during that semester. Yet that deliberation is exactly one of Gilliard’s points: what gives technologists the right to think that they can conduct such experiments without student consent in the first place? Goel does not tell us. That subsequent instances had consent—of a sort, as I discuss next—only reinforces the notion that the first instance should have had consent as well.

There are even deeper concerns, which happen also to be the specific ones Gilliard raises. First, what does “informed consent” mean? The notion of “informed consent” as promulgated by, for example, the Common Rule of the HHS, the best guide we have to the ethics of human experimentation in the US, insists that one can only give consent if one has the option not to give consent. This is not rocket science. Not just the Common Rule, but the 1979 Belmont Report on which the Common Rule is based, itself reflecting on the Nuremberg Trials, defines “informed consent” specifically with reference to the ability of the subject to refuse to participate. This is literally the first paragraph of the Belmont Report’s section on “informed consent”:

Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them. This opportunity is provided when adequate standards for informed consent are satisfied.

If anything, the idea of “informed consent” has grown only richer since then. Perhaps Goel allows students to take a different section of the Artificial Intelligence class if they do not want to participate in the Jill Watson experiment; such a choice would be required for student consent to be “consent.” His letter reads as if Goel does not realize that “informed consent” without choice is not consent at all. If so, this is not an isolated problem. Some have argued, rightly in my opinion, that failure to understand the meaning of “consent” is a structural problem in the world of digital technology, one that ties the behavior of software, platforms, and hardware to the sexism and misogyny of techbro culture. Even the Association for Computing Machinery (ACM, the leading professional organization for computer scientists) maintains a “Code of Ethics and Professional Conduct” that speaks directly of “respecting the privacy of others” in a way that is hard to reconcile with the Jill Watson experiment and with much else in the development of digital technology.

Further, Goel indicates that Jill Watson is also “an experiment in constructing human-level AI.” He does not make clear whether students are told that this is part of the point of the project. Nor does he acknowledge that the pursuit of what he calls “human-level AI” (a phrase that many philosophers, cognitive scientists, and other researchers, including myself, consider a misapplication of ordinary language) raises significant ethical questions of its own, the nature and extent of which certainly go far beyond what students in a course about AI can possibly have covered before the course begins, if they are covered at all. Do the students truly understand the ethical concerns raised by so-called AIs that can effectively mimic the responses of human teachers? Is their informed consent rich with discussion of these considerations? Do they understand the labor implications of developing chatbots that can replace human teachers at universities? If so, Prof. Goel does not say.

The sentence about the Watson “experiment” appears to contradict another sentence in the same paragraph, where Goel writes that data generated by the Jill Watson experiment is used “exclusively for partially automating the task of answering questions in subsequent offerings of the course” (emphasis added). Perhaps the meaning of “exclusively” here is that the literal data collected to train Jill Watson is segregated into that project. But the implication of the “experiment” sentence is that whether or not that is the case, the project itself generates knowledge that Goel and his collaborators are using in their other research and even commercial endeavors. This is exactly the concern that is front and center in Gilliard’s article. When the students are fully informed about the ethical and privacy considerations raised by the technology in the course they are about to take, are they provided with a full accounting of Goel’s academic and commercial projects, with detailed explanations of how results developed in the Jill Watson project may or may not play into them? Once again, if so, Goel appears not to think such concerns needed to be mentioned in his letter.

At any rate, Goel certainly makes it sound as if the work done by students in the course helps his own research projects, whether by providing training data for the Jill Watson AI model or by providing research feedback for future models. So do the press releases Georgia Tech issues about the project. It seems quite possible that this research could lead directly or indirectly to commercial applications; it may already be leading in that direction.

Gilliard concludes his article by writing that “When we draft students into education technologies and enlist their labor without their consent or even their ability to choose, we enact a pedagogy of extraction and exploitation.” In his letter Goel entirely overlooks the question of labor, and claims, in effect, that consenting simply to have a bot as a virtual TA (with no apparent alternative, and without clear discussion of the various ways their participation in the project might inform future research and commercial endeavors) mitigates the exploitation Gilliard writes about. This exchange only demonstrates how much work ethicists have left to do in the field of education technology (and digital technology in general), and how uninterested technologists are in listening to what ethicists have to say.

UPDATE, May 3, 5:30pm: Soon after posting this story, I was directed by my friends Evan Selinger and Audrey Watters to two papers by Goel (paper one; paper two) that indicate he had approval from an Institutional Review Board (the bodies that implement the Common Rule) for the Jill Watson project, and in which he writes at greater length than he does in the Chronicle letter about some of its ethical implications. I will update this post soon with some reflections on these papers.

Posted in cyberlibertarianism, privacy, rhetoric of computation, what are computers for

Please Consider Supporting Our Legal Challenge to Cambridge Analytica’s Role in the Trump Election

Since December of last year, I have been part of a small group of concerned citizens engaged in a series of actions against Cambridge Analytica (CA) and its parent corporation, SCL Group.

I am writing this post in the hopes of gathering the support (that is, the funds) we need to continue this action. You can support us at our Crowd Justice page, which has more information.

Here I’ve tried to lay out some of the background behind our efforts.

[Image: Crowd Justice campaign header]

Our actions are driven by concern about claims made by CA, and by those whose work it relies on, regarding the level of behavioral manipulation of which they are capable, and specifically about whether the techniques CA has developed have been used to manipulate voting behavior, especially in the 2016 US Presidential election and the UK Brexit referendum (although as US citizens our inquiry is limited to the US election). Carole Cadwalladr of The Guardian is the journalist who has covered the topic most extensively: see, for example, her piece on this campaign, “British Courts May Unlock Secrets of How Trump Campaign Profiled US Voters,” her earlier pieces on the Brexit/Leave.EU campaign, “Follow the Data: Does a Legal Document Link Brexit Campaigns to US Billionaire?” and “The Great British Brexit Robbery: How Our Democracy Was Hijacked,” and her pieces on the US Presidential election, including “Cambridge Analytica Affair Raises Questions Vital to Our Democracy” and “When Nigel Farage Met Julian Assange.”

The UK has more extensive data protection laws than the US does. Its laws and regulations are administered by the Information Commissioner’s Office (ICO here not standing for Initial Coin Offering). Because CA/SCL, a British company (SCL Group) with an American subsidiary (CA), appears to have directly collected and used data about US citizens in its work for the Cruz and Trump campaigns in the 2016 election, the UK Data Protection Act (DPA) applies. The DPA has many provisions allowing individuals to discover exactly what data is being collected about them and how it is being used.

In late 2016, David Carroll of Parsons School of Design, I, and a few others, working with data researcher Paul-Olivier Dehaye and the project he runs, submitted formal requests to CA/SCL under the UK Data Protection Act, which allows individuals to see the data companies hold about them.

During the spring, UK attorney (aka “solicitor”) Ravi Naik of Irvine Thanvi Natas Solicitors took an interest in our efforts and helped to coordinate our requests to CA/SCL. Ravi himself has written an article in The Guardian about the campaign.

It took longer than the 40 days the law allows, but eventually (in March) CA/SCL did return files to David and me. The file I received consisted of a single Excel spreadsheet with three tabs. Two of these contained relatively innocuous identifying information (date of birth, address, records of which elections I’ve voted in) that is available to marketers via public election records. The third, though, is shocking in its implications:

[Image: CA/SCL data]

What is startling about this data, in part, is that it appears to be specifically about how manipulable I might be with regard to central hot button issues in the political public sphere—not necessarily what my opinions are, but whether I would be susceptible to manipulation about issues like “Gun Rights” and “Traditional Social and Moral Values.” In general this psychographic profile strikes me as being plausible, though not necessarily how I consciously think I’d rank all of these issues for myself: but then again, the point of psychographic data is that it knows us better than we know ourselves.

We don’t think this information can possibly be complete, since it gives very little sense of what I think about any of these issues, which a marketer like CA/SCL would surely need in order to take targeted actions based on the data—for example, even if environmental issues are at level 10 importance to me, this data does not indicate whether that means I consider the problem to be climate change, or the idea that climate change is a fraud.

CA/SCL provided no information whatsoever on where and how this information was gathered, whether it represents a purchase of existing information or analytics performed on a body of data CA/SCL also has but has not disclosed, and so on.

In 2017, the ICO issued a document called “Guidance for Political Campaigning.” Among the many provisions of this guidance that CA/SCL would appear not to have followed scrupulously, even on the basis of this limited amount of data, is this:

79. The big data revolution has made available new and powerful technical means to analyse extremely large and varied datasets. These can include traditional datasets such as the electoral register but also information which people have made publicly accessible on Facebook, Twitter and other social media. Research and profiling carried out by and on behalf of political parties can now benefit from these advanced analytical tools. The outputs may be used to understand general trends in the electorate, or to find and influence potential voters. (16)

80. Whatever the purpose of the processing, it is subject to the DPA if it involves data from which living individuals can be identified. This brings with it duties for the party commissioning the analytics and rights for the individuals to whom the data relates. It includes the duty to tell people how their data is being used. While people might expect that the electoral register is used for election campaigns they may well not be aware of how other data about them can be used and combined in complex analytics. If a political organisation is collecting data directly from people eg via a website or obtains it from another source, it has to tell them what it is going to do with the data. In the case of data obtained from another source, the organisation may make the information available in other ways, eg on its website, if contacting individuals directly would involve disproportionate effort. It cannot simply choose to say nothing, and the possible complexity of the analytics is not an excuse for ignoring this requirement. Our code of practice on privacy notices, transparency and control provides advice on giving people this information. (16-17)

81. Even where information about individuals is apparently publicly accessible, this does not automatically mean that it can be re-used for another purpose. If a political organisation collects and processes this data, then it is a data controller for that data, and has to comply with the requirements of the DPA in relation to it. (17)

Other than providing the data mentioned above, CA/SCL has responded to us with a pattern of bullying and denial that suggests to me, at least, that it has much more to disclose and will do everything in its power not to.

In order to take the next step in our legal challenge to CA/SCL, we need to raise £25,000. That is a lot of money. None of the money is going to us; we are raising it through the established legal crowdfunding site Crowd Justice. The money is needed for two reasons. First, in the UK, the loser of a lawsuit can be forced to pay the winner’s legal fees (so-called “adverse costs”): if we sue CA/SCL and lose, we could be liable for the fees CA/SCL has paid to its attorneys, and with the Mercers backing CA/SCL, we are already certain that they will be using some of the highest-priced corporate attorneys available. Second, the money is needed to pay our own legal fees and to partly reimburse the solicitors working on the case for their time, even though most of their time is being donated.

We believe that continuing to force this issue could ultimately cause CA/SCL to release all of its data on the 2016 Presidential election, and possibly even the Brexit campaign. We also believe it may have extremely positive effects in preventing CA/SCL and other organizations from engaging in similar actions in the future.

We have at this point raised about £20,000 of the initial £25,000 we need to raise to start actions beyond making our subject data requests under the DPA. If you are at all inclined to help us in this effort, please visit our Crowd Justice page.

Posted in "social media", materality of computation, privacy, revolution, surveillance, we are building big brother, what are computers for

Article: “The Militarization of Language: Cryptographic Politics and the War of All against All”

I have an article in the latest boundary 2 titled “The Militarization of Language: Cryptographic Politics and the War of All against All.” It is my most sustained attempt to locate and critique a political philosophy in the discourse of encryption advocates, a project I’ve also addressed in pieces like “Code Is Not Speech” and “Tor, Technocracy, Democracy.” It’s a piece I haven’t posted drafts of before, in part because it includes a relatively strong critique of some of Jacob Appelbaum’s talks, especially his infamous 30c3 talk, “To Protect and Infect: The Militarization of the Internet (Part Two; in three acts).” The title of Appelbaum’s talk was part of what motivated me to write this piece, which appears as part of, and was commissioned for, a boundary 2 dossier called “The Militarization of Knowledge.”

Here’s the formal abstract:

The question of the militarization of language emerges from the politics surrounding cryptography, or the use of encryption in contemporary networked digital technology, and the intersection of encryption with the politics of language. Ultimately, cryptographic politics aims to embody at a foundational level a theory of language that some recent philosophers, including Charles Taylor and Philip Pettit, locate partly in the writings of Thomas Hobbes. As in Hobbes’s political theory, this theory of language is closely tied to a conception of political sovereignty as necessarily absolute, with the only available alternative to absolute sovereignty being a state of nature (or, more accurately, what Pettit 2008 calls a “second state of nature,” one in which language plays a key role). In that state of nature, the only possible political relation is what Hobbes calls a war of “all against all.” While Hobbes intended that image as a justification for the quasi-absolute power of the political sovereign, those who most vigorously pursue cryptographic politics appear bent on realizing it as a welcome sociopolitical goal. To reject that vision, we need to adopt a different picture of language and of sovereignty itself, a less individualistic picture that incorporates a more robust sense of shared and community responsibility and that entails serious questions about the political consequences of the cryptographic program.

[Image: boundary 2 cover]

If you’d like a copy and do have institutional access, please use this official link to the article at boundary 2 at Duke University Press.

If you don’t have institutional access and would like a copy, please email me (dgolumbia-at-gmail-dot-com) or access a copy at

Posted in "hacking", cyberlibertarianism, privacy, rhetoric of computation, surveillance

The Destructiveness of the Digital Humanities (‘Traditional’ Part II)

In what purport to be responses or rebuttals to critiques I and others have offered of Digital Humanities (DH), my argument is routinely misrepresented in a fundamental way. I am almost always said to oppose the use of digital technology in the humanities. This happens despite the fact that I and those I have worked with use digital technologies in hundreds of ways in our research and that our critiques—typically including exactly the ones DHers are responding to—make this explicit.

It is undeniable that DH is in some sense organized around the use of some digital tools (but not others, and this gap itself is a very important part of how, on my analysis, the DH formation operates, a matter I have written about at some length). What I and the scholars I work with, as opposed to some conservative pundits, worry about is not the use of digital technology in the humanities. Speaking only for myself, what I oppose most strongly is the attitude toward the rest of the humanities I find widespread in DH circles: the view that the rest of the humanities (and particularly literary studies) are benighted, old-fashioned, out of date, and/or “traditional.”

This is what I mean when I describe DH as an ideological formation, more than it is a method or set of methods. The destructiveness in the ideological formation is what I oppose, not the use of tools per se, or even the actual work done by at least some in DH. The ideological formation, I have argued, is what distinguishes DH from the fields that preceded it (that is not to say that computational ideologies were not present in Humanities Computing—they certainly were—but they had failed to find the institutional purchase and power DH was seeking, which is why Humanities Computing needed to be transmuted into DH). Further, I have argued repeatedly that this destructiveness is an inherent feature of DH, perhaps even its most important constitutive feature: that is to say that the most common shared feature in DH work is its “incidental” destructiveness toward whatever it declares not to be part of itself.

There are deep and interesting ideological reasons why the apparent championing of digital tools should overlap with this overtly destructive attitude toward humanistic research, some of which I’ve just touched on in a recent post, “The Destructiveness of the Digital.” It has something to do with the destructiveness toward whatever is considered “non-digital” among digital partisans, which is part of why I have called DH the “cyberlibertarian humanities” (a claim that is just as routinely misrepresented by DH responders as is the rest of my critique).

I want to leave that aside, in favor of presenting just one unexpectedly clear and symptomatic public example of the destructiveness embedded in DH. In her interview in the LARB “Digital in the Humanities” series, senior DH scholar Laura Mandell approvingly quotes another senior DH scholar, Julia Flanders, saying: “We don’t want to save the humanities as they are traditionally constituted.”

We don’t want to save the humanities as they are traditionally constituted

That, to me, summarizes DH—or at least the part of DH that concerns me and others very greatly—in one compact sentence.

Now I’m sure that, as soon as I point it out, there will be a lot of backtracking and spin claiming that this sentence means something other than what it clearly seems to, even though practice shows that the plain reading is correct and that DHers frequently speak in disparaging and dismissive ways about the rest of literary studies. Yet when pressed (and this is part of why I see DH as resembling so many other rightist formations), rather than simply owning and admitting its disparagement of other approaches, DH starts to blame those who point it out and to portray itself as the victim.

In context, I don’t think there is any other reasonable way to read the sentence. What “the humanities as they are traditionally constituted” means here is “the humanities other than DH.”

(It seems worth noting that the characteristic double-mindedness in DH about what constitutes DH itself makes this even more problematic and more transparently a kind of power politics: the only kinds of humanities research that should be saved are not “the kind that uses digital tools,” since virtually all humanities research these days uses digital tools in many different ways, but instead, “whatever scholars who are identified with DH say is part of DH,” a fact which in certain moods even some DHers themselves acknowledge.)

Further, that quotation has been out there now for over a year, and nobody has, as far as I know, bothered to comment or push back on it, despite plenty of opportunities to do so; that fact in itself shows the insensitivity in DH to its own institutional politics.

For reference, here is the entire exchange in which Mandell’s statement appears:

Another concern that has come up deals with public intellectualism, which many scholars and journalists alike have described as being in decline (for example, Nicholas Kristof’s New York Times essay last year). What role, if any, do you think digital work plays? Could the digital humanities (or the digital in the humanities) be a much-needed bridge between the academy and the public, or is this perhaps expecting too much of a discipline?

I have a story to tell about this. I was at the digital humanities conference at Stanford one year and there was a luncheon at which Alan Liu spoke. His talk was a plea to have the digital humanities help save the humanities by broadcasting humanities work — in other words, making it public. It was a deeply moving talk. But to her credit, Julia Flanders stood up and said something along the lines of, “We don’t want to save the humanities as they are traditionally constituted.” And she is right. There are institutional problems with the humanities that need to be confronted and those same humanities have participated in criticizing the digital humanities. Digital humanists would be shooting themselves in the foot in trying to help the very humanities discipline that discredits us. In many ways Liu wasn’t addressing the correct audience, because he was speaking to those who critique DH and asking that they take that critical drive that is designed to make the world a better place and put it into forging a link with the public — making work publicly available. Habermas has said that the project of Enlightenment is unfinished until we take specialist discourses and bring them back to the public. This has traditionally been seen as a lesser thing to do in the humanities. For Habermas, it is seen as the finishing of an intellectual trajectory. This is a trajectory that we have not yet completed and it is something, I think, the digital humanities can offer.

The archness and self-contradictory nature of this passage are emblematic of a phenomenon we see more and more of in DH circles. Literally within the same paragraph where she declares that the rest of the humanities should go away, in a remarkable instance of what I like to call right reaction and what Michael Kimmel calls aggrieved entitlement, Mandell says that it is the rest of the humanities that is engaged in “discrediting” DH. One has to ask: what is the proper way for “non-DH” humanists to react to a very successful effort—in many places, literally the only thing administrators know about what is happening in English departments these days—that says that the humanities shouldn’t be saved? To simply stop practicing our discipline? And given that your project is predicated on ending the rest of the humanities, how could any response that disagrees with that not also be (wrongly) construed as “discrediting” your practice?

It’s also worth noting that in Mandell’s story, Alan Liu is the one making the request to support the humanities, and that Liu is one of the few English professors identified with and accepted by the DH community who has refused to give ground on the importance of non-DH literary studies. In other words, his request could and should have been met with sympathy and respect, if DH really did not contain a kernel of destructive animus toward the rest of the humanities. And as this microcosmic scene suggests, Liu’s efforts to get the DH community to support non-DH literary studies have seen very little uptake.

In fact, if we step back just a bit, the scene is bizarre: one senior scholar respected within DH says “please don’t kill the rest of the humanities,” and a few others say, “no, we want to kill them.” Of all the people in the world who might lead the charge to kill the rest of the humanities, how did we get to the place where it is people who are nominally literature scholars doing so? The answer to that is DH: not the use of tools and the building of archives—more power to them—but the destructive, “digitally”-grounded ideology that DH is built from and that it revels in. The one that says all other forms of knowledge have suddenly become outmoded and “traditional,” and that this one new way is now the exclusive way forward.

Late last year I wrote a post where I discussed the way that Immanuel Wallerstein analyzes the concept of “traditional” and its place in the global system of capital. This piece builds on that one, and I recommend reading the whole thing, but I’ll just quote one paragraph from Wallerstein that is especially germane to this point:

Historical capitalism has been, we know, Promethean in its aspirations. Although scientific and technological change has been a constant of human historical activity, it is only with historical capitalism that Prometheus, always there, has been ‘unbound,’ in David Landes’s phrase. The basic collective image we now have of this scientific culture of historical capitalism is that it was propounded by noble knights against the staunch resistance of the forces of ‘traditional,’ non-scientific culture. In the seventeenth century, it was Galileo against the Church; in the twentieth, the ‘modernizer’ against the mullah. At all points, it was said to have been ‘rationality’ versus ‘superstition,’ and ‘freedom’ versus ‘intellectual oppression.’ This was presumed to be parallel to (even identical with) the revolt in the arena of the political economy of the bourgeois entrepreneur against the aristocratic landlord. (Immanuel Wallerstein, Historical Capitalism, 75)

I doubt it will surprise anyone familiar with my way of thinking that I wrote this with an eye toward precisely the way the idea of “traditional” is used in DH: DH has always cast the rest of the humanities as “traditional,” in just the way that Wallerstein notes—and this despite the incredible variety of approaches (including the very approaches that ground DH) that “traditional” seems to indicate.

This alignment of the DH project against what it falsely projects as “traditional” academic practice is part of why I see it as closely aligned with neoliberalism, in a deep and fundamental way that can’t be ameliorated by ad-hoc patches applied here and there. Until DH confronts the way that it has from its inception been deeply imbricated in a cultural conversation according to which technology points toward the future and everything (supposedly) “non-technological” points to the past, it will be unable to come to terms with itself as the ideological formation I and many others see it as.

The fact that this can occur within a disciplinary milieu where the identification of ideological formations had until very recently been a major part of the stock in trade is not just ironic, but symptomatic of DH as a politics. One way of looking at the social scene is this: DH scholars, who have in general eschewed and even dismissed the project of interpretation, especially politicized interpretation, in favor of formalism and textual editing and “building,” turn to their colleagues who still do specialize in interpreting ideologies and tell them that, in this one instance (our own profession), we don’t know how to use the methods in which we specialize. Is that credible? Is it credible that the critics of DH, who typically are people who specialize in sniffing out ideologies, don’t understand how to do ideology critique in our own field, but DHers, who in general avoid ideology critique like the plague, can somehow do it better than we do? Who is attacking whose professionalism here? And what could be more destructive to literary studies than to say that literary scholars do not understand how to do their own work?

To end on a positive note: despite being frequently accused of wanting to “end” DH, whatever in the world that would mean, that is only true in a very limited sense: I want to “end” the practice within DH of calling the rest of the humanities “traditional” and “anti-technology” and “out of touch” and “the past.” I want to “end” the rejection of theory and politics and the weird idea that “close reading” is some kind of fetish, within the context of literary studies. I want to end the view that “building” is “doing something,” whereas “writing” is not, and even the view that “building” and “writing” are different kinds of things. I want to end the view that DH is “public” and the rest of literary studies is not. I want DHers to embrace the fact that they are part of the humanities. This might end “DH” per se as an ideological formation, but it would not end the careers of scholars who want to use digital tools in the service of humanities research, of whom I am very much one. One might think that would be asking virtually nothing at all—embrace and support the disciplines of which you are a part—but as the twinned quotation from Flanders and Mandell shows, especially given that it is offered specifically as a rebuke to exactly that request coming from “within” DH, it turns out to be asking a great deal.

Posted in cyberlibertarianism, definitions that matter, digital humanities, rhetoric of computation, theory

The Destructiveness of the Digital

I’ve argued for a long time, in different ways, that despite its overt claims to creativity, “building,” “democratization,” and so on, digital culture is at least partly also characterized by profoundly destructive impulses. While digital promoters love to focus on the great things that will come from some new version of something over its existing version, I can’t help focusing on what their promotion says—implicitly or explicitly—about the thing they claim to be replacing, typically at profit to themselves, whether in terms of political or personal power (broadly speaking) or financial gain.

Note that it is in no way a contradiction for destructiveness and creativity to exist at the same time, something I have repeatedly tried to explain without much success. In fact it would be odd for only one or the other to exist, and one does not negate the other, at least not as a rule. The fact that there is a lot of creativity in digital culture does not directly address the question of whether there is also destructiveness. Further, the continual insistence that creativity does negate the destructiveness shifts the terms of discussion, so that we cannot deal with the destructiveness as directly as we should.

I’m not going to go into these arguments in detail right now, but just want to present a particularly clear example of digital destructiveness I happened to hear recently. On April 11 on a regular segment called “Game On” of a BBC Radio 4 program called “Let’s Talk About Tech” (the episode is available for listening and download through early May), host Danny Wallace interviews Hilmar Veigar Pétursson, CEO of CCP, the publisher of the MMORPG EVE Online (“a space-based, persistent world massively multiplayer online role-playing game”), on the occasion of that game’s winning a BAFTA award in the “evolving games” category.

EVE Online

Screen cap from the 2013 “largest space battle EVE Online has ever seen,” from the subreddit /r/eve via an article by Ian Birnbaum at PC Gamer

In the final exchange in the interview (starting around minute 23), Wallace asks Pétursson to reflect more generally about the nature of games and gaming. I’ve transcribed the whole exchange below.

Wallace sets the stage by invoking the defensive aggrieved entitlement of the gamer, which is symptomatically portrayed in the voice of the scolding critic who essentially declares video games not to be art (with no interrogation of what “art” means exactly–just not “frivolous”), but Pétursson’s response goes well beyond what Wallace asks. Asked to articulate the value of EVE Online, Pétursson turns to attack (literally) all other forms of media, and in particular to disparage the entire project of reading. The claim on the surface has to be that all the kinds of philosophical and narrative engagement one experiences in books can be better experienced in video games than by reading the books and other texts (and experiencing the other media) out of which all world cultures emerge.

So we move from a largely fictional dismissal of the value of one medium—games—to an explicit and disparaging rejection of all other forms of media. Further, this disparagement rests on an unsurprising and completely unsophisticated account of what media consumption is really like—a wildly undertheorized presumption that looking at screens and using a pointing device constitutes “interaction” in a way that reading or listening to the radio or even watching screens without a pointing device at the ready does not. That whole frame is inaccurate: it suggests something massively untrue about the experience of reading (and even more of listening and talking) that no careful study of the subject would support, and also a conception of what happens when we play games that is deeply interested. After all, anyone who has ever participated in a raid in World of Warcraft knows that the feeling of suture and of interactivity that players have is, at best, profoundly weird: most of what is going on in the game and on the screen is absolutely not under the player’s “control,” and what is under “control” is a highly limited set of device clicks and gestures that certainly give or go along with the feeling of being “in the game,” but are in fact very different from actually playing a game with one’s body (here thinking of a game like basketball or baseball). Further, that fictional relationship—the immersive sense that one is in the game and participating with the other elements of it—is philosophically much harder to distinguish than one might expect from the suturing relationship the viewer or reader has to text and media of various sorts. The questions of why and how I identify with my avatar in a video game as over against why and how I identify with the main character or analytical perspective offered in a book, or a movie, and so on, are fascinating ones without easy answers. 

Of course, digital utopians long ago decided that digital media are “interactive” in a way other media aren’t, a notion that is itself built on a serious disparagement of anything non-digital (or anything digital utopians don’t like).

Pétursson tells us that the testimony we have from thousands of years and literally hundreds of millions of people regarding narrative and visual and linguistic media can be dismissed, while the “thousands” of people who play EVE Online provide evidence that this new medium proves the fruitlessness of all other forms of media. In other words, the testimony of EVE Online players is valid, but that of non-players is not. It may seem subtle, but this privileging of the testimony of those one values, and dismissal of those one doesn’t, is one critical root of the development of hate. (Some readers will know that Pétursson’s complaint echoes a famously vicious and totally inaccurate assessment of Tolstoy’s novels [and a fortiori all novels] by digital guru and venture-capital consultant Clay Shirky.)

One of my main concerns with the destructiveness of digital culture has been precisely its disparagement and dismissal of all forms of knowledge that the digerati deem “traditional” or “out of date” or “fruitless,” typically with very little exposure to those forms of knowledge. I am especially concerned with what this perspective teaches with regard to politics. Politics is very complicated terrain for all of us, even those of us who study it for a living. Understanding how various political forces operate and take advantage of popular energy and opinion is among the most urgent political tasks of our time. It is beyond doubt that the rise of authoritarian populism in our time is fueled in part by a studied agnotology, by the promotion of ignorance about politics that makes people particularly vulnerable to manipulation.

So what politics does EVE Online teach, “fruitfully” as against the “fruitless” pursuit of political knowledge from reading and other forms of media? As a non-player of the game I’m in no position to judge, but it’s notable that the game is known for a fairly destructive take on governance. Here’s a bit from Wikipedia’s section on “griefing” in EVE Online:

Due to the game’s focus on freedom, consequence, and autonomy, many behaviors that are considered griefing in most MMOs are allowed in EVE. This includes stealing from other players, extortion, and causing other players to be killed by large groups of NPCs.

I don’t know if there’s any connection between Pétursson’s destructive attitude toward non-game forms of media and this overt hostility toward the ethical principles of social behavior that many of us adhere to. I don’t know whether players of EVE Online share Pétursson’s hostility. But it’s hard not to wonder.

And of course that isn’t even really the point. The point is that this hostility to anything currently identified as not being part of the digital is visible all over the place in digital culture. It is far in excess of what celebration of cool new things requires. And it is completely unmotivated. Large-category new forms of media do not eliminate or obviate older ones: movies didn’t eliminate books, television didn’t actually eliminate radio, and so on. You don’t have to hate books and movies and TV to enjoy games. You don’t have to hate to be part of the “new.” Unfortunately, too many people apparently think otherwise.

Transcribed from the April 11 “Game On” segment of BBC Radio 4’s “Let’s Talk About Tech”:

Question (Danny Wallace, BBC)

The old brain, the old parts of the media, for instance, and social commentators, and people who are cultural commentators, will say all video games, they’re just video games. They’re just for kids, or miscreants living in their parents’ basements. That is.. that’s firmly disappearing now, that point of view, isn’t it. You’ve lasted long enough to outlive the people who said, why on earth are you making, spending all this time and all this effort making something as frivolous as video games.

Answer (Hilmar Veigar Pétursson, CEO of CCP, publisher of EVE Online)

Yeah, I mean, in some aspects, we’re making computer games. And many aspects of EVE are like that. But there are also aspects of EVE which is nothing like that, which are so fundamentally unique, you can’t really… you would have to scramble for analogies. It really is a virtual life, where people live out. They do work, they trade, they build social structures, they make friends, they succeed, they fail, they learn, they have lessons in leadership, trust. It’s an extraordinarily beneficial activity, I would argue. And that’s not just my own point. I have thousands, tens of thousands of people that just fundamentally agree with me. So it’s an element of truth, once you get enough people to agree with it. So I’ve never really looked at it like that. The fact that we’re classified as a computer game, I mean, doesn’t really bother me. It helps people understand what it is. It’s not like I have a very good classification for what we really are: something virtual worlds, virtual life, social economy, I mean there are many analogies you can bring to it. But yeah, we’ve never really thought of it as just being computer games.

I would then argue, I mean there’s a lot of other things in human endeavors which are frankly uninteresting. If you look at most media, it’s broadcast to consumer and there’s no participation. Why is reading a book considered a better activity than playing a game? At least in a game you’re participating. In a book you’re just wallowing in some other’s imagination. How is that a fruitful activity? It’s very equivalent to watching TV. I find reading books… I generally frown upon it. I would rather play a game. I learn more from computer games than books.

Posted in cyberlibertarianism, games, rhetoric of computation

Race, Technology, and the Word “Traditional” in the World-System

“Traditional” is one of the more interesting words to keep track of in contemporary discourse, particularly when it comes up in discussions of technology.

For the most part, it is used as a slur.

It is a word used to disparage an object or practice, to compare it to whatever one wants to posit as “new” or “innovative” or even “worthwhile” or “useful.”

It’s an implicit slur: after all, in a variety of contexts, “traditions” and “traditional” are words that point to good things, things we (apparently) value, things we don’t necessarily want to change. Though these days, more and more, especially in discussions of technology and the economy, it’s the slur meaning that predominates.

I’ve always noticed this and meant to write a brief note about it, since it seems to me the question of what is “traditional” and what isn’t is highly relative and mobile. Before I could get around to that, though, I ran across a surprisingly pointed discussion of this term in an unexpected source: the short 1983 book Historical Capitalism (London: Verso), by the Marxist world-systems theorist and sociologist Immanuel Wallerstein.

Wallerstein’s work is usually, rightly, seen as an effort to understand how capitalism works across the globe, with a particular focus on international flows of trade and the ways classes can be played off against each other among as well as within countries. His best-known work is the multivolume The Modern World-System. Wikipedia provides the following fairly accurate if quite general summary of some key parts of his work:

A lasting division of the world into core, semi-periphery, and periphery is an inherent feature of world-system theory. Other theories, partially drawn on by Wallerstein, leave out the semi-periphery and do not allow for a grayscale of development. Areas which have so far remained outside the reach of the world-system enter it at the stage of “periphery”. There is a fundamental and institutionally stabilized “division of labor” between core and periphery: while the core has a high level of technological development and manufactures complex products, the role of the periphery is to supply raw materials, agricultural products, and cheap labor for the expanding agents of the core. Economic exchange between core and periphery takes place on unequal terms: the periphery is forced to sell its products at low prices, but has to buy the core’s products at comparatively high prices. Once established, this unequal state tends to stabilize itself due to inherent, quasi-deterministic constraints. The statuses of core and periphery are not exclusive and fixed geographically, but are relative to each other. A zone defined as “semi-periphery” acts as a periphery to the core and as a core to the periphery. At the end of the 20th century, this zone would comprise Eastern Europe, China, Brazil, and Mexico. It is important to note that core and peripheral zones can co-exist in the same location.

Yet what is sometimes less understood is that Wallerstein is a theorist of race and its critical role in the establishment of capitalism, that much of his early work focused on Africa, and that he considers himself profoundly influenced by the anticolonial writer Frantz Fanon.

Wallerstein describes Historical Capitalism as an attempt “to see capitalism as a historical system, over the whole of its history and in concrete unique reality” (7). The book is made up of revisions of three lectures Wallerstein gave in 1982, along with a new conclusion. The first chapter, Wallerstein says, is largely devoted to economics; the second to politics; and the third, which I’ll be discussing here, to culture. Its title is “Truth as Opiate: Rationality and Rationalization.” Somewhat surprisingly, the word “traditional” occupies a central place in Wallerstein’s analysis.

Wallerstein, Historical Capitalism (Verso, 1983)

These, for example, are the third chapter’s first two paragraphs:

Historical capitalism has been, we know, Promethean in its aspirations. Although scientific and technological change has been a constant of human historical activity, it is only with historical capitalism that Prometheus, always there, has been ‘unbound,’ in David Landes’s phrase. The basic collective image we now have of this scientific culture of historical capitalism is that it was propounded by noble knights against the staunch resistance of the forces of ‘traditional,’ non-scientific culture. In the seventeenth century, it was Galileo against the Church; in the twentieth, the ‘modernizer’ against the mullah. At all points, it was said to have been ‘rationality’ versus ‘superstition,’ and ‘freedom’ versus ‘intellectual oppression.’ This was presumed to be parallel to (even identical with) the revolt in the arena of the political economy of the bourgeois entrepreneur against the aristocratic landlord.

This basic image of a worldwide cultural struggle has had a hidden premise, namely one about temporality. ‘Modernity’ was assumed to be temporally new, whereas ‘tradition’ was temporally old and prior to modernity; indeed, in some strong versions of the imagery, tradition was ahistorical and therefore virtually eternal. This premise was historically false and therefore fundamentally misleading. The multiple cultures, the multiple ‘traditions’ that have flourished within the time-space boundaries of historical capitalism, have been no more primordial than the multiple institutional frameworks. They are largely the creation of the modern world, part of its ideological scaffolding. Links of the various ‘traditions’ to groups and ideologies that predate historical capitalism have existed, of course, in the sense that they have often been constructed using some historical and intellectual materials already existent. Furthermore, the assertion of such transhistorical links has played an important role in the cohesiveness of groups in their political-economic struggles within historical capitalism. But, if we wish to understand the cultural forms these struggles take, we cannot afford to take ‘traditions’ at their face value, and in particular we cannot afford to assume that ‘traditions’ are in fact traditional. (75-6)

So for Wallerstein, the very act of naming a practice “traditional” is an important part of the cultural work of global capitalism, tied directly to the historical creation of what we today call “race.” The very allegation that some practices are “traditional” “has formed one of the most significant pillars of historical capitalism, institutional racism” (78); “racism was the mode by which various segments of the work-force within the same economic structure were constrained to relate to each other,” he goes on, “racism was the ideological justification for the hierarchization of the work-force and its unequal distribution of reward.”

Wallerstein uses the past tense in these formulations because he is discussing the historical formation of racial discrimination, especially when racial categorizations were explicit and legal; he is not suggesting that racism does not still exist. But because “in the past fifty to one hundred years, it has been under sharp attack” (80), a complementary ideology has moved to center stage, namely what Wallerstein calls “universalism.” Belief in universalism “has been the keystone of the ideological arch of historical capitalism” (81).

By universalism Wallerstein in part means “pressures at the level of culture” to create and enforce norms around a single model of culture and cultural progress, via a “complex of processes we sometimes label ‘westernization,’ or even more arrogantly ‘modernization’” (82) and which includes phenomena like “Christian proselytization; the imposition of European language; instruction in specific technologies and mores; changes in legal codes.”

The process of modernization, Wallerstein writes, “required the creation of a world bourgeois cultural framework that could be grafted onto ‘national’ variations. This was particularly important in terms of science and technology, but also in the realm of political ideas and the social sciences” (83). Thus the

concept of a neutral ‘universal’ culture to which the cadres of the world division of labor would be ‘assimilated’ (the passive voice being important here) hence came to serve as one of the pillars of the world-system as it historically evolved. The exaltation of progress, and later of ‘modernization,’ summarized this set of ideas, which served less as true norms of social action than as status-symbols of obeisance and of participation in the world’s upper strata. The break from the supposedly culturally-narrow religious bases of knowledge in favor of supposedly trans-cultural scientific bases of knowledge served as the self-justification of a particular pernicious form of cultural imperialism.

The universalism of scientific culture “lent itself to the concept known today as ‘meritocracy’” (84), a “framework within which individual mobility was possible without threatening hierarchical work-force allocation. On the contrary, meritocracy reinforced hierarchy” (85).  “The great emphasis on the rationality of scientific activity,” he writes, “was the mask of the irrationality of endless accumulation.”

While “universalism was offered to the world as a gift of the powerful to the weak,” “the gift itself harbored racism, for it gave the recipients two choices: accept the gift, thereby acknowledging that one was slow on the hierarchy of received wisdom; refuse the gift, thereby denying oneself weapons that could reverse the unequal real power situation.”

There is much more to Wallerstein’s compact and dense argument, including many important reflections on the profoundly ambivalent relationship of technological progress and cultural nationalism to socialism, and I recommend the book in its entirety. But I am primarily interested here in the consequences of Wallerstein’s work for understanding the deployment of the concept of “traditional” in contemporary technological discourse. In my opinion, “traditional” is a word, and a concept, that should be avoided in thoughtful work about technology, economics, and “progress,” as it is an almost-entirely ideological label, one that is even more than what I earlier called it, a “slur.” Indeed, it is not merely a label: it is an ideological lever, a tool used to organize the world so as to maximize power for the ones doing the labeling, and to disempower the lives and cultures of those to whom the label is applied, and to make them available for resource exploitation.

Work on this piece benefited greatly from conversations with Tressie McMillan Cottom and Audrey Watters.

Next: “traditional” in vivo

Posted in definitions that matter, digital humanities, rhetoric of computation, theory

The Politics of Bitcoin: Expanded Bibliography with Live Links

Production constraints and editorial guidelines required The Politics of Bitcoin, in both its print and electronic versions, to include only the base URLs of online materials referenced in the book, and even in the electronic version these aren’t live links. In addition, space constraints meant that some work valuable to me in composing the book had to be cut. What follows is a fuller bibliography of the works referenced in the book, along with those I would have liked to reference, complete with working URLs to online materials.

Politics of Bitcoin

Posted in bitcoin, cyberlibertarianism

Trump, Clinton, and the Electoral Politics of Bitcoin

My new book, The Politics of Bitcoin, is not directly about electoral politics, but rather the political and political-economic theories that inform the development of Bitcoin and its underlying blockchain software. My argument does not require that there be direct connections between promoting Bitcoin and supporting one candidate or party or another.

Rather, what concerns me about Bitcoin is how it contributes to the spread of deeply right-wing ideas in economics and political philosophy without those right-wing associations being made at all explicit. Call it “moving the Overton window to the right” (although I find the concept of the “Overton window” troubling, not least for its own origins on the political right), especially along some axes that may not even be altogether legible to many in the general public. Many people have heard of Bitcoin and the blockchain as technologies that promote “freedom” and “democratization,” and resist interference by “central authorities”; many fewer understand what those words mean in relation to Bitcoin and the blockchain, where they are used almost identically to the way extremists like Alex Jones and the John Birch Society use them.

Nevertheless, these foundational politics do at times intersect with ordinary electoral politics. Though this isn’t really what The Politics of Bitcoin is about, when people on social media saw the title they quickly presumed that it was, and some of their comments prompted me to reflect a bit on how the politics of Bitcoin and the blockchain are intersecting with the current US presidential election.

* * *

First, the GOP. A Bitcoin supporter responded to some positive comments about the book by others on Twitter by writing:

There are several interesting ways in which this comment strikes me as symptomatic. First, it tries to manage the narrative—defining the critique I’m offering, despite the fact that the tweet’s writer admits to not having read the book—by suggesting that the book claims something it does not: that right-wing ideologues like Trump directly promote Bitcoin, or vice versa. Second, it offers the very familiar story being promulgated by Google and others that we need to be very worried now about a superpowerful AI, which is itself a product of what thinkers like Dale Carrico and I consider an already profoundly conservative discourse, for reasons I won’t go into here.

The Trump comment is particularly interesting. There is certainly a fair amount of support for Trump among Bitcoin enthusiasts, though I’m not aware of any polling that would allow us to break that down into numbers. But it is pretty funny to be told that I shouldn’t be worried about Trump right now, because I am very worried, and I think anyone with a remote interest in politics and the—to put a point on it—fate of democracy itself should be worried about Trump. And to the degree that Bitcoin helps to spread the right-wing economic ideology that my book is really about, then I do think that there are connections between Bitcoin and Trump. Of course Bitcoin didn’t cause Trump—but the kinds of false, angry, other-targeting ideologies on which the Trump phenomenon depends can be readily found in all the kinds of online communities that create and promote the frightening range of right-wing political action we see everywhere today. We should be very worried about Trump, and we should be worried about how Bitcoin and other parts of online discourse feed the hate and studied ignorance that make so many people support him.

This site might be a parody but I don’t think it is.

It’s also fascinating that as we get down to the wire, Trump is sounding more and more like his ardent supporter Alex Jones, propagating the same falsehoods about “global financial powers” that we see in Bitcoin discourse (and, not unironically, being himself an incredibly wealthy person who made most of his money by cheating the system). In an October 13 speech in West Palm Beach, Florida, Trump stated that  “Hillary Clinton meets in secret with international banks to plot the destruction of U.S. sovereignty in order to enrich these global financial powers, her special interest friends, and her donors.”

* * *

So that’s the ostensibly mainstream right. What about the mainstream left? Here the story is even more interesting. Daniel Latorre, a Twitter friend and civic technologist, tweeted the following:


which links to an excellent piece Latorre wrote in 2015 called “Why Our Tech Talk Needs A Values Talk.”

Dan pointed me to video of the “Connectivity” session at the Clinton Global Initiative (CGI) 2016 conference (yes, the annual conference sponsored by the famous foundation), where around minute 32 two speakers appear talking about the blockchain. The first is Jamie Smith, Global Chief Communications Officer of the Bitfury Group, “your leading full service Blockchain technology company,” who gives a very brief introduction that is full of some very serious imploring of the audience, quite a few buzzwords that don’t really seem to go together, and graphics like this one:

blockchain transformation CGI

It is really an insult to the intelligence of the audience of a charitable organization to distribute venture capital promotional materials like these as if they mean something very concrete and beneficent—to say nothing of the fact that the putative top benefit of the blockchain (distributed control of a verifiable ledger that all users can examine) has not even been mooted for phones of any sort, let alone inexpensive phones, so that whatever Smith is advertising here is at best a derivative of blockchain technology.

Like so many Bitcoin and blockchain promoters (and, to be fair, digital utopians and salespeople everywhere) Smith, too, engages in some serious management of the narrative via rhetorical sleight-of-hand: “the missing piece of the internet,” “the blockchain is the most transformational technology since the internet,” “without going through a trusted emissary.” Words like these are deployed to mystify, or to mislead, or both, but not to explain.

Managing the narrative: Smith says, “I can see why you think that [Bitcoin isn’t promising] because the coverage has not been great”: that is, because the coverage has in part accurately focused on the most popular uses of Bitcoin in Dark Web markets for illegal products like the long-shuttered Silk Road, and on the almost shocking frequency with which individuals lose the money they put into Bitcoin—which would be shocking even if one of the major advertised benefits of Bitcoin wasn’t its supposed superior safety vs. other forms of payment.

It’s worth dwelling on one statement Smith makes: “the Bitcoin Blockchain has never been hacked.”

Really? Remember that her talk is focused on the blockchain, not Bitcoin per se. But are blockchains unhackable? Does the CGI audience notice that the vital word in that construction is not “blockchain” but “Bitcoin”? Other blockchains most certainly have been hacked—most famously TheDAO, the first “autonomous” “smart contract” blockchain, which was hacked no more than four months ago.

And of course, “hack” is used in a particular technical sense in the talk, since as I discuss in the book, putting money into Bitcoin is one of the riskiest things a person can do, both because of Bitcoin’s own wild volatility, and because of the ease with which Bitcoin exchanges can be and have been repeatedly hacked—by some estimates up to a third of all exchanges—with millions of dollars vanishing into thin air, or more often into the pockets of scam artists.

This is part of how political ideology gets promulgated—as opposed to actual political work getting done. Wildly contradictory sentiments are offered with outsize passion, imploring audiences to take action and support whatever scheme the speaker is suggesting, but not actually to research the question for themselves, not to ask whether what the speaker is promising actually makes sense.

* * *

Yet Smith is only the first speaker. At the end of her talk she introduces arguably the star of the blockchain panel, Peruvian economist Hernando de Soto.

Hernando de Soto.

Hernando de Soto, one of the world’s biggest blockchain promoters.

Hernando de Soto, one of the chief architects of the actual economic plans critics like Philip Mirowski call neoliberalism. And this is core, right-wing neoliberalism, meaning direct involvement with the most poisonous and influential figures and institutions of neoliberalism—the Mont Pelerin Society, Friedrich Hayek, Milton Friedman, and many others—not simply the varieties of “outer shell” neoliberalism that do not always even know their own name. This is far-right, world-dominating economics.

Hernando de Soto. Vocal opponent of the highest-profile left economist in the last decade, Thomas Piketty. Author of two books, The Other Path: The Economic Answer to Terrorism (1989), which argues the solution to the problems that created Peru’s Shining Path was found in entrepreneurship and deregulation, and The Mystery of Capital (2000), described as “an elaborate smokescreen to hide the uglier truth” that corporations and wealthy individuals “run [developing] countries for the maximum extractive benefit of the west” which de Soto’s “solutions” may exacerbate much more than ameliorate.

Hernando de Soto. Who talks glowingly about “reglobalizing the world.” Who lumps together ISIS and progressive anti-globalization protestors (though he is careful not to say they are the same—after he lumps them together in the first place). Who “spins the Arab Spring not as a populist opposition to dictators (most of whom are backed by the lynchpin of capitalism, the United States), but a scrappy revolt of entrepreneurs against state interference in commerce.” Who received the 2004 Milton Friedman Prize for Advancing Liberty. Whose work in Peru was “the first and most successful outcome” (Mitchell 2009, 396) of the work of the Atlas Economic Research Foundation, not just ideologically but historically directly connected to Hayek, the Cato Institute, and the Mont Pelerin Society.

Being promoted at the charitable foundation of the Democratic candidate for President.

Under the name of blockchain.

Without anybody standing up and saying, “What is the ‘Friedrich Hayek of Latin America’ doing speaking for the Democratic candidate for president? And why is the Democratic party promoting it, without even noting who this person is and where his ideas come from?”

The product de Soto says he is making is one that uses the blockchain—the “public blockchain,” he and Bitfury call it, although the only current candidate for that is Bitcoin, and he does not explain whether or how the service he describes could run on the Bitcoin blockchain, or who would host the “public blockchain” he talks about. The product is one that will help to fulfill de Soto’s lifelong plan to record property rights for the poorest people in the world. (See “Hillary Clinton and the Blockchain” for what reads an awful lot like a sales presentation of this and other blockchain projects the Clinton campaign has under consideration.)

That sounds noble, unless you are familiar with political science and political economy, in which fields the focus on property rights is precisely the hallmark of rightist politics, going all the way back to the “classical liberalism” associated with Locke (though that term is now often used to describe what is essentially right libertarianism, and the relationship between the two doctrines is profoundly fraught). Then it sounds like another version of the neoliberal, neo-colonial extractive development plans that have, in the opinion of many anti-globalization activists, been the cause of significant destruction to lives and property the world over, and contributed significantly to the poverty they claim to be alleviating (Gravois 2005, Johnson 2016, Mitchell 2009).

Here’s a bit on de Soto’s work from a piece by journalists Mark Ames and Yasha Levine (2013):

De Soto’s pitch essentially comes down to this: Give the poor masses a legal “stake” in whatever meager property they live in, and that will “unleash” their inner entrepreneurial spirit and all the national “hidden capital” lying dormant beneath their shanty floors. De Soto claimed that if the poor living in Lima’s vast shantytowns were given legal title ownership over their shacks, they could then use that legal title as collateral to take out microfinance loans, which would then be used to launch their micro-entrepreneurial careers. Newly-created property holders would also have a “stake” in the ruling political and economic system. It’s the sort of cant that makes perfect sense to the Davos set (where de Soto is a star) but that has absolutely zero relevance to problems of entrenched poverty around the world.

To be clear, de Soto could speak, and has spoken, to audiences from the center-left—the “left neoliberals,” the Tony Blairs and Bill and Hillary Clintons of the world—for a long time. Bill Clinton himself has called de Soto “the world’s greatest living economist,” and yet he is also the recipient of equally fulsome praise from figures like George H. W. Bush and Ronald Reagan. I don’t argue that Bitcoin is unique, or uniquely destructive; on the contrary, part of the point is how easily and how transparently it fits into the shift toward the right we see in so many places today.

Still, when you listen to de Soto’s speech, it’s remarkable to hear him say things like “Western imperialism did a good job,” or ask whether “blockchain can save globalization”—and to see that nobody in the audience raises objections to this sort of thing. You expect that on the right, not the left.

But, welcome to blockchain.

So yes, the politics of blockchain should make you worried about Trump, and the people who support Trump, and the rightward shift of all electoral politics today.

Works Cited


Posted in bitcoin, cyberlibertarianism, rhetoric of computation | 17 Responses

“Neoliberalism” Has Two Meanings

The word “neoliberalism” comes up frequently in discussions on and of digital media and politics. Use of the term is frequently derided by actors across the political spectrum, especially but not only by those at whom the term has been directed. (Nobody wants to be called a neoliberal and everyone always denies it, much as everyone denies being a racist or a misogynist: it is today an analytical term applied by those who disagree with the politics it names, although it has been used for self-identification in the past.) Sometimes the derision indicates genuine disagreement, but even more frequently it is part of an outright denial that there is any such thing as “neoliberalism,” or an insistence that the meaning of the term is so fuzzy as to make its application pointless.

There are many causes for this, but one is fairly straightforward to identify and address: neoliberalism has two meanings. Of course it has many more than two meanings, but it has two important, current, distinct, somewhat related meanings, and they get invoked in close enough proximity to each other so as to sometimes cause serious confusion. (The correct title for this post should really be “‘Neoliberalism’ Has (at Least) Two Meanings,” but the simpler version sounds better.) The existence of these two meanings may even explain some of the denials that the term means anything at all.

Meaning 1: “Neoliberal” as a modifier of “liberal” in the largely recent political sense of US/UK party politics, where the opposite is “conservative.” In this sense, the term is typically applied to people even more than political programs or dogma. Examples of “neoliberals” in this sense: Tony Blair, Bill Clinton, (arguably) Barack Obama, (arguably) Hillary Clinton.

This version of “neoliberal” is meant to identify a tendency among the political left to accommodate policies, especially economic policies, associated with the right, while publicly proclaiming identification with the left. Sometimes this is called “left neoliberalism.”

The best recent piece I know on this version of neoliberalism is Corey Robin’s “The First Neoliberals,” Jacobin (April 28, 2016), which includes pointers to the (brief) time when the term was introduced by those who described themselves that way:

[Neoliberalism is] the name that a small group of journalists, intellectuals, and politicians on the Left gave to themselves in the late 1970s in order to register their distance from the traditional liberalism of the New Deal and the Great Society.

The original neoliberals included, among others, Michael Kinsley, Charles Peters, James Fallows, Nicholas Lemann, Bill Bradley, Bruce Babbitt, Gary Hart, and Paul Tsongas. Sometimes called “Atari Democrats,” these were the men — and they were almost all men — who helped to remake American liberalism into neoliberalism, culminating in the election of Bill Clinton in 1992.

These were the men who made Jonathan Chait what he is today. Chait, after all, would recoil in horror at the policies and programs of mid-century liberals like Walter Reuther or John Kenneth Galbraith or even Arthur Schlesinger, who claimed that “class conflict is essential if freedom is to be preserved, because it is the only barrier against class domination.” We know this because he so resolutely opposes the more tepid versions of that liberalism that we see in the Sanders campaign.

It’s precisely the distance between that lost world of twentieth century American labor-liberalism and contemporary liberals like Chait that the phrase “neoliberalism” is meant, in part, to register.

We can see that distance first declared, and declared most clearly, in Charles Peters’s famous “A Neoliberal’s Manifesto,” which Tim Barker reminded me of last night. Peters was the founder and editor of the Washington Monthly, and in many ways the eminence grise of the neoliberal movement.

It’s important to say that, while this usage of the term may well be the one that is frequently applied in social media, and it’s certainly the one that gets mentioned most often in the context of electoral politics (see, for example, Nina Illingworth’s hilarious “Neoliberal Wall of Shame”), it isn’t really the one that scholars tend to use.

This usage came up recently in the controversy surrounding the #WeAreTheLeft publicity campaign, in which critics from the left, with whom I generally agree in this regard, criticized the organizers of that campaign as neoliberals: see Jeff Kunzler, “Dear #WeAreTheLeft, You Are Not the Left: The Rot of Liberal White Supremacy,” Medium (July 13, 2016), and Meghan Murphy, “#WeAreTheLeft: The Day Identity Politics Killed Identity Politics,” Feminist Current (July 14, 2016). Not surprisingly, this critique was met with the characteristic denial that the word means anything.

Meaning 2: “Neoliberal” as a modifier of “liberal” in the economic sense of the word “liberal,” as used for example in “liberal trade policies.” Also understood as a modifier of the liberalism associated with philosophers like Locke and Mill, which is itself frequently taken to be largely economic in nature. (The source I like best on the relationship between economic liberalism and rightist political programs is Ishay Landa’s The Apprentice’s Sorcerer: Liberal Tradition and Fascism, Studies in Critical Social Science, 2012.)

This is a movement of the political right, not of the left; its opposite would be something like “protectionism” or “planned economies” or even “socialism,” although in caricatured senses of those terms.  The lineage here begins with Hayek and von Mises, through the Mont Pelerin Society and Chicago School economics. Philip Mirowski is the go-to theorist of this movement. Examples: Hayek, Mises, self-identified right-wing “libertarians” like the Koch brothers, Murray Rothbard, Lew Rockwell, Ron and Rand Paul, and also hard-right politicians like Reagan and Thatcher, Scott Walker and Ted Cruz. These figures are better understood not with neoliberal as a term of personal identification (that is, “Ted Cruz is a neoliberal” doesn’t really mean much), but rather as advocating neoliberal policies or providing foundational theory for them. In contrast to the first meaning, this is sometimes referred to as “right neoliberalism.”

This is what scholars like David Harvey, Wendy Brown, Aihwa Ong, Will Davies, and Philip Mirowski mean when they talk about neoliberalism, even if the details of their usages of the term differ slightly. It’s the usage most often employed by scholars across the board. It’s the meaning the Wikipedia entry on neoliberalism currently invokes, barely mentioning the other.

When Daniel Allington, Sarah Brouillette, and I wrote a piece called “Neoliberal Tools (and Archives): A Political History of the Digital Humanities” in the Los Angeles Review of Books in May 2016, it is this meaning we had in mind.

Neoliberalism in this sense is often understood, not entirely inaccurately, as free-market fundamentalism. But as Mirowski in particular explains it (Davies is also very good on this), neoliberalism has just as much to do with taking the reins of state power so as to favor commercial interests while publicly disparaging the idea of governmental power (or at least of democratic control of state power). Although this is most clearly explained in his book Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown (Verso, 2013), the summary he provides in “The Thirteen Commandments of Neoliberalism,” The Utopian (June 19, 2013), explains a lot:

Although many secondhand purveyors of ideas on the right might wish to crow that “market freedom” promotes their own brand of religious righteousness, or maybe even the converse, it nonetheless debases comprehension to conflate the two by disparaging both as “fundamentalism”—a sneer unfortunately becoming commonplace on the left. It seems very neat and tidy to assert that neoliberals operate in a modus operandi on a par with religious fundamentalists: just slam The Road to Serfdom (or if you are really Low-to-No Church, Atlas Shrugged) on the table along with the King James Bible, and then profess to have unmediated personal access to the original true meaning of the only (two) book(s) you’ll ever need to read in your lifetime. Counterpoising morally confused evangelicals with the reality-based community may seem tempting to some; but it dulls serious thought. It may sometimes feel that a certain market-inflected personalized version of Salvation has become more prevalent in Western societies, but that turns out to be very far removed from the actual content of the neoliberal program.

Neoliberalism does not impart any dose of Old Time Religion. Not only is there no ur-text of neoliberalism; the neoliberals have not themselves opted to retreat into obscurantism, however much it may seem that some of their fellow travelers may have done so. You won’t often catch them wondering, “What Would Hayek Do?” Instead they developed an intricately linked set of overlapping propositions over time — from Ludwig Erhard’s “social market economy” to Herbert Giersch’s cosmopolitan individualism, from Milton Friedman’s “monetarism” to the rational-expectations hypothesis, from Hayek’s “spontaneous order” to James Buchanan’s constitutional order, from Gary Becker’s “human capital” to Steven Levitt’s “freakonomics,” from the Heartland Institute’s climate denialism to the American Enterprise Institute’s geo-engineering project, and, most appositely, from Hayek’s “socialist calculation controversy” to Chicago’s efficient-markets hypothesis. Along the way they have lightly sloughed off many prior classical liberal doctrines — for instance, opposition to corporate monopoly power as politically debilitating, or skepticism over strong intellectual property, or disparaging finance as an intrinsic source of macroeconomic disturbance — without coming clean on their reversals.

George Monbiot’s “Neoliberalism—The Ideology at the Root of All Our Problems,” The Guardian (April 15, 2016), offers an excellent if slightly less scholarly primer to the history of the term and the best-known instances of neoliberal policies, though he doesn’t include the crucial Mirowski/Davies insight about neoliberalism’s capture of state power:

As Naomi Klein documents in The Shock Doctrine, neoliberal theorists advocated the use of crises to impose unpopular policies while people were distracted: for example, in the aftermath of Pinochet’s coup, the Iraq war and Hurricane Katrina, which Friedman described as “an opportunity to radically reform the educational system” in New Orleans.

Where neoliberal policies cannot be imposed domestically, they are imposed internationally, through trade treaties incorporating “investor-state dispute settlement”: offshore tribunals in which corporations can press for the removal of social and environmental protections. When parliaments have voted to restrict sales of cigarettes, protect water supplies from mining companies, freeze energy bills or prevent pharmaceutical firms from ripping off the state, corporations have sued, often successfully. Democracy is reduced to theatre.

Monbiot also points out that this usage too comes from the promoters of the doctrine themselves: “In 1951, Friedman was happy to describe himself as a neoliberal. But soon after that, the term began to disappear. Stranger still, even as the ideology became crisper and the movement more coherent, the lost name was not replaced by any common alternative.” Mirowski and his colleagues explain this history in much more detail.

It’s important to note that, as Monbiot suggests, but as Manuela Cadelli, President of the Magistrates’ Union of Belgium, says outright, “Neoliberalism Is a Form of Fascism” (French original). If one sees the corporate-State nexus as a critical component of Fascism as a political-economic system, it’s not at all hard to see the connection (Landa’s book is the key source on this).



Is there a relationship between the two meanings? Arguably, the most obvious one is that the Mont Pelerin Society’s long-term plan to turn the entire country (and frankly the entire world) toward a rightist solidifying of corporate power can’t help but entail the capitulation of the left to those goals. We certainly read and hear much less of the Clintons and Blair bashing democratic governance as an ideal, even though their actions have tended toward the Mont Pelerin program. No doubt these are prongs of the same movement at some level, but they have very different profiles and effects in the world at large.

In a recent piece in Counterpunch, “The Time is Now: To Defeat Both Trump and Clintonian Neoliberalism” (July 19, 2016), Mark Lewis Taylor writes: “Trumpian authoritarianism and Clintonian neoliberalism are actually co-partners in a joint system of rule. Trump’s authoritarianism is often a hidden bitter fruit of Clintonian neoliberalism. Social movements for democracy must fight them both together.”

It is interesting to note that the term “neoconservative” as it is typically invoked (heavy reliance on military power to pursue what are largely economic objectives) is not properly speaking in opposition to either meaning of “neoliberal”; figures like GW Bush and Cheney easily fall under the second definition of neoliberal as well as the typical definition of neoconservative. Tony Blair looks a lot like a neoliberal in the first sense and also a neocon in the same Bush-Cheney sense, although perhaps slightly less self-starting (maybe).

Nothing I’m saying here is new. Most—though unfortunately not all—academic studies of neoliberalism note this issue, often using the right/left terminology. But critiques of the use of the word often appear to blur the two meanings, using the fact that some figures fall into one group or the other as a means of disqualifying use of the term altogether. And as Mirowski among others says, the argument that “neoliberalism does not exist” appears to do important work in solidifying the Mont Pelerin program.

When I write, I almost always intend the second meaning, but I recognize that I haven’t always been as clear about that as I might have been, even if I’ve been quoting Mirowski in the process. I plan to try to distinguish my uses of these meanings in the future and I can’t help thinking it would be useful if more people did.

Posted in cyberlibertarianism, theory | 2 Responses

Code Is Not Speech

Brief version

Advocates understand the idea that “code is speech” to create an impenetrable legal shield around anything built of programming code. When they do this they misunderstand, or misrepresent, free speech law and rights law in general (which rarely create such impenetrable shields), the principles that underlie that law, and the ways those principles should and might apply to code. The idea that government cannot regulate things because they are made of code cannot be right. That principle not only lacks support in most theories of freedom of speech, but is actively rebutted in the very case law that advocates claim to be marshaling in favor of their position. Further, in promoting this position, advocates misrepresent—in a manner it is hard not to see as willful—the very nature of the programming code they care so much about.

In an excellent piece in MIT Technology Review, law professor Neil Richards goes to the foundations of this question, and wisely calls the view that “code is speech” a “fallacy,” a “fantasy,” a “mistake,” and “wrong.” As a general take on the question I think this is more correct than not, although there are details that need to be gone into at some length. Code can and does have speech-like aspects. But in general, code is much closer to action than it is to speech. The demand that governments not regulate code becomes a demand that governments not regulate action—that is, that governments not regulate at all. That would be less troubling if there were not widespread, repeated, and effective demands for that proposition by the world’s most powerful moneyed interests. Further, the major target of government regulation—corporations—are also the major users of code in the world today. Those corporations, including Apple, whose filings in the #AppleVsFBI case are the most recent instance prompting these discussions, have a deep vested interest in blunting the ability of governments to regulate.

While its relation to speech may certainly be an important part of many judicial and legislative actions regarding code, there is no general principle equating code and speech that can or should be relied on to structure those decisions. The cyberlibertarian understanding of “code is speech” contributes to a profoundly conservative assault on the rights of citizens, by depriving the state of the power to regulate and legislate against the corporations that exist only at the state’s pleasure in the first place. This is why “code is speech” has been so powerfully advocated for decades among crypto-anarchists and cypherpunks. At least these groups are, for the most part, explicit about their desire to shrink governmental power and expand the power of capital. Today the view that “code is speech” is far more widespread than the explicit crypto-anarchist doctrine, but it is no less noxious. Yet civil rights were not created for corporations. They are for individual citizens. They are supposed to protect us against abuses of power, not license them. The fact that Apple is today trying to sell products that openly display their rebuke to government oversight should frighten anyone to the left of Murray Rothbard. That today’s “privacy activists” and “free speech advocates” promote Apple’s actions as a realization of civil rights is an incredibly clear sign of the power of cyberlibertarianism to turn well-understood principles and rights against themselves, claiming to stand for the “little guy” while in fact doing the opposite.

Full version

“Code Is Speech” is one of many articles of faith that make up the cypherpunk and/or crypto-anarchist creed. The acceptance and repetition of those articles of faith among so many, even those who claim not to be on board with the overall cypherpunk ethos, is one way of cashing out the notion of cyberlibertarianism. As with the other cypherpunk articles of faith, a profoundly ambiguous slogan is taken to be a black-and-white statement of principle with a simple and clear meaning. The problem is that it is no such thing.

That simple meaning, as I infer it, would go something like this:

  • The First Amendment to the US Constitution absolutely prohibits the US Government from regulating anything that can be said to be made of code: “Congress shall make no law.” Code is inherently the same kind of thing as political speech, and so the government may not legislate against or even regulate it. In particular, governments cannot legislate against the running of code.

This is how many advocates appear to understand the principle (e.g., the Electronic Frontier Foundation’s Executive Director in Time, EFF on its own blog, and a prominent “digital rights” activist/computer scientist). When I refer to “code is speech” in what follows, this is what I mean. By writing “code is not speech” I don’t mean to imply, as I make clear below, that code never has any speech-like features or should never be seen in that context by courts; I mean only that the above proposition is far more wrong than it is right. It is also deeply misleading and ultimately very destructive to democracy, and that should be no surprise: those who have been pushing this perspective the longest, and who arguably developed it, make no secret of their contempt for democratic governance.

The most recent invocation of “code is speech,” and a particularly telling one, is found in Apple’s February 25 motion to vacate the court order in the #AppleVsFBI case:

Under well-settled law, computer code is treated as speech within the meaning of the First Amendment. See, e.g., Universal City Studios, Inc. v. Corley, 273 F.3d 429, 449 (2d Cir. 2001); Junger v. Daley, 209 F.3d 481, 485 (6th Cir. 2000); 321 Studios v. Metro Goldwyn Mayer Studios, Inc., 307 F. Supp. 2d 1085, 1099–1100 (N.D. Cal. 2004); United States v. Elcom Ltd., 203 F. Supp. 2d 1111, 1126 (N.D. Cal. 2002); Bernstein v. Dep’t of State, 922 F. Supp. 1426, 1436 (N.D. Cal. 1996).

The Supreme Court has made clear that where, as here, the government seeks to compel speech, such action triggers First Amendment protections. (section B1; page 32)

The problems with this statement are surprisingly numerous given how brief it is. The case law is by no means “settled,” for any number of reasons; to the degree that these cases even address specific issues under the “code is speech” penumbra, those issues are not the ones Apple raises; finally, courts have absolutely not made clear what Apple alleges they have, even in the narrow sense Apple claims—to the contrary, they have tended to hold the opposite position. In its filing Apple takes the general principle that “code is speech,” applies it in a new and legally unprecedented fashion, and then Apple and its supporters mock those who find any problems with the argument. This is typical of the imprecise and black-and-white way that digital enthusiasts think about “code is speech” (and other rights questions, for that matter). But it is both misleading and destructive.

I’ll explain why in four parts: 1) the law is not settled in any sense, but represents a patchwork of cases heard in various courts taking up different aspects of a complex issue, several of them remarkably inconclusive, and none of them taking head-on the issue described above as the main understanding of “code is speech”; 2) government can and does, in fact, regulate speech; 3) in general, it is clear that across the board, code is not speech in the same sense that ordinary speech (or even expressive works of art and other media) is speech, because the primary purpose of code is to take action, and action, speaking generally, is exactly what government can and should regulate; 4) the specific argument Apple makes is entirely novel, demonstrating the deep ambiguity of the phrase “code is speech,” and rests on much shakier legal ground than some of the other claims under that heading. This overreaching reinterpretation is typical of the ways digital evangelists try to turn legal principles against themselves to their own advantage. Finally, I’ll briefly explain why “code is speech” diminishes civil rights rather than strengthening them.

i. The Law Is Not Settled

First, and simplest: despite what Apple, EFF, and other advocates say, the law about the relationship of code and speech is not settled. It is not true that the cases decided so far add up to a clear articulation of principle, because they approach small parts of the problem from disparate angles, and the most central elements of that principle have never been addressed head-on. The issue raises fundamental questions about the nature of the First Amendment in its application to a new form of technology. Such questions can only approach being “settled” if and when the Supreme Court rules on a case, or at least leaves a lower court ruling standing by denying certiorari on a petition for appeal. That has never happened in any of these cases. Only two cases that address aspects of this question at all have reached the Supreme Court: Brown and Sorrell. Neither takes the question head-on, and for good reason; Apple doesn’t even include them in its litany of cases. (See Appendix 1 for summaries of what each of the cases says.)

The lower court cases, some of which get closer to the general question of code and speech, cannot possibly be said to have settled the law. For example, Bernstein v DOJ—the case that even EFF claims “established” the view that “code is speech,” and arguably the one that took up the general question of code and its relation to speech most directly—was vacated during the appeals process when the government decided not to enforce the relevant regulations due to changing facts: it is no longer considered valid precedent at all. Further, Apple refers to a lower court ruling in Bernstein v Dept of State in 1996 (as does EFF), but the later appellate rulings do not fully support the reasoning the lower court offers, and raise serious questions about the core claims advocates extract from the cases (see Appendix 2). Finally, the cases all address different aspects of the relationship of code to speech, and several of them, to the extent they try to extract a general principle from the issue, actually argue against what “code is speech” advocates want the cases to say. A more accurate assessment of the legal landscape in the US would be: no court has ever issued a general opinion that could lead to the principle that “code is speech.” In Appendix 1 I step through each of the cases Apple cites, as well as the other relevant cases, and show in detail that they do not touch the core question, and that they do not settle many of the open legal questions, let alone the core question of the relationship of code to speech.

The relationship of code and speech needs to be examined thoroughly, in detail, with experts on all aspects of that relationship weighing in—not exclusively tech industry lawyers, engineers, corporate lobbyists and advocates with a vested and one-sided interest in magnifying their own power and the power of those they represent. The blanket belief that “code is speech” leads to rank overextensions of rights talk, like the view that “data is speech” (Bambauer 2014), or that Google’s search engine results deserve First Amendment protection (Volokh and Falk 2012, ably dissected in Grimmelmann 2014 and Pasquale 2012a, 2012b); it also fuels the truly disturbing First Amendment shield Google has tried to create around the EU “right to be forgotten” ruling. It is possible and desirable to think carefully about what the principles of freedom of the press and freedom of speech are supposed to mean and how those mesh with new technologies; Richards (2016) and Tutt (2012) give good overviews of some of the issues there, and the great First Amendment scholar Jack Balkin has been thinking about these questions for several decades (see e.g. Balkin 2004). But there is no “settled law” whatsoever about what that relationship should be.

ii. Government Can and Does Regulate Speech

Even if code were speech across the board—which it is not—it simply is not the case that the First Amendment in US law (or, for that matter, freedom-of-speech law elsewhere) means that “if it’s speech, the government can’t pass laws about it.” Freedom of speech is usually taken as a prohibition on censorship or “prior restraint” of bona fide speech, of which the core exemplar is explicit political speech. Richards is particularly good on this topic. Yet what is so often overlooked in these discussions is that the First Amendment does not create a blanket prohibition on censorship; rather, actually settled case law has taken it for over half a century (including the period in which most of the significant First Amendment case law was developed) to mean that, in various contexts, different tests must be applied to determine whether or not a given law violates the First Amendment. These are typically framed in terms of levels of scrutiny, where scrutiny refers to the burden the government must meet to overcome First Amendment protections. Scrutiny tests are used not just for the First Amendment, but in many circumstances where fundamental rights are implicated.

For example: it has long been settled that “content-neutral” restrictions on speech—that is, laws that restrict any speech, regardless of content, in certain venues or at certain times—face a lower level of scrutiny than would censorship of political editorials. In general, for the government to censor a political editorial, it must meet the test of “strict scrutiny”: only if the government can demonstrate a “compelling government interest” in censoring the content (and meets a few other criteria) may the restriction be found constitutional. In general, for the government to issue content-neutral speech restrictions—for example, when a local government sets noise limits for certain periods of time, or declares certain parts of the city off-limits for protest—the standard is much lower, “intermediate scrutiny.” These “scrutiny” tests are all over First Amendment law and other rights law; they have been the standard way of mediating between rights interests and government power since at least the 1960s, in part due to considerations arising from the Equal Protection Clause of the 14th Amendment (for a more detailed account see Siegel 2006). Simply saying that something is covered by the First Amendment does not mean that government is unable to regulate it. For kinds of speech that are less central, lower tests apply—“intermediate scrutiny,” and in the least central instances, the “rational basis test.” I’ve added more information about scrutiny tests, including the fact that they are a routine part of legal education taught in introductory Constitutional Law classes (Appendix 5).

One doesn’t have to look far to see the court system making this abundantly clear, even in the very cases Apple and others cite as if they supported their position. This topic is addressed with particular force in the lower court ruling in Universal Studios v. Corley, in a part of the decision that was not overturned by subsequent rulings of the Appeals Court:

All modes of expression are covered by the First Amendment in the sense that the constitutionality of their “regulation must be determined by reference to First Amendment doctrine and analysis.” Regulation of different categories of expression, however, is subject to varying levels of judicial scrutiny. Thus, to say that a particular form of expression is “protected” by the First Amendment means that the constitutionality of any regulation of it must be measured by reference to the First Amendment. In some circumstances, however, the phrase connotes also that the standard for measurement is the most exacting level available. (Universal Studios v Reimerdes at IIIa [14])

In both the Universal Studios v Reimerdes and Universal Studios v. Corley decisions the judges go to great lengths to indicate the levels of scrutiny that various kinds of speech regulation require. They repeatedly reject the opinion that simply labeling something “speech” means government must keep its hands off. Further, it’s clear, as Neil Richards says, that the general trend of these tests is to put political speech by individuals at the core, demanding the highest level of scrutiny to justify legislation, with other forms of speech seen in relation to it.

Apple’s filing in #AppleVsFBI overlooks this fact entirely—Apple writes as if the fact that “code is speech” simply blocks all action by the government. It doesn’t. Even if a court were to apply the strict scrutiny test to a case like this one, it’s entirely plausible that compelling Apple to “speak” by writing some computer code, under a legal warrant, in the service of the investigation of a crime in which many people were killed and many more were injured, would properly be viewed as serving a “compelling government interest” through a “narrowly tailored” measure. Although it is fashionable to dismiss everything the government says in this case and others like it, it’s notable that the Department of Justice makes exactly this point in its own filing (Appendix 4).

iii. In General, Code Is More Properly Viewed as Action than as Speech

This is truly the heart of the matter. Advocates love to gloss over the fact that the cases mentioned here look at one or another aspect of code, but actively reject the wholesale equation of code and speech. The reasons for this are obvious. Code clearly does have some speech-like qualities. In certain limited contexts, code can be used to express ideas between people, whether through the archetypal example of “Perl poetry” or the more prosaic argument that publication of Daniel Bernstein’s Snuffle code was meant to convey the idea of his encryption algorithm to other programmers. This is what advocates love to harp on, and some at times talk as if the fact that code can have expressive features itself triggers full First Amendment protections for any and all code. This is, simply, false.

The digital revolution is characterized by fallacious arguments that take the form “completely different and exactly the same”: a phenomenon is exactly the same as some existing phenomenon X, so critical objections to it are invalid; the phenomenon is completely different from X, which is why consumers would want it and developers will build and sell it. This is particularly true with regard to code. Code is one of the most explosively new phenomena society has ever encountered, particularly in the past 50 years. It has literally transformed enormous parts of the world. It is deeply connected to existing phenomena (especially formal logic and mathematics), but it is different from all of them. Code is why computers exist. Code is one of the main reasons that computers are remarkable, one of the main reasons we have a “digital revolution.” That is why it is incredibly disingenuous and self-serving that the most vigorous proponents of the transformative power of the digital—and in this sense of code—are the very same people telling us that code is so much like “language” (no doubt, because it is largely made of language and language-like symbols) that the law must treat it as if it were language.

This is most easily seen when we think about the main use to which code is put. The reason we have and pay so much attention to code is because it is executed. Execution is not primarily a form of communication, not even between humans and machines. Execution is the running of instructions: it is the direct carrying out of actions. It is adding together two numbers, or multiplying ten of them, or looping back to perform an operation again, or incrementing a value. This is what code does. Lots of code does not even look like language at all—it looks, if anything, like giant arrays of numbers that mean very little to anyone but the most highly-trained programmer. All of it executes, or at least can be executed. That is what it is for. That is what it does. That’s what makes it different from lots of other things, like most language and expressive media, all of whose primary function is to convey thoughts and ideas and feelings between persons.
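The distinction can be made concrete with a small illustration of my own (not drawn from any of the cases): the same logic exists both as human-readable source text and as the opaque numeric form in which it actually executes.

```python
# A trivial routine, first as readable source text, then inspected in the
# numeric bytecode form the interpreter actually executes.

def total(values):
    """Add up a list of numbers."""
    result = 0
    for v in values:
        result += v  # "incrementing a value," exactly as described above
    return result

# In execution, the function is pure action: it computes.
print(total([1, 2, 3, 4]))  # → 10

# The executable form is a "giant array of numbers": raw bytecode that
# means very little to anyone but a trained reader.
print(list(total.__code__.co_code))
```

Whatever expressive value the source text has for human readers, what is at stake in most “code is speech” disputes is the second thing: the numbers, in execution.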

Code can and often does serve these expressive functions. But this fact has been used, often very cynically, by digital advocates to push the legal system into misapprehending expression as the primary purpose of code, which it is not. Then, decisions based in part or in whole on these expressive functions are taken as “proof” of some sort that code cannot be regulated. This is cynical because the advocates know that what they mean by code is not the freedom to express ideas with code, but to run it. Thus the courts hear “code is speech” as a doctrine about the fact that code can be used to express ideas between people, and that it therefore can’t be restricted on First Amendment grounds. Engineers and the EFF and others hear these decisions as resulting in a dictum that the government cannot restrict the running of programs because running programs is like speaking. This is, as near as one can be about legal matters, just false on its face. And it results in abhorrent doctrine: it says that as long as a corporate actor takes action using programming code, the government cannot restrict it because of the First Amendment.

It’s important to note that two of the cases most often cited in this context, Bernstein and Universal Studios, both make this point. Bernstein explicitly restricts itself to the publication of code because code in execution is so different from ordinary forms of expression, and the dissent in Bernstein does not even accept this (Appendix 2); the court in Universal Studios considers at length “the functional aspects of code”—that is, its use in execution—and considers them to raise different questions from its expressive characteristics (Appendix 3).

iv. Apple’s Argument Expands the Idea of “Code Is Speech” in an Unprecedented and Antidemocratic Manner

Apple claims that “code is speech” is settled law; it isn’t. Then it claims that “code is speech” prohibits the government from compelling Apple to write code because that would be compelling it to speak. But that assertion runs up against something that actually is settled law: the government can compel corporations to speak, and treats them much differently from how it treats individuals (see “Compelled Speech”). In its filings and press releases, Apple refers to itself as if it were a natural person. Natural persons have many more First Amendment (and other Bill of Rights) protections than do corporations, although the boundaries between the two are more porous and murky than they should be (see Greenwood’s essays, and Pollman 2011). Many lawyers and legal scholars (and even dissenting Supreme Court justices) feel that the leaps of logic that come to fruition in Citizens United and Hobby Lobby—that corporations are people, that money spent (by corporations) is speech, and that corporations can have religious beliefs—depend on truly untenable extensions of constitutional rights to corporations. But even there, the Court has not ruled that corporations have exactly the same rights under the First Amendment that natural persons do.

Further, Apple nowhere mentions that the question of corporate compelled speech is one that the Supreme Court has adjudicated, largely against the position it takes. Corporations are artificial entities that only exist at the pleasure and license of the government. In the very act of incorporation, corporations agree to participate in certain aspects of law and regulation that natural persons do not have to.

Corporations can be compelled to speak, and restrained from speaking, in ways natural persons cannot. For example, corporations are not allowed to promulgate false advertising. This is different from the fact that individuals can sue after the fact for libel and slander: the FTC is allowed to censor advertisements it deems false, along with other tools at its disposal. In addition, the Court has long held that the FDA and other regulatory bodies can demand that products be labeled for potential harm to human life (e.g., poison labels on pesticides, warning labels on pharmaceuticals and tobacco products), for informational purposes related only generally to health (nutrition information on food), and even for purposes of general engineering safety and operability (such as the engineering labels that Apple itself includes inside its products and in the literature surrounding them—in other words, Apple is already being “compelled” to speak millions of times each day, much more publicly than it would have been in #AppleVsFBI, and does not complain about it).

Apple’s public portrayal of itself as a natural person whose rights to speak are being infringed by the government—and the way advocates unquestioningly repeat this representation—shows the danger of accepting “code is speech” as a general principle. Its effect is not at all what it appears to be. That makes sense: the speech of individuals is already protected at a very profound level by First Amendment jurisprudence, and if and when it appears that an individual is “speaking” through code—as at least the majority opinion in Bernstein found—courts have adequate resources to confront that issue. Precisely because “code is speech” is so vague on the one hand, and so badly grounded in actual free speech doctrine on the other, it leads to objectionable restatements of legal principle that are easily exploited by those who have the strongest vested interest in controlling how governments regulate power.

v. “Code Is Speech” Diminishes Civil Rights

Richards, Goodman, and others (Boston Globe, Fein and Gifford) have done good work in nailing this down, so I’m not going to belabor it, except to make a relatively obvious point that is repeatedly echoed in the case law. Most code is action. A huge amount of code is promulgated by corporations, not individuals. The effect of embracing “code is speech” is to say that governments cannot regulate what corporations do. That might seem like hyperbole, but it is 100% on board with the Silicon Valley view of the world, the overt anarcho-capitalism that many of its leaders embrace, and the covert cyberlibertarianism that so many more accept without fully understanding its consequences. It is profoundly anti-democratic. This is part of what makes it so confounding that so many see Apple as some kind of civil rights actor, especially when its avowed mission is to sell products that block the serving of legal, targeted warrants, and when it already makes such outrageous statements regarding the corporate taxes it clearly would owe if not for the existing successful capture of regulatory and legislative bodies that enable the kinds of corporate inversions and other tax dodges we see on display in the Panama leaks, among many other places.

The most tireless legal scholar on this point is Hofstra professor Daniel J. H. Greenwood, whose works, cited in the bibliography below, I cannot recommend enough. Greenwood (1998, 2005, 2013; Greenfield, Greenwood and Jaffe 2007) has long argued that “nothing in the structure or language of the bill of rights suggests that the traditional rights of American citizens apply to corporations” (Greenwood 2013, 14). Coates (2015), Miller (2011), Pollman (2011), and quite a few others have worked on it as well. To most scholars who don’t have a vested financial interest in the success of one company or another (e.g. those who don’t work directly for corporations or for corporate-funded think tanks), the encroachment of corporations into rights language has been one of the signal failures of US democracy. This is not to say corporations never should have any rights, or even that the notion of “artificial person” is entirely bankrupt (it seems to do important work, for example, in the rights of both corporations and natural persons to act as equal parties in contracts and to use the civil courts to adjudicate breaches of contract), but that the general expansion of corporate personhood and identification of corporations as the locus of constitutional rights is among the most significant dangers facing democratic governance today.

Consider this truly jarring statement regarding #AppleVsFBI from EFF’s Executive Director, Cindy Cohn:

The Supreme Court has rejected requirements that people put “Live Free or Die” on their license plates or sign loyalty oaths, and it has said that the government cannot compel a private parade to include views that organizers disagree with. That the signature and code in the Apple case are implemented via technology and computer languages rather than English makes no difference. For nearly 20 years in cases pioneered by EFF, the courts have recognized that writing computer code is protected by the First Amendment.

EFF is mostly staffed by lawyers. Cohn is an attorney with a long pre-EFF history of working for civil and human rights—she actually worked on Bernstein, which EFF persists in mischaracterizing in several critical ways. Yet rather than making clear that she is advancing a truly novel argument about a corporation being compelled to speak—or, to be much more honest, to take action—she purposely blurs the lines between code as action and code as speech, and between individuals and corporations. She writes that “the FBI should not force Apple to violate its beliefs,” but the only case the Supreme Court has ever decided that even suggests corporations have beliefs is the horribly right-wing 2014 Hobby Lobby decision, which nobody outside of far-right ideologues should endorse, and which itself depends on the fact that Hobby Lobby is a family-owned private corporation, not a public company like Apple. It is fine to endorse this view, I suppose, but to frame it in terms of loyalty oaths is really dirty pool. This is right-wing politicking of the highest order, demanding that corporations be extended the full panoply of rights that the framers and almost all non-technology and non-right-wing thinkers have always thought apply only to natural persons. That it can somehow be mounted in terms of “human rights” and “freedom” is really shocking. As a principle, “code is speech” does not represent a natural extension of rights, but rather a significant curtailing of rights, by putting ordinary actions outside the penumbra of legal regulation. Hopefully, should the matter ever be fully adjudicated by the Supreme Court, sanity will prevail (which is obviously asking a lot), and this will be made as clear as it should be.

code is speech

Image Source: ShutterStock/Mclek



APPENDIX 1: The Cases Do Not Add Up to “Settled Law”

US Court Cases Cited By Apple in Its #AppleVsFBI Motions

  • Universal City Studios, Inc. v. Corley, 273 F.3d 429, 449 (2d Cir. 2001); this case is sometimes referred to as Universal Studios v Reimerdes; the Corley name got attached only on the 2001 appeal. This case is different from the encryption cases, because it asks whether the First Amendment prohibits the government from regulating the distribution of a software program designed to circumvent legal copyright protections (under the DMCA). Unlike some of the other cases, this one does not turn on the First Amendment question. Although it is used by advocates as if it does, in fact the language in this string of cases, especially the appeal, rebuts both the legal and factual arguments about “code is speech” much more than it supports them. Links: Wikipedia; decisions: Corley (appeal), Reimerdes (lower court).
  • Junger v. Daley, 209 F.3d 481, 485 (6th Cir. 2000); this case builds largely on the Bernstein rulings that were eventually overruled and then vacated. Further, like so many of the cases mentioned here, language in the decision rebuts rather than supports the expansive reading of “code is speech.” This case is also about the publication (and execution?) of encryption software. Links: Wikipedia; case archive.
  • 321 Studios v. Metro Goldwyn Mayer Studios, Inc., 307 F. Supp. 2d 1085, 1099–1100 (N.D. Cal. 2004); another case (like Universal Studios) about the DMCA, and another case that rebuts rather than supports the general “code is speech” equivalence: as Wikipedia puts it, the court “did not agree that enforcing the DMCA in this case would regulate computer code on the basis of content. The court held that only the functional element of the computer code was barred, and so the DMCA did not suppress the code based on its content. As such, the court applied an intermediate scrutiny standard in evaluating the restriction of speech in this case.” And it’s important to add: 321 Studios, the litigant who made the First Amendment argument, lost the case: “The court held that both of DVD Copy Plus and DVD-X Copy violated the DMCA and that the DMCA was not unconstitutional. The court enjoined 321 Studios from manufacturing, distributing, or otherwise trafficking in any type of DVD circumvention software.” Links: Wikipedia.
  • United States v. Elcom Ltd., 203 F. Supp. 2d 1111, 1126 (N.D. Cal. 2002); this case may be the biggest stretch of all those listed. The case was a criminal prosecution of software developer Dmitry Sklyarov under the DMCA. Sklyarov was acquitted by the jury, so there was no appeal possible. There is no judicial decision to refer to, and it is not at all clear that the case has anything to do with the First Amendment. Links: Wikipedia.
  • Bernstein v. Dep’t of State, 922 F. Supp. 1426, 1436 (N.D. Cal. 1996): this case is more properly referred to as Bernstein v. U.S. Dept. of Justice, 176 F.3d 1132 (9th Cir. 1999), because “the Ninth Circuit ordered that this case be reheard by the en banc court, and withdrew the three-judge panel opinion.” It’s no accident that Apple cites the 1996 decision, as do advocates like EFF, because that decision says something about “code is speech” that is superseded in the 1999 appeals decision overruling the 1996 ruling (see Appendix 2). Further, this entire stream of cases was vacated in 2003 and was never fully tested in the courts; it has no binding force: “On October 15, 2003, almost nine years after Bernstein first brought the case, the judge dismissed it and asked Bernstein to come back when the government made a ‘concrete threat.’” Although Bernstein is the case “code is speech” advocates cite most routinely as if it supported their position (it does not), it is the least like settled law among all of them. Links: Wikipedia; EFF archive; Bernstein’s archive.

Relevant US Supreme Court Cases Not Cited By Apple in Its #AppleVsFBI Motions

  • Sorrell v. IMS Health Inc., No. 10-779, 131 S.Ct. 2653 (2011). Apple does not cite this case, which is odd, since it is the only one to have advanced to the Supreme Court, the only one to have turned directly on First Amendment questions, and the only one that could plausibly count as “settled law” in the usual sense. Of course this case is not directly about code: it is about whether governments can prevent corporations from selling data they have collected. Like Citizens United, this case was written by the right wing of the Court, and along with that case it is frequently described by everyone to their left as a truly disturbing extension of First Amendment rights to corporations. Links: Wikipedia; decision.
  • Brown v. Entertainment Merchants Ass’n, 564 U.S. 786 (2011). The other Supreme Court case that appears to touch on code as speech. In Brown, the Supreme Court ruled 7–2 that video games deserve the same protections as any other forms of cultural expression. Andrew Tutt summarizes the holding: “As a threshold matter, the Court had to decide whether video games were speech. Rather than reach beyond video games to software generally, the Court zeroed in on video games and held that they were speech because they communicated ideas through familiar literary devices: ‘Like the protected books, plays, and movies that preceded them, video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world).’” This sidesteps rather than addresses the main code-as-speech question: it is hard to argue that cultural products made with computer code are any different from any other cultural products, so they deserve identical kinds of protection to those other products. The case is clearly about things built with code, rather than code itself; Tutt is particularly good on the fundamental differences between the two. Links: Wikipedia; decision.
  • Citizens United v. Federal Election Commission, No. 08-205, 558 U.S. 310 (2010). The case in which the Supreme Court famously ruled that laws constraining campaign-related expenditures, even when those expenditures were made by corporations, violated the First Amendment. Relevant here because even the ACLU sided with the majority view, which is typically summarized as “money is speech.” Citizens United extends, and to some extent depends upon, the expansion of First Amendment rights to entities other than natural persons (see e.g. Park 2014 for a catalog of some of the other relevant cases), but even reasonable First Amendment advocates who agree with the basic thrust of the decision argue that the Court did much more damage than necessary to the constitutional fabric through “opportunistic overreach”: “In Citizens United, the Court was presented with a narrow question about the constitutionality of campaign finance rules as applied to a nonprofit’s on-demand video, but it transformed the case into an opportunity to rule with a broad brush, putting essentially all future regulation of campaign finance in conspicuous jeopardy” (Tribe 2015, 476-7). Links: Wikipedia; decision.
  • Burwell v. Hobby Lobby, 573 U.S. ___ (2014). In some ways, even more than Citizens United, this is the case that should disturb civil liberties advocates with regard to the “code is speech” claim. In Hobby Lobby the Court decided that corporations—although, in this case, a very specific kind of corporation that is closely held by a family—can have the kind of religious beliefs that the First Amendment was intended to protect under the free exercise clause. Almost all commentators, even some who support Citizens United, feel that in Hobby Lobby the court went far beyond the question put to it in the case. As one prominent legal commentator puts it, the “unfounded claims” made by the majority in Hobby Lobby “disregarded the fundamental feature of state corporate law: separation of ownership from the entity”: “Far from being ‘quite beside the point,’ legal separateness is the point of creating a corporation” (Garrett 2014, 145-6). In her dissent, Justice Ruth Bader Ginsburg wrote: “the exercise of religion is characteristic of natural persons, not artificial legal entities” (Hobby Lobby at 2794, Ginsburg, J., dissenting). It’s notable how much the “code is speech” defense mounted by Apple portrays corporations as natural persons with the kinds of beliefs, attitudes and rights that many outside the far right think only attach to natural persons. Links: Wikipedia; slip opinion.

APPENDIX 2: Bernstein Explicitly Restricts Itself to the Publication and Expressive Features of Code

The part of the lower court ruling that EFF and others love to cite, but that was superseded on appeal:

This court can find no meaningful difference between computer language, particularly high-level languages as defined above, and German or French….Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it…
-Judge Patel, April 15, 1996

This seems mistaken for many reasons, among them that it asserts an entirely new principle—that music, mathematical equations, and code are all just language—which is obviously false. It also relies on the completely false equivalence between “programming languages” and “[human] languages,” another characteristic digital fallacy that requires separate treatment; if you want some sense of why it is wrong, ask a linguist.

From the much more limited appeals ruling by Betty Fletcher:

Cryptographers use source code to express their scientific ideas in much the same way that mathematicians use equations or economists use graphs. Of course, both mathematical equations and graphs are used in other fields for many purposes, not all of which are expressive. But mathematicians and economists have adopted these modes of expression in order to facilitate the precise and rigorous expression of complex scientific ideas. Similarly, the undisputed record here makes it clear that cryptographers utilize source code in the same fashion. (Fletcher 1999 at 4233)

Judge Nelson’s dissent is even stronger; I think it is correct, and I hope that any future rulings, especially by the Supreme Court, follow its reasoning. Nelson writes that he is

inevitably led to conclude that encryption source code is more like conduct than speech. Encryption source code is a building tool. Academics and computer programmers can convey this source code to each other in order to reveal the encryption machine they have built. But, the ultimate purpose of encryption code is, as its name suggests, to perform the function of encrypting messages. Thus, while encryption source code may occasionally be used in an expressive manner, it is inherently a functional device. (Nelson, dissent, 4245-6)

APPENDIX 3: Both Universal Studios Rulings Say Unambiguously that Code Can Be Regulated

Universal City Studios v. Reimerdes:

These considerations suggest that the DMCA as applied here is content neutral, a view that draws support also from City of Renton v. Playtime Theatres, Inc. The Supreme Court there upheld against a First Amendment challenge a zoning ordinance that prohibited adult movie theaters within 1,000 feet of a residential, church or park zone or within one mile of a school. Recognizing that the ordinance did “not appear to fit neatly into either the ‘content-based’ or the ‘content-neutral’ category,” it found dispositive the fact that the ordinance was justified without reference to the content of the regulated speech in that the concern of the municipality had been with the secondary effects of the presence of adult theaters, not with the particular content of the speech that takes place in them. As Congress’ concerns in enacting the anti-trafficking provision of the DMCA were to suppress copyright piracy and infringement and to promote the availability of copyrighted works in digital form, and not to regulate the expression of ideas that might be inherent in particular anti-circumvention devices or technology, this provision of the statute properly is viewed as content neutral. (at 21)

And further,

This analysis finds substantial support in the principal case relied upon by defendants, Junger v. Daley. The plaintiff in that case challenged on First Amendment grounds an Export Administration regulation that barred the export of computer encryption software, arguing that the software was expressive and that the regulation therefore was unconstitutional. The Sixth Circuit acknowledged the expressive nature of computer code, holding that it therefore was within the scope of the First Amendment. But it recognized also that computer code is functional as well and said that “[t]he functional capabilities of source code, particularly those of encryption source code, should be considered when analyzing the governmental interest in regulating the exchange of this form of speech.” Indeed, it went on to indicate that the pertinent standard of review was that established in United States v. O’Brien, the seminal speech-versus-conduct decision. Thus, rather than holding the challenged regulation unconstitutional on the theory that the expressive aspect of source code immunized it from regulation, the court remanded the case to the district court to determine whether the O’Brien standard was met in view of the functional aspect of code. (at 24)

APPENDIX 4: DoJ Response to Apple’s #AppleVsFBI Filing

On the difference between corporate & individual speech, & quoting two of the exact cases Apple cites to show that they do not support its argument:

Apple’s claim is particularly weak because it does not involve a person being compelled to speak publicly, but a for-profit corporation being asked to modify commercial software that will be seen only by Apple. There is reason to doubt that functional programming is even entitled to traditional speech protections. See, e.g., Universal City Studios, Inc. v. Corley, 273 F.3d 429, 454 (2d Cir. 2001) (recognizing that source code’s “functional capability is not speech within the meaning of the First Amendment”). “[T]hat [programming] occurs at some level through expression does not elevate all such conduct to the highest levels of First Amendment protection. Doing so would turn centuries of our law and legal tradition on its head, eviscerating the carefully crafted balance between free speech and permissible government regulation.” United States v. Elcom Ltd., 203 F. Supp. 2d 1111, 1128-29 (N.D. Cal. 2002). (US DoJ 2016, p32)

On the question of which scrutiny standard should apply:

Even if, despite the above, the Order placed some burden on Apple’s ability to market itself as hostile to government searches, that would not establish a First Amendment violation because the Order “promotes a substantial government interest that would [otherwise] be achieved less effectively.” Rumsfeld, 547 U.S. at 67. There is no question that searching a terrorist’s phone—for which a neutral magistrate has found probable cause—is a compelling government interest. See Branzburg v. Hayes, 408 U.S. 665, 700 (1972) (recognizing that “the investigation of a crime” and “securing the safety” of citizens are “fundamental” interests for First Amendment purposes). As set forth above, the FBI cannot search Farook’s iPhone without Apple’s assistance, and Apple has offered no less speech-burdensome manner for providing that assistance.

For all of these reasons, Apple’s First Amendment claim must fail. (US DoJ 2016, p34)

APPENDIX 5: Scrutiny Tests in Constitutional Jurisprudence

One of several principles involved in #AppleVsFBI that, unlike “code is speech,” deserves the label of “settled law” is the application of so-called “scrutiny tests” to cases involving possible governmental infringement of fundamental civil rights. While scrutiny analysis is most familiarly applied in First Amendment cases, it is used across a wide variety of rights cases, and is frequently associated with the Equal Protection Clause of the 14th Amendment (though its connection to that clause is not entirely clear; see Siegel 2006 for a thorough analysis).

To show how basic scrutiny tests are to US rights law, here’s a characteristic passage from a Con Law textbook I was able to find online:

The determination that “speech” is involved is just the beginning. It means that the case will be decided under the First Amendment. However, it does not guarantee the outcome. The right to speak is not absolute. A society in which the government was powerless to restrain citizens from speaking at any time or place, on any subject, however loudly they pleased, would be an insufferable place to live. The First Amendment does not strip the government of power to regulate speech; it prohibits the government from “abridging freedom of speech.” Deciding when a restriction “abridges freedom of speech” is what First Amendment jurisprudence is about; this determination calls for complex value judgments. (Kanovitz 2010, 45; my emphasis)

These value judgments are made through the use of a well-established concept that one never reads in the pro-“code is speech” evangelism: “scrutiny.” Scrutiny is part of what Con Law students also learn in their first year. It refers to the kinds of tests courts must apply to determine whether a law or regulation is allowable. The closer a speech act is to the core case—core political speech, such as an editorial endorsing a political candidate or the passage of a law—the higher the level of scrutiny that the courts must apply. The highest level is called “strict scrutiny”; in these cases, the government must show a “compelling governmental interest” in the goal of the regulation, and that the regulation is “narrowly tailored” to accomplish that goal. Even in cases of core political speech, regulations that meet the “strict scrutiny” tests have passed and will continue to pass Supreme Court muster. The most familiar example is the content-based prohibition on political speech within a certain distance of polling places on election day. Remember, that is an example of the most important kind of speech on anyone’s philosophy, and yet courts have routinely ruled that, within these very limited contexts, actual prohibition of speech is fully legal.

Here’s material from a Con Law resource at the University of Missouri-Kansas City that goes into some detail about the varying application of scrutiny tests with regard to the Equal Protection Clause:

Legislation frequently involves making classifications that either advantage or disadvantage one group of persons, but not another.  States allow 20-year-olds to drive, but don’t let 12-year-olds drive.  Indigent single parents receive government financial aid that is denied to millionaires.  Obviously, the Equal Protection Clause cannot mean that government is obligated to treat all persons exactly the same–only, at most, that it is obligated to treat people the same if they are “similarly circumstanced.”

Over recent decades, the Supreme Court has developed a three-tiered approach to analysis under the Equal Protection Clause.

Most classifications, as the Railway Express and Kotch cases illustrate, are subject only to rational basis review.  Railway Express upholds a New York City ordinance prohibiting advertising on commercial vehicles–unless the advertisement concerns the vehicle owner’s own business.  The ordinance, aimed at reducing distractions to drivers, was underinclusive (it applied to some, but not all, distracting vehicles), but the Court said the classification was rationally related to a legitimate end. Kotch was a tougher case, with the Court voting 5 to 4 to uphold a Louisiana law that effectively prevented anyone but friends and relatives of existing riverboat pilots from becoming a pilot.  The Court suggested that Louisiana’s system might serve the legitimate purpose of promoting “morale and esprit de corps” on the river.  The Court continues to apply an extremely lax standard to most legislative classifications.  In Federal Communications Commission v. Beach (1993), the Court went so far as to say that economic regulations satisfy the equal protection requirement if “there is any conceivable state of facts that could provide a rational basis for the classification.”  Justice Stevens, concurring, objected to the Court’s test, arguing that it is “tantamount to no review at all.”

Classifications involving suspect classifications such as race, however, are subject to closer scrutiny. A rationale for this closer scrutiny was suggested by the Court in a famous footnote in the 1938 case of United States v. Carolene Products. Usually, strict scrutiny will result in invalidation of the challenged classification–but not always, as illustrated by Korematsu v. United States, in which the Court upholds a military exclusion order directed at Japanese-Americans during World War II. Loving v. Virginia produces a more typical result when racial classifications are involved: a unanimous Supreme Court strikes down Virginia’s miscegenation law.

The Court also applies strict scrutiny to classifications burdening certain fundamental rights. Skinner v. Oklahoma considers an Oklahoma law requiring the sterilization of persons convicted of three or more felonies involving moral turpitude (“three strikes and you’re snipped”). In Justice Douglas’s opinion invalidating the law we see the origins of the higher-tier analysis that the Court applies to rights of a “fundamental nature” such as marriage and procreation. Skinner thus casts doubt on the continuing validity of the oft-quoted dictum of Justice Holmes in a 1927 case (Buck v. Bell) considering the forced sterilization of certain mental incompetents: “Three generations of imbeciles is enough.”

The Court applies a middle-tier scrutiny (a standard that tends to produce less predictable results than strict scrutiny or rational basis scrutiny) to gender and illegitimacy classifications.

There is nothing to the claim that the fundamental rights listed in the Bill of Rights or the UDHR simply block governments from all legislation and regulation. Among other things, such categorical principles would make it impossible to adjudicate cases where rights and responsibilities come into conflict—which, again contrary to the rhetoric of crypto-anarchists and other black-and-white thinkers, happens more often than not. Wikipedia links: Strict Scrutiny; Intermediate Scrutiny

Posted in cyberlibertarianism, rhetoric of computation