We Don’t Know What ‘Personal Data’ Means

It’s Not Just What We Tell Them. It’s What They Infer.

Many of us seem to think that “personal data” is a straightforward concept.  In discussions about Facebook, Cambridge Analytica, GDPR, and the rest of the data-drenched world we live in now, we proceed from the assumption that personal data means something like “data about myself that I provide to a platform.” Personal data means my birthdate, my gender, my family connections, my political affiliations. It is this data that needs special protection and that we should be particularly concerned about providing to online services.

This is partly true, but in another way it is seriously misleading. Worse, its misleading aspects trickle down into much of our thinking about how we can and should protect our personal or private data, and more broadly our political institutions. The fact is that personal data must be understood as a much larger and even more invasive class of information than these straightforward items suggest.

A key to understanding this can be found in a 2014 report by Martin Abrams, Executive Director of the Information Accountability Foundation (a somewhat pro-regulation industry think tank), called “The Origins of Personal Data and Its Implications for Governance.” Abrams offers a fairly straightforward description of four different types of personal data: provided data, which “originates via direct actions taken by the individual in which he or she is fully aware of actions that led to the data origination”; observed data, which is “simply what is observed and recorded,” a category which includes an enormous range of data points: “one may observe where the individual came from, what he or she looks at, how often he or she looks at it, and even the length of pauses”; derived data, “derived in a fairly mechanical fashion from other data and becomes a new data element related to the individual”; and inferred data, “the product of a probability-based analytic process.”

To this list we’d need to add at least two more categories: anonymized data and aggregate data. Anonymized data is data that has had identifying information, for example a person’s name, stripped from it. Unlike the other categories, anonymized and pseudonymized data are directly addressed by the GDPR, which notes in Recital 26 that the “regulation does not therefore concern the processing of such anonymous information.” This might be more comforting if it were not clear that “true data anonymization is an extremely high bar, and data controllers often fall short of actually anonymizing data.”
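To make that bar concrete, here is a minimal sketch of a “linkage attack,” the kind of re-identification Latanya Sweeney famously demonstrated using zip code, birth date, and gender. Every record below is invented for the example; the point is only that a table with names stripped can often be re-identified by joining its remaining “quasi-identifiers” against a public dataset that does contain names.

```python
# Hypothetical illustration of a linkage (re-identification) attack.
# All records are invented for the example.

anonymized_health = [
    # (zip_code, birth_date, gender, diagnosis) -- names stripped
    ("02138", "1945-07-21", "F", "cirrhosis"),
    ("02139", "1982-03-02", "M", "asthma"),
]

public_voter_roll = [
    # (name, zip_code, birth_date, gender) -- public in many US states
    ("Jane Roe", "02138", "1945-07-21", "F"),
    ("John Doe", "02139", "1982-03-02", "M"),
]

# Join on the quasi-identifiers: if a combination is unique in both
# datasets, the "anonymous" record is re-identified.
for zip_code, dob, gender, diagnosis in anonymized_health:
    matches = [name for name, z, d, g in public_voter_roll
               if (z, d, g) == (zip_code, dob, gender)]
    if len(matches) == 1:
        print(f"{matches[0]} -> {diagnosis}")
```

Nothing in the first table is “personal” in the naive sense, yet a few lines of joining logic recover names; genuinely preventing this requires suppressing or generalizing the quasi-identifiers themselves, which is part of why true anonymization is so hard.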

Aggregate data, as I’m using the term here, refers to data that is collected at the level of the group, but does not allow drilling down to specific individuals. In both cases, the lack of direct personal identification may not interfere with the ability to target individuals, even if they can’t necessarily be targeted by name (although in the case of anonymized data, a major concern is that advances in technology all too often make it very possible to de-anonymize what had once been anonymized). GDPR’s impact on aggregate data is one of the areas of the regulation that remains unclear.

To understand how data that may on the surface seem “impersonal” can in fact be highly targeted at us as individuals, consider, for example, one form of analysis that is found all over the Facebook/Cambridge Analytica story: the so-called Big Five personality traits, which sometimes go by the acronym OCEAN (openness, conscientiousness, extroversion, agreeableness, and neuroticism). Researchers and advertisers focus on the Big Five because they appear to give marketers and behavioral manipulators particularly strong tools with which to target us.

A recent New York Times story takes some examples from the research of Michal Kosinski, a Stanford University researcher whose work is often placed at the center of these conversations, and which may have been used by Cambridge Analytica. While Kosinski might be a particularly good salesperson for the techniques he employs, we do not have to accept everything he says at face value to see that the general methods he uses are widely employed, and seem to have significant validity.

Kosinski provided the Times with inferences he made about individuals’ OCEAN scores based on nothing more than Facebook “like” data. He generated these inferences by taking a group of experimental participants, matching their “likes” against their measured OCEAN scores, and then using machine learning to infer probabilities for the associations between particular likes and particular OCEAN ranks. Beyond OCEAN rankings, Kosinski has in some places claimed even more for this technique: in a famous segment in Jamie Bartlett’s recent Secrets of Silicon Valley documentary, Kosinski correctly infers Bartlett’s religious background from his Facebook likes. This is also exactly the kind of aggregate, inferential data that Facebook has said the infamous “This Is Your Digital Life” app gathered for Cambridge Analytica, about not just the individuals who used it but their friends.
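To see the shape of this technique, here is a minimal sketch in Python, assuming scikit-learn and entirely invented data; it is not Kosinski’s actual code, data, or model. A binary matrix records which pages each participant liked; a regression is fit against questionnaire-derived openness scores; the fitted model then scores a new user from likes alone.

```python
# A minimal sketch of the inference pipeline described above (invented
# data, not Kosinski's actual model): fit likes against known OCEAN
# scores, then predict scores for new users from their likes alone.
import numpy as np
from sklearn.linear_model import Ridge

pages = ["A Clockwork Orange", "Marilyn Manson", "Hiking", "NASCAR"]

# Rows are participants, columns are pages (1 = liked). The published
# studies use thousands of columns and tens of thousands of rows.
likes = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
])

# "Openness" scores for the same participants, from a personality
# questionnaire administered alongside the like data (invented values).
openness = np.array([0.9, 0.8, 0.4, 0.2, 0.7])

model = Ridge(alpha=1.0).fit(likes, openness)

# Infer openness for a new user from nothing but their likes.
new_user = np.array([[1, 0, 0, 0]])  # likes only "A Clockwork Orange"
print(model.predict(new_user))
```

Note that the fitted coefficients encode nothing but statistical association between columns and the target; the model has no representation of why a given page correlates with a trait, a point that matters for what follows.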

Even in the examples Kosinski gives, it is tempting to draw causal relationships between the proffered data and the inferred data. Perhaps people who like A Clockwork Orange are particularly interested in alternate versions of reality, and perhaps this makes them “open” to new experiences; perhaps liking Marilyn Manson gives off visual or aural cues of neuroticism. This kind of reasoning is just the mistake we need to avoid. These causal relationships may even exist, but that does not matter, and discovering them is not what Kosinski’s techniques aim to do. The software that decides that more “open” people like A Clockwork Orange is not asking why that is the case; it only registers that it is.

The fact that some of these data points look like they have causal relationships to the inferred personality traits is misleading. What mattered was simply that the data had many points (thousands of different things people could “like”), so that it was possible to create rich mappings between that data and an experimental data set drawn from people whose OCEAN characteristics were well known, pairing those characteristics with their “likes.”

The same kind of comparison can be done with any set of data. It could be done with numbers, colors, the speed of mouse clicks, the timbre of voice, and many, many other kinds of data.

These facts create a huge conundrum for privacy advocates. We might think we have told Facebook that we like A Clockwork Orange, and that that is the end of the story. But what if, by telling them that fact, we have also told them that we are gay, or have cirrhosis of the liver, or always vote against granting zoning permits to big box stores? And what if we tell them such things not by anything as concrete as “liking” a movie, but simply by the speed and direction of our mouse movements?

It is critical to understand that it does not matter whether this information is accurate at the level of each specific individual. Data collectors know it is not entirely accurate. Again, people mistake the correlation for a linear, causal relationship (“if I like hiking as a hobby, I must be open to new experiences”) and then tend to evaluate that relationship on the grounds of whether or not it makes sense to them. But this is a mistake. What is accurate about these techniques is the statistical inference, something along the lines of “75% of those who report they like hiking rank high on the ‘openness’ scale.” Of course the inference is wrong in some individual cases. That does not matter. If an advertiser or political operative wants to affect behavior, they look for triggers that motivate people who score highly on “openness,” and then they offer products, services, and manipulative media keyed to similar scores.
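A toy calculation, again with invented numbers, shows how thin the needed claim is: the operative needs only a conditional rate over a population, never a fact about any individual.

```python
# Estimate P(high openness | likes hiking) from labeled data, then use
# it for targeting. All numbers are invented for the example.

people = [
    # (likes_hiking, high_openness)
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

hikers = [p for p in people if p[0]]
rate = sum(1 for _, is_open in hikers if is_open) / len(hikers)
print(f"P(high openness | likes hiking) = {rate:.0%}")  # 75% here

# Targeting everyone who likes hiking with "openness"-keyed ads reaches
# a population enriched for the trait. Individual misses are just noise.
```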

That is, they don’t have to know why one point of data implies another point of data. They only have to know that it does, within a certain degree of accuracy. This is why categories like inferential, aggregate, and anonymized data must be of primary concern in understanding what we typically mean by “privacy.”

Research into big data analytics is replete with examples of the kinds of inferential and derived data that we need to understand better. Data scientist and mathematician Cathy O’Neil’s important 2016 book Weapons of Math Destruction includes many examples. Some of her examples make sense, even if they offend pretty much any sense of justice we might have: the use of personality tests to screen job applicants based on inferences made about them rather than anything about their work history or qualifications (105-6), or O’Neil’s own experience building a system to determine a shopper’s likelihood to purchase based on their behavior clicking on various ads (46-7). Others derive inferences from data apparently remote from the subject, such as O’Neil’s citation of a Consumer Reports investigation that found a Florida auto insurer basing insurance rates more heavily on credit scores than on accident records (164-5). In that case, observed data (use of credit) is converted into derived data (the details of the credit report) and then, via big data analysis, converted into inferential data (likelihood of making an insurance claim).
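The insurance example can be made schematic. The sketch below (invented values and thresholds, not any insurer’s actual model) shows the three-stage conversion in miniature.

```python
# Schematic observed -> derived -> inferred pipeline (invented values).

# Observed data: raw credit behavior.
observed = {"late_payments_last_year": 3, "credit_utilization": 0.82}

def derive_credit_score(obs):
    # Derived data: a new element computed mechanically from observed data.
    score = (700 - 40 * obs["late_payments_last_year"]
             - 100 * obs["credit_utilization"])
    return max(300, min(850, round(score)))

def infer_claim_probability(credit_score):
    # Inferred data: a probability-based guess about something else
    # entirely (likelihood of an insurance claim), keyed to the derived
    # score rather than to any driving record.
    return 0.25 if credit_score < 600 else 0.10

score = derive_credit_score(observed)
print(score, infer_claim_probability(score))
```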

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, a 2018 volume by political scientist and activist Virginia Eubanks, contains many examples of derived and inferential data being used to harm already-vulnerable members of society, such as an Allegheny County, Pennsylvania, algorithm that attempted to predict which families were at high risk for child abuse. That algorithm was based on big data analysis of hundreds of variables, many of which have no proven direct relationship to child abuse, and it ended up disproportionately targeting African American families (Chapter 4). Elsewhere in the book, Eubanks writes about massive data collection programs in Los Angeles intended to provide housing resources to the homeless, but which end up favoring some individuals over others for reasons that remain opaque to human case workers, and that are not clearly based on the directly relevant observations about clients on which those case workers prefer to base life-changing decisions.

In the Cambridge Analytica case, inferential data appears to play a key role. When David Carroll and I requested our data from the company under UK law, the most interesting part of the results was a table of 10 ranked hot-button political issues. No information was provided about how this data was produced, but it clearly cannot have been provided data: it is not data I have directly provided to anyone, I have not even thought about these issues in this form, and if the data is correct, much of it is news to me. The data is likely not observed data either, since that would require a forum in which I had taken actions indicating the relative importance of these issues to me, and I can’t think of any forum in which I’ve done anything close to that. So that leaves inferential and derived data. Carroll and I, and the analysts we’ve been working with, presume that this data was inferred from some larger body of data. (Cambridge Analytica has at some points claimed to hold upwards of 4000 data points on individuals, on which it performs what Kosinski and others call “psychographics”: just the kind of inferential personal data I’ve been discussing, used to determine very specific aspects of an individual’s personality, including their susceptibility to various kinds of behavioral manipulation.) While it is hard to judge the accuracy of the ranked list precisely (in part because we don’t really know how it was meant to be used), overall it seems quite accurate, and thus offers a fairly complete inferred and/or derived political profile of me, based on provided and observed data that likely had at best a partial relationship to my political leanings.

CA SCL data

Yes, we should be very concerned about putting direct personal data out onto social media. Obviously, putting “Democrat” or even “#Resist” in a public Twitter profile tells anyone who asks what party we are in. We should be asking hard questions about whether it is wise to allow even that minimal kind of declaration in public, whether it is wise to allow it to be stored in any form, and by whom. But perhaps even more seriously, and much less obviously, we need to be asking who is allowed to process and store information like that, regardless of where they got it, even if they did not get it directly from us.

A side note: academics and activists sometimes protest that some kinds of data are inaccessible to them, citing the importance of understanding what companies like Facebook are doing with our data. That’s an important conversation to have, but it’s worth noting that both Kosinski and Alexander Kogan, another researcher at the heart of the Cambridge Analytica story, got access to the data they used because they were academics.

In his testimony before the US House of Representatives Energy and Commerce Committee on April 11, 2018, Facebook CEO Mark Zuckerberg offered the following reassurance to Facebook users:

The content that you share, you put there. You can take it down at any time. The information that we collect, you can choose to have us not collect. You can delete any of it, and, of course, you can leave Facebook if you want.

At first glance, this might seem to cover everything users would care about. But read the language closely. The content “users share” and the content that Facebook “collects” name much thinner segments of Facebook’s user data than the words might seem to suggest.

Just taking Zuckerberg’s language literally, “the content you share” sounds like provided data, and “the information that we collect” sounds like some unspecified mix of provided and observed data.

Mark Zuckerberg Data

But what about derived, inferred, and aggregate data?

What this kind of data can do for those who want to manipulate us is unknown, but its potential for harm is too clear to be overlooked. The existing regulations and enforcement agreements imposed on Facebook and other data brokers have proven insufficient. If there is one takeaway from the Cambridge Analytica story and the Facebook hearings and so on, it is that democracies, and that means democratic governments, need to get a handle on these phenomena right away, because the general public does not and cannot know the extent to which giving away apparently “impersonal” data might, in fact, reveal our most intimate secrets.

Further, as a few commentators have noted, Facebook and Google are the most visible tips of a huge iceberg. The hundreds of data brokers whose entire business consists in selling data about us that we never directly gave them may be even more concerning, in part because their actions are so much more hidden from the public. Companies like Acxiom aggregate, analyze, and sell data, both for advertising and for a wide range of other activities that impact us in ways we do not understand nearly well enough, up to and including the “social credit score” that the Chinese government appears to be developing to track and control many aspects of public behavior. Possibly even worse, the data fuels the activities of full-scale surveillance companies like Peter Thiel’s Palantir, with which Zuckerberg declared in his Congressional testimony he “isn’t that familiar,” despite Thiel being a very visible and outspoken early Facebook investor and mentor to Zuckerberg. Facebook itself has a disturbing interest in the data of people who have not signed up for the service, which only underlines its similarity to data brokers like Acxiom.

If Facebook and Google and the data brokers were to say, “you can obtain, and if you choose to, delete, all the data we have about you,” or better yet, “you have to actively opt-in to give us your data and agree to the processing we do with it,” that might go a long way toward addressing the kind of concerns I and others have been raising for a long time about what is happening with surveillance and behavioral manipulation in digital technology. But would that even be enough? Is it clear that data “about” me is all the data that is directly attached to my name, or whatever other unique personal identifier Facebook uses? Would these companies even be able to stay in business if they offered users that much control?

Even the much-vaunted and very important GDPR is not at all as clear as it could be about the different kinds of data.  If we are to rein in the massive invasions of our privacy found in social media, we need to understand much more clearly and specifically what that data is, and what social media companies and data brokers and even academics do with it.

Posted in "social media", privacy, rhetoric of computation | Tagged , , , , , , , , , , , , , , , | Leave a comment

The Terribly Thin Conception of Ethics in Digital Technology

Thanks in part to ongoing revelations about Facebook, there is today a louder discussion than there has been for a while about the need for deep thinking about ethics in the fields of engineering, computer science, and the commercial businesses built out of them. In the Boston Globe, Yonatan Zunger wrote about an “ethics crisis” in computer science.  In The New York Times, Natasha Singer wrote about “tech’s ethical ‘dark side.’”

Chris Gilliard wrote an excellent article in the April 9, 2018 Chronicle of Higher Education, focusing specifically on education technology, titled “How Ed Tech Is Exploiting Students.” Since students are particularly affected by the products of academic programs like computer science and electrical engineering, one might imagine and hope that teachers of these subjects would be particularly sensitive to ethical concerns. (Full disclosure: I consider Chris a good friend; he and I have collaborated on work in the past and intend to do so in the future, and I read an early draft of his Chronicle piece and provided comments to him.)

robot teacher fooled students

image source: YouTube

Gilliard’s concerns, expressed repeatedly in the article, have to do with (1) what “informed consent” means in the context of education technology; (2) the fact that participating in certain technology projects means that students are, often unwittingly, contributing their labor to projects that benefit someone else, that is, working for free; and (3) the fact that the privacy implications of many ed-tech projects are not at all clear to the students:

Predictive analytics, plagiarism-detection software, facial-recognition technology, chatbots — all the things we talk about lately when we talk about ed tech — are built, maintained, and improved by extracting the work of the people who use the technology: the students. In many cases, student labor is rendered invisible (and uncompensated), and student consent is not taken into account. In other words, students often provide the raw material by which ed tech is developed, improved, and instituted, but their agency is for the most part not an issue.

Gilliard gives a couple of examples of ed-tech projects that concern him along these lines. One of them is a project by Prof. Ashok Goel of the Georgia Institute of Technology.

Ashok K. Goel, a professor at the Georgia Institute of Technology, used IBM’s “Watson” technology to test a chatbot teaching assistant on his students for a semester. He told them to email “Jill” with any questions but did not tell them that Jill was a bot.

Gilliard summarizes his concerns about this and other projects as focusing on:

how we think about labor and how we think about consent. Students must be given the choice to participate, and must be fully informed that they are part of an experiment or that their work will be used to improve corporate products.

In an April 11 letter to the Chronicle, Goel objected to being included in Gilliard’s article. Yet rather than rebutting Gilliard’s critique, Goel’s response affirms both its substance and its spirit. In other words, despite claiming to honor the ethical concerns Gilliard raises, Goel seems not to understand them, and uses his lack of understanding as a rebuttal. This reflects, I think, the incredibly thin understanding of ethics that permeates the world of digital technology, especially, though not at all only, in education technology.

Here are the substantive parts of Goel’s response:

In this project, we collect questions posed by students and answers given by human teaching assistants on the discussion forum of an online course in artificial intelligence. We use this data exclusively for partially automating the task of answering questions in subsequent offerings of the course both to reduce teaching load and to provide prompt answers to student questions anytime anywhere. We deliberately elected not to inform the students in advance the first time we deployed Jill Watson as a virtual teaching assistant because she is also an experiment in constructing human-level AI and we wanted to determine if actual students could detect Jill’s true identity in a live setting. (They could not!)

In subsequent offerings of the AI class over the last two years, we have informed the students at the start of the course that one or more of the teaching assistants are a reincarnation of Jill operating under a pseudonym and revealed the identity of the virtual teaching assistant(s) at the end of the course. The response of the several hundred students who have interacted with Jill’s various reincarnations over two years has been overwhelmingly positive.

In what follows I am going to assume that Goel raised all the issues he wanted to in his letter. It’s possible that he didn’t; the Chronicle maintains a tight word limit on letters. But it is clear that the issues raised in the letter are the primary ones Goel saw in Gilliard’s article and that he thinks his project raises.

In almost every way, the response affirms the claims Gilliard makes rather than refuting them. First, Gilliard’s article clearly referred to “a semester,” which can only be the first time the chatbot was used, and Goel indicates, without explanation or justification, that he “deliberately elected not to inform the students in advance” about the project during that semester. Yet that deliberateness is exactly one of Gilliard’s points: what gives technologists the right to think they can conduct such experiments without student consent in the first place? Goel does not tell us. That subsequent instances had consent (of a sort, as I discuss next) only reinforces the notion that they should have had consent the first time as well.

There are even deeper concerns, which happen also to be the specific ones Gilliard raises. First, what does “informed consent” mean? The notion of “informed consent” as promulgated by, for example, the Common Rule of the HHS, the best guide we have to the ethics of human experimentation in the US, insists that one can only give consent if one has the option not to give consent. This is not rocket science. Not just the Common Rule, but the 1979 Belmont Report on which the Common Rule is based, itself reflecting on the Nuremberg Trials, defines “informed consent” specifically with reference to the ability of the subject to refuse to participate. This is literally the first paragraph of the Belmont Report’s section on “informed consent”:

Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them. This opportunity is provided when adequate standards for informed consent are satisfied.

If anything, the idea of “informed consent” has only grown richer since then. Perhaps Goel allows students to take a different section of the Artificial Intelligence class if they do not want to participate in the Jill Watson experiment; such a choice would be required for student consent to be “consent.” But his letter reads as if he does not realize that “informed consent” without choice is not consent at all. If so, this is not an isolated problem. Some have argued, rightly in my opinion, that failure to understand the meaning of “consent” is a structural problem in the world of digital technology, one that ties the behavior of software, platforms, and hardware to the sexism and misogyny of techbro culture. Even the Association for Computing Machinery (ACM, the leading professional organization for computer scientists) maintains a “Code of Ethics and Professional Conduct” that speaks directly of “respecting the privacy of others” in a way that is hard to reconcile with the Jill Watson experiment and with much else in the development of digital technology.

Further, Goel indicates that the point of the Jill Watson project is “an experiment in constructing human-level AI.” He does not make clear whether students are told that this is part of the point of the experiment. Nor does he acknowledge that the pursuit of what he calls “human-level AI” (a phrase that many philosophers, cognitive scientists, and other researchers, including myself, consider a misapplication of ordinary language) raises significant ethical questions of its own, the nature and extent of which certainly go far beyond what students in a course about AI can possibly have covered before the course begins, if they are covered at all. Do the students truly understand the ethical concerns raised by so-called AIs that can effectively mimic the responses of human teachers? Is their informed consent rich with discussion of the ethical considerations this raises? Do they understand the labor implications of developing chatbots that can replace human teachers at universities? If so, Prof. Goel does not indicate it.

The sentence about the Watson “experiment” appears to contradict another sentence in the same paragraph, where Goel writes that data generated by the Jill Watson project is used “exclusively for partially automating the task of answering questions in subsequent offerings of the course” (emphasis added). Perhaps “exclusively” means only that the literal data collected to train Jill Watson is segregated to that project. But the implication of the “experiment” sentence is that, whether or not that is the case, the project itself generates knowledge that Goel and his collaborators use in their other research and even commercial endeavors. This is exactly the concern that is front and center in Gilliard’s article. When students are informed about the ethical and privacy considerations raised by the technology in the course they are about to take, are they provided with a full accounting of Goel’s academic and commercial projects, with detailed explanations of how results developed in the Jill Watson project may or may not feed into them? Once again, if so, Goel appears not to think such concerns needed mentioning in his letter.

At any rate, Goel certainly makes it sound as if the work done by students in the course helps his own research projects, whether by providing training data for the Jill Watson AI model or by providing research feedback for future models. So do the press releases Georgia Tech issues about the project. It seems quite possible that this research could lead directly or indirectly to commercial applications; it may already be leading in that direction.

Gilliard concludes his article by writing that “When we draft students into education technologies and enlist their labor without their consent or even their ability to choose, we enact a pedagogy of extraction and exploitation.” In his letter Goel entirely overlooks the question of labor and claims that consent simply to have a bot as a virtual TA with no apparent alternative, and without clear discussion of the various ways their participation in this project might inform future research and commercial endeavors, mitigates the exploitation Gilliard writes about. This exchange only demonstrates how much work ethicists have left to do in the field of education technology (and digital technology in general), and how uninterested technologists are in listening to what ethicists have to say.

UPDATE, May 3, 5:30pm: soon after posting this story, I was directed by my friends Evan Selinger and Audrey Watters to two papers by Goel (paper one; paper two) which indicate that he had approval for the Jill Watson project from an Institutional Review Board (the bodies that implement the Common Rule), and in which he writes at greater length than in the Chronicle letter about some of the ethical implications of the project. I will update this post soon with some reflections on these papers.


Please Consider Supporting Our Legal Challenge to Cambridge Analytica’s Role in the Trump Election

Since December of last year, I have been part of a small group of concerned citizens engaged in a series of actions against Cambridge Analytica (CA) and its parent corporation, SCL Group.

I am writing this post in the hopes of gathering support (that is, funds) we need to continue this action. You can support us at our Crowd Justice page, which has more information.

Here I’ve tried to lay out some of the background behind our efforts.

Crowd Justice campaign header

Our actions are driven by concern about claims made by CA, and by those whose work it relies on, regarding the level of behavioral manipulation of which they are capable, and specifically about whether the techniques CA has developed have been used to manipulate voting behavior, especially in the 2016 US Presidential election and the UK Brexit referendum (although as US citizens our inquiry is limited to the US election). Carole Cadwalladr of The Guardian is the journalist who has covered the topic most extensively: see, for example, her piece on this campaign, “British Courts May Unlock Secrets of How Trump Campaign Profiled US Voters”; her earlier pieces on the Brexit/Leave.EU campaign, “Follow the Data: Does a Legal Document Link Brexit Campaigns to US Billionaire?” and “The Great British Brexit Robbery: How Our Democracy Was Hijacked”; and her pieces on the US Presidential election, including “Cambridge Analytica Affair Raises Questions Vital to Our Democracy” and “When Nigel Farage Met Julian Assange.”

The UK has more extensive data protection laws than the US. Its laws and regulations are administered by the Information Commissioner’s Office (an ICO that does not stand for Initial Coin Offering). Because CA/SCL, a British company (SCL Group) with an American subsidiary (CA), appears to have directly collected and used data about US citizens in its work for the Cruz and Trump campaigns in the 2016 election, the UK Data Protection Act (DPA) is triggered. The DPA has many provisions allowing individuals to discover exactly what data is being collected about them and how it is being used.

In late 2016, David Carroll of Parsons School of Design, I, and a few others, working with data researcher Paul-Olivier Dehaye and the PersonalData.io project he runs, submitted formal requests to CA/SCL under the DPA, which allows individuals to see the data companies hold about them.

During the spring, UK attorney (aka “solicitor”) Ravi Naik of Irvine Thanvi Natas Solicitors took an interest in our efforts and helped to coordinate our requests to CA/SCL. Ravi himself has written an article in The Guardian about the campaign.

It took longer than the 40 days the law allows, but eventually (in March) CA/SCL did return files to David and me. My file consisted of a single Excel spreadsheet with three tabs. Two of these contained relatively innocuous identifying information (date of birth, address, records of which elections I’ve voted in) of the kind available to marketers via public election records. The third, though, is shocking in its implications:

CA SCL data

What is startling about this data, in part, is that it appears to be specifically about how manipulable I might be with regard to central hot button issues in the political public sphere—not necessarily what my opinions are, but whether I would be susceptible to manipulation about issues like “Gun Rights” and “Traditional Social and Moral Values.” In general this psychographic profile strikes me as being plausible, though not necessarily how I consciously think I’d rank all of these issues for myself: but then again, the point of psychographic data is that it knows us better than we know ourselves.

We don’t think this information can possibly be complete, since it gives very little sense of what I think about any of these issues, which a marketer like CA/SCL would surely need in order to take targeted actions based on it. For example, even if environmental issues are at level 10 importance to me, this data does not indicate whether that means I consider the problem to be climate change itself, or the claim that climate change is a fraud.

CA/SCL provided no information whatsoever on where and how this information was gathered, whether it represents a purchase of existing information or analytics performed on a body of data CA/SCL also has but has not disclosed, and so on.

In 2017, the ICO issued a document called “Guidance for Political Campaigning.” Among the many provisions of this guidance that CA/SCL would appear not to have followed scrupulously even based on this limited amount of data is this:

79. The big data revolution has made available new and powerful technical means to analyse extremely large and varied datasets. These can include traditional datasets such as the electoral register but also information which people have made publicly accessible on Facebook, Twitter and other social media. Research and profiling carried out by and on behalf of political parties can now benefit from these advanced analytical tools. The outputs may be used to understand general trends in the electorate, or to find and influence potential voters. (16)

80. Whatever the purpose of the processing, it is subject to the DPA if it involves data from which living individuals can be identified. This brings with it duties for the party commissioning the analytics and rights for the individuals to whom the data relates. It includes the duty to tell people how their data is being used. While people might expect that the electoral register is used for election campaigns they may well not be aware of how other data about them can be used and combined in complex analytics. If a political organisation is collecting data directly from people eg via a website or obtains it from another source, it has to tell them what it is going to do with the data. In the case of data obtained from another source, the organisation may make the information available in other ways, eg on its website, if contacting individuals directly would involve disproportionate effort. It cannot simply choose to say nothing, and the possible complexity of the analytics is not an excuse for ignoring this requirement. Our code of practice on privacy notices, transparency and control provides advice on giving people this information. (16-17)

And

81. Even where information about individuals is apparently publicly accessible, this does not automatically mean that it can be re-used for another purpose. If a political organisation collects and processes this data, then it is a data controller for that data, and has to comply with the requirements of the DPA in relation to it. (17)

In its responses to us, other than the data mentioned above, CA/SCL has engaged in a pattern of bullying and denial that suggests to me, at least, that it has much more to disclose and will do everything in its power not to.

In order to take the next step in our legal challenge to CA/SCL, we need to raise £25,000. That is a lot of money, and none of it is going to us; we are raising it through the established legal crowdfunding site Crowd Justice. The money is needed for two reasons. First, in the UK the loser of a lawsuit can be forced to pay the winner’s legal fees (so-called “adverse costs”): if we sue CA/SCL and lose, we could be liable for the fees CA/SCL has paid its attorneys, and with the Mercers backing CA/SCL, we can be certain it will use some of the highest-priced corporate attorneys available. Second, the money is needed to pay our own legal fees and to partly compensate the solicitors working on the case for their time, even though most of their time is being donated.

We believe that continuing to force this issue could ultimately cause CA/SCL to release all of its data on the 2016 Presidential election, and possibly even the Brexit campaign. We also believe it may have extremely positive effects in preventing CA/SCL and other organizations from engaging in similar actions in the future.

We have at this point raised about £20,000 of the initial £25,000 we need to raise to start actions beyond making our subject data requests under the DPA. If you are at all inclined to help us in this effort, please visit our Crowd Justice page.

Posted in "social media", materality of computation, privacy, revolution, surveillance, we are building big brother, what are computers for | Tagged , , , , , , , , , , , | Leave a comment

Article: “The Militarization of Language: Cryptographic Politics and the War of All against All”

I have an article in the latest boundary 2 titled “The Militarization of Language: Cryptographic Politics and the War of All against All.” It is my most sustained attempt to locate and critique a political philosophy in the discourse of encryption advocates, a project I’ve also addressed in pieces like “Code Is Not Speech” and “Tor, Technocracy, Democracy.” It’s a piece I haven’t posted drafts of before, in part because it includes a relatively strong critique of some of Jacob Appelbaum’s talks, especially his infamous 30c3 talk, “To Protect and Infect: The Militarization of the Internet (Part Two; in three acts).” The title of Appelbaum’s talk was part of what motivated me to write this piece, which appears as part of, and was commissioned for, a boundary 2 dossier called “The Militarization of Knowledge.”

Here’s the formal abstract:

The question of the militarization of language emerges from the politics surrounding cryptography, or the use of encryption in contemporary networked digital technology, and the intersection of encryption with the politics of language. Ultimately, cryptographic politics aims to embody at a foundational level a theory of language that some recent philosophers, including Charles Taylor and Philip Pettit, locate partly in the writings of Thomas Hobbes. As in Hobbes’s political theory, this theory of language is closely tied to a conception of political sovereignty as necessarily absolute, in which the only available alternative to absolute sovereignty is a state of nature (or, more accurately, what Pettit 2008 calls a “second state of nature,” one in which language plays a key role). In that state of nature, the only possible political relation is what Hobbes calls a war of “all against all.” While Hobbes intended that image as a justification for the quasi-absolute power of the political sovereign, those who most vigorously pursue cryptographic politics appear bent on realizing it as a welcome sociopolitical goal. To reject that vision, we need to adopt a different picture of language and of sovereignty itself: a less individualistic picture that incorporates a more robust sense of shared and community responsibility, and that entails serious questions about the political consequences of the cryptographic program.

boundary 2 cover

If you’d like a copy and do have institutional access, please use this official link to the article at boundary 2 at Duke University Press.

If you don’t have institutional access and would like a copy, please email me (dgolumbia-at-gmail-dot-com) or access a copy at academia.edu.

Posted in "hacking", cyberlibertarianism, privacy, rhetoric of computation, surveillance | Tagged , , , , , , , , , , , | Leave a comment

The Destructiveness of the Digital Humanities (‘Traditional’ Part II)

In what purport to be responses or rebuttals to critiques I and others have offered of Digital Humanities (DH), my argument is routinely misrepresented in a fundamental way. I am almost always said to oppose the use of digital technology in the humanities. This happens despite the fact that I and those I have worked with use digital technologies in hundreds of ways in our research and that our critiques—typically including exactly the ones DHers are responding to—make this explicit.

It is undeniable that DH is in some sense organized around the use of some digital tools (but not others, and this gap is itself a very important part of how, on my analysis, the DH formation operates, a matter I have written about at some length). What I and the scholars I work with, as opposed to some conservative pundits, worry about is not the use of digital technology in the humanities. Speaking only for myself, what I oppose most strongly is the attitude toward the rest of the humanities I find widespread in DH circles: the view that the rest of the humanities (and particularly literary studies) are benighted, old-fashioned, out of date, and/or “traditional.”

This is what I mean when I describe DH as an ideological formation, more than it is a method or set of methods. The destructiveness in the ideological formation is what I oppose, not the use of tools per se, or even the actual work done by at least some in DH. The ideological formation, I have argued, is what distinguishes DH from the fields that preceded it (that is not to say that computational ideologies were not present in Humanities Computing—they certainly were—but they had failed to find the institutional purchase and power DH was seeking, which is why Humanities Computing needed to be transmuted into DH). Further, I have argued repeatedly that this destructiveness is an inherent feature of DH, perhaps even its most important constitutive feature: that is to say that the most common shared feature in DH work is its “incidental” destructiveness toward whatever it declares not to be part of itself.

There are deep and interesting ideological reasons why the apparent championing of digital tools should overlap with this overtly destructive attitude toward humanistic research, some of which I’ve just touched on in a recent post, “The Destructiveness of the Digital.” It has something to do with the destructiveness toward whatever is considered “non-digital” among digital partisans, which is part of why I have called DH the “cyberlibertarian humanities” (a claim that is just as routinely misrepresented by DH responders as is the rest of my critique).

I want to leave that aside, in favor of presenting just one unexpectedly clear and symptomatic public example of the destructiveness embedded in DH. In her interview in the LARB “Digital in the Humanities” series, senior DH scholar Laura Mandell approvingly quotes another senior DH scholar, Julia Flanders, saying: “We don’t want to save the humanities as they are traditionally constituted.”

We don’t want to save the humanities as they are traditionally constituted

That, to me, summarizes DH—or at least the part of DH that concerns me and others very greatly—in one compact sentence.

Now I’m sure, as soon as I point it out, there will be a lot of backtracking and spin claiming that this sentence means something other than what it clearly seems to. That will happen even though practice shows the plain reading is correct, and that DHers frequently speak in disparaging and dismissive ways about the rest of literary studies. Yet when pressed (and this is part of why I see DH as resembling so many other rightist formations), rather than simply owning and admitting its disparagement of other approaches, DH starts to blame those who point it out and to portray itself as the victim.

In context, I don’t think there is any other reasonable way to read the sentence. What “the humanities as they are traditionally constituted” means here is “the humanities other than DH.”

(It seems worth noting that the characteristic double-mindedness in DH about what constitutes DH itself makes this even more problematic and more transparently a kind of power politics: the only kinds of humanities research that should be saved are not “the kind that uses digital tools,” since virtually all humanities research these days uses digital tools in many different ways, but instead, “whatever scholars who are identified with DH say is part of DH,” a fact which in certain moods even some DHers themselves acknowledge.)

Further, that quotation has been out there now for over a year, and nobody has, as far as I know, bothered to comment or push back on it, despite plenty of opportunities to do so; that fact in itself shows the insensitivity in DH to its own institutional politics.

For reference, here is the entire exchange in which Mandell’s statement appears:

Another concern that has come up deals with public intellectualism, which many scholars and journalists alike have described as being in decline (for example, Nicholas Kristof’s New York Times essay last year). What role, if any, do you think digital work plays? Could the digital humanities (or the digital in the humanities) be a much-needed bridge between the academy and the public, or is this perhaps expecting too much of a discipline?

I have a story to tell about this. I was at the digital humanities conference at Stanford one year and there was a luncheon at which Alan Liu spoke. His talk was a plea to have the digital humanities help save the humanities by broadcasting humanities work — in other words, making it public. It was a deeply moving talk. But to her credit, Julia Flanders stood up and said something along the lines of, “We don’t want to save the humanities as they are traditionally constituted.” And she is right. There are institutional problems with the humanities that need to be confronted and those same humanities have participated in criticizing the digital humanities. Digital humanists would be shooting themselves in the foot in trying to help the very humanities discipline that discredits us. In many ways Liu wasn’t addressing the correct audience, because he was speaking to those who critique DH and asking that they take that critical drive that is designed to make the world a better place and put it into forging a link with the public — making work publicly available. Habermas has said that the project of Enlightenment is unfinished until we take specialist discourses and bring them back to the public. This has traditionally been seen as a lesser thing to do in the humanities. For Habermas, it is seen as the finishing of an intellectual trajectory. This is a trajectory that we have not yet completed and it is something, I think, the digital humanities can offer.

The archness and self-contradictory nature of this passage are emblematic of a phenomenon we see more and more of in DH circles. Literally within the same paragraph where she declares that the rest of the humanities should go away, in a remarkable instance of what I like to call right reaction and what Michael Kimmel calls aggrieved entitlement, Mandell says that it is the rest of the humanities that are engaged in “discrediting” DH. One has to ask: what is the proper way for “non-DH” humanists to react to a very successful effort (in many places, literally the only thing administrators know about what is happening in English departments these days) that says the humanities shouldn’t be saved? To simply stop practicing our discipline? And given that your project is predicated on ending the rest of the humanities, how could any response that disagrees with that not also be (wrongly) construed as “discrediting” your practice?

It’s also worth noting that in Mandell’s story, Alan Liu is the one making the request to support the humanities, and that Liu is one of the few English professors identified with and accepted by the DH community who has refused to give ground on the importance of non-DH literary studies. In other words, his request could and should have been met with sympathy and respect, if DH really did not contain a kernel of destructive animus toward the rest of the humanities. As this microcosmic scene suggests, Liu’s efforts to get the DH community to support non-DH literary studies have seen very little uptake.

In fact, if we step back just a bit, the scene is bizarre to imagine: one digitally-respected senior scholar says “please don’t kill the rest of the humanities,” and a few others say, “no, we want to kill them.” Of all the people in the world who might be leading the charge to kill the rest of the humanities, how did we get to the place where it is nominal literature scholars doing so? The answer to that is DH: not the use of tools and the building of archives (more power to them), but the destructive, “digitally”-grounded ideology that DH is built from and that it revels in. The one that says all other forms of knowledge have suddenly become outmoded and “traditional,” and this one new way is now the exclusive way forward.

Late last year I wrote a post where I discussed the way that Immanuel Wallerstein analyzes the concept of “traditional” and its place in the global system of capital. This piece builds on that one, and I recommend reading the whole thing, but I’ll just quote one paragraph from Wallerstein that is especially germane to this point:

Historical capitalism has been, we know, Promethean in its aspirations. Although scientific and technological change has been a constant of human historical activity, it is only with historical capitalism that Prometheus, always there, has been ‘unbound,’ in David Landes’s phrase. The basic collective image we now have of this scientific culture of historical capitalism is that it was propounded by noble knights against the staunch resistance of the forces of ‘traditional,’ non-scientific culture. In the seventeenth century, it was Galileo against the Church; in the twentieth, the ‘modernizer’ against the mullah. At all points, it was said to have been ‘rationality’ versus ‘superstition,’ and ‘freedom’ versus ‘intellectual oppression.’ This was presumed to be parallel to (even identical with) the revolt in the arena of the political economy of the bourgeois entrepreneur against the aristocratic landlord. (Immanuel Wallerstein, Historical Capitalism, 75)

I doubt it will surprise anyone familiar with my way of thinking that I wrote this with an eye toward precisely the way the idea of “traditional” is used in DH: DH has always cast the rest of the humanities as “traditional,” in just the way that Wallerstein notes, and this despite the incredible variety of approaches (including the very approaches that ground DH) that “traditional” supposedly indicates.

This alignment of the DH project against what it falsely projects as “traditional” academic practice is part of why I see it as closely aligned with neoliberalism, in a deep and fundamental way that can’t be ameliorated by ad-hoc patches applied here and there. Until DH confronts the way that it has from its inception been deeply imbricated in a cultural conversation according to which technology points toward the future and everything (supposedly) “non-technological” points to the past, it will be unable to come to terms with itself as the ideological formation I and many others see it as.

The fact that this can occur within a disciplinary milieu where the identification of ideological formations had until very recently been a major part of the stock in trade is not just ironic, but symptomatic of DH as a politics. One way of looking at the social scene is this: DH scholars, who have in general eschewed and even dismissed the project of interpretation, especially politicized interpretation, in favor of formalism and textual editing and “building,” turn to their colleagues who still do specialize in interpreting ideologies and say that, in this one instance, our own profession, we don’t know how to use the methods in which we specialize. Is that credible? Is it credible that the critics of DH, who typically are people who specialize in sniffing out ideologies, don’t understand how to do ideology critique in our own field, but DHers, who in general avoid ideology critique like the plague, can somehow do it better than we do? Who is attacking whose professionalism here? And what could be more destructive to literary studies than to say that literary scholars do not understand how to do their own work?

To end on a positive note: I am frequently accused of wanting to “end” DH, whatever in the world that would mean, but that is only true in a very limited sense. I want to “end” the practice within DH of calling the rest of the humanities “traditional” and “anti-technology” and “out of touch” and “the past.” I want to “end” the rejection of theory and politics, and the weird idea that “close reading” is some kind of fetish, within the context of literary studies. I want to end the view that “building” is “doing something” whereas “writing” is not, and even the view that “building” and “writing” are different kinds of things. I want to end the view that DH is “public” and the rest of literary studies is not. I want DHers to embrace the fact that they are part of the humanities. This might end “DH” per se as an ideological formation, but it would not end the careers of scholars who want to use digital tools in the service of humanities research, of whom I am very much one. One might think this is asking virtually nothing at all (embrace and support the disciplines of which you are a part), but as the twinned quotation from Flanders and Mandell shows, especially given that it is offered specifically as a rebuke to exactly that request coming from “within” DH, it turns out to be asking a great deal.


The Destructiveness of the Digital

I’ve argued for a long time, in different ways, that despite its overt claims to creativity, “building,” “democratization,” and so on, digital culture is at least partly characterized by profoundly destructive impulses. While digital promoters love to focus on the great things that will come from some new version of something over its existing version, I can’t help focusing on what their promotion says, implicitly or explicitly, about the thing they claim to be replacing, typically at profit to themselves, whether in terms of political or personal power (broadly speaking) or financial gain.

Note that it is in no way a contradiction for both destructiveness and creativity to exist at the same time, something I repeatedly try to explain without much success. In fact it would be odd for only one or the other to exist, and one does not negate the other, at least not as a rule. The fact that there is a lot of creativity in digital culture does not directly address the question of whether there is also destructiveness. Further, the continual response that creativity does negate the destructiveness shifts the terms of discussion so that we can’t really deal as directly as we should with the destructiveness.

I’m not going to go into these arguments in detail right now, but just want to present a particularly clear example of digital destructiveness I happened to hear recently. On April 11 on a regular segment called “Game On” of a BBC Radio 4 program called “Let’s Talk About Tech” (the episode is available for listening and download through early May), host Danny Wallace interviews Hilmar Veigar Pétursson, CEO of CCP, the publisher of the MMORPG EVE Online (“a space-based, persistent world massively multiplayer online role-playing game”), on the occasion of that game’s winning a BAFTA award in the “evolving games” category.

EVE Online

Screen cap from the 2013 “largest space battle EVE Online has ever seen,” from the subreddit /r/eve via an article by Ian Birnbaum at PC Gamer

In the final exchange in the interview (starting around minute 23), Wallace asks Pétursson to reflect more generally about the nature of games and gaming. I’ve transcribed the whole exchange below.

Wallace sets the stage by invoking the defensive, aggrieved entitlement of the gamer, symptomatically portrayed in the voice of the scolding critic who essentially declares video games not to be art (with no interrogation of what “art” means exactly, beyond “not frivolous”), but Pétursson’s response goes well beyond what Wallace asks. Asked to articulate the value of EVE Online, Pétursson turns to attack (literally) all other forms of media, and in particular to disparage the entire project of reading. The claim, on its surface, has to be that all the kinds of philosophical and narrative engagement one experiences in books can be better experienced in video games than by reading the books and other texts (and experiencing the other media) out of which all world cultures emerge.

So we move from a largely fictional dismissal of the value of one medium—games—to an explicit and disparaging rejection of all other forms of media. Further, this disparagement rests on an unsurprising and completely unsophisticated account of what media consumption is really like: a wildly undertheorized presumption that looking at screens and using a pointing device constitutes “interaction” in a way that reading, or listening to the radio, or even watching screens without a pointing device at the ready, does not.

That whole frame is inaccurate. It suggests something massively untrue about the experience of reading (and even more of listening and talking) that no careful study of the subject would support, along with a deeply interested—that is, far from neutral—conception of what happens when we play games. After all, anyone who has ever participated in a raid in World of Warcraft knows that the feeling of suture and of interactivity that players have is, at best, profoundly weird: most of what is going on in the game and on the screen is absolutely not under the player’s “control,” and what is under “control” is a highly limited set of device clicks and gestures that certainly produce, or go along with, the feeling of being “in the game,” but are in fact very different from actually playing a game with one’s body (think here of a game like basketball or baseball). Further, that fictional relationship—the immersive sense that one is in the game and participating with its other elements—is philosophically much harder to distinguish than one might expect from the suturing relationship the viewer or reader has to texts and media of various sorts. The questions of why and how I identify with my avatar in a video game, as against why and how I identify with the main character or analytical perspective offered in a book, or a movie, and so on, are fascinating ones without easy answers. Of course, digital utopians long ago decided that digital media are “interactive” in a way other media are not, a notion that is itself built on a serious disparagement of anything non-digital (or anything digital utopians don’t like).

Pétursson tells us that the testimony we have from thousands of years and literally hundreds of millions of people regarding narrative, visual, and linguistic media can be dismissed, while the “thousands” of people who play EVE Online provide evidence that this new medium proves the fruitlessness of all the others. In other words, the testimony of EVE Online players is valid, but that of non-players is not. It may seem subtle, but this privileging of the testimony of those one values, and dismissal of the testimony of those one doesn’t, is one critical root of the development of hate. (Some readers will know that Pétursson’s complaint echoes a famously vicious and totally inaccurate assessment of Tolstoy’s novels [and a fortiori all novels] by digital guru and venture-capital consultant Clay Shirky.)

One of my main concerns with the destructiveness of digital culture has been precisely its disparagement and dismissal of all forms of knowledge that the digerati deem “traditional” or “out of date” or “fruitless,” typically with very little exposure to those forms of knowledge. I am especially concerned with what this perspective teaches with regard to politics. Politics is very complicated terrain for all of us, even those of us who study it for a living. Understanding how various political forces operate and take advantage of popular energy and opinion is among the most urgent political tasks of our time. It is beyond doubt that the rise of authoritarian populism in our time is fueled in part by a studied agnotology, by the promotion of ignorance about politics that makes people particularly vulnerable to manipulation.

So what politics does EVE Online teach, “fruitfully” as against the “fruitless” pursuit of political knowledge from reading and other forms of media? As a non-player of the game I’m in no position to judge, but it’s notable that the game is known for a fairly destructive take on governance. Here’s a bit from Wikipedia’s section on “griefing” in EVE Online:

Due to the game’s focus on freedom, consequence, and autonomy, many behaviors that are considered griefing in most MMOs are allowed in EVE. This includes stealing from other players, extortion, and causing other players to be killed by large groups of NPCs.

I don’t know if there’s any connection between Pétursson’s destructive attitude toward non-game forms of media and this overt hostility toward the ethical principles of social behavior that many of us adhere to. I don’t know whether players of EVE Online share Pétursson’s hostility. But it’s hard not to wonder.

And of course that isn’t even really the point. The point is that this hostility to anything currently identified as not being part of the digital is visible all over digital culture. It is far in excess of what the celebration of cool new things requires. And it is completely unmotivated. Large-category new forms of media do not eliminate or obviate older ones: movies didn’t eliminate books, television didn’t actually eliminate radio, and so on. You don’t have to hate books and movies and TV to enjoy games. You don’t have to hate to be part of the “new.” Unfortunately, too many people apparently think otherwise.

Transcribed from the April 11 “Game On” segment of BBC Radio 4’s “Let’s Talk About Tech”:

Question (Danny Wallace, BBC)

The old brain, the old parts of the media, for instance, and social commentators, and people who are cultural commentators, will say all video games, they’re just video games. They’re just for kids, or miscreants living in their parents’ basements. That is.. that’s firmly disappearing now, that point of view, isn’t it. You’ve lasted long enough to outlive the people who said, why on earth are you making, spending all this time and all this effort making something as frivolous as video games.

Answer (Hilmar Veigar Pétursson, CEO of CCP, publisher of EVE Online)

Yeah, I mean, in some aspects, we’re making computer games. And many aspects of EVE are like that. But there are also aspects of EVE which is nothing like that, which are so fundamentally unique, you can’t really… you would have to scramble for analogies. It really is a virtual life, where people live out. They do work, they trade, they build social structures, they make friends, they succeed, they fail, they learn, they have lessons in leadership, trust. It’s an extraordinarily beneficial activity, I would argue. And that’s not just my own point. I have thousands, tens of thousands of people that just fundamentally agree with me. So it’s an element of truth, once you get enough people to agree with it. So I’ve never really looked at it like that. The fact that we’re classified as a computer game, I mean, doesn’t really bother me. It helps people understand what it is. It’s not like I have a very good classification for what we really are: something virtual worlds, virtual life, social economy, I mean there are many analogies you can bring to it. But yeah, we’ve never really thought of it as just being computer games.

I would then argue, I mean there’s a lot of other things in human endeavors which are frankly uninteresting. If you look at most media, it’s broadcast to consumer and there’s no participation. Why is reading a book considered a better activity than playing a game? At least in a game you’re participating. In a book you’re just wallowing in some other’s imagination. How is that a fruitful activity? It’s very equivalent to watching TV. I find reading books… I generally frown upon it. I would rather play a game. I learn more from computer games than books.

Posted in cyberlibertarianism, games, rhetoric of computation

Race, Technology, and the Word “Traditional” in the World-System

“Traditional” is one of the more interesting words to keep track of in contemporary discourse, particularly when it comes up in discussions of technology.

For the most part, it is used as a slur.

It is a word used to disparage an object or practice, to compare it to whatever one wants to posit as “new” or “innovative” or even “worthwhile” or “useful.”

It’s an implicit slur: after all, in a variety of contexts, “traditions” and “traditional” are words that point to good things, things we (apparently) value, things we don’t necessarily want to change. Though these days, more and more, especially in discussions of technology and the economy, it’s the slur meaning that predominates.

I’ve always noticed this and meant to write a brief note about it, since it seems to me the question of what is “traditional” and what isn’t is highly relative and mobile. Before I could get around to that, though, I ran across a surprisingly pointed discussion of this term in an unexpected source: the short 1983 book Historical Capitalism (London: Verso), by the Marxist world-systems theorist and sociologist Immanuel Wallerstein.

Wallerstein’s work is usually, rightly, seen as an effort to understand how capitalism works across the globe, with a particular focus on international flows of trade and the ways classes can be played off against each other among as well as within countries. His best-known work is the multivolume The Modern World-System. Wikipedia provides the following fairly accurate if quite general summary of some key parts of his work:

A lasting division of the world into core, semi-periphery, and periphery is an inherent feature of world-system theory. Other theories, partially drawn on by Wallerstein, leave out the semi-periphery and do not allow for a grayscale of development. Areas which have so far remained outside the reach of the world-system enter it at the stage of “periphery”. There is a fundamental and institutionally stabilized “division of labor” between core and periphery: while the core has a high level of technological development and manufactures complex products, the role of the periphery is to supply raw materials, agricultural products, and cheap labor for the expanding agents of the core. Economic exchange between core and periphery takes place on unequal terms: the periphery is forced to sell its products at low prices, but has to buy the core’s products at comparatively high prices. Once established, this unequal state tends to stabilize itself due to inherent, quasi-deterministic constraints. The statuses of core and periphery are not exclusive and fixed geographically, but are relative to each other. A zone defined as “semi-periphery” acts as a periphery to the core and as a core to the periphery. At the end of the 20th century, this zone would comprise Eastern Europe, China, Brazil, and Mexico. It is important to note that core and peripheral zones can co-exist in the same location.

Yet what is sometimes less understood is that Wallerstein is a theorist of race and its critical role in the establishment of capitalism, that much of his early work focused on Africa, and that he considers himself profoundly influenced by the anticolonial writer Frantz Fanon.

Wallerstein describes Historical Capitalism as an attempt “to see capitalism as a historical system, over the whole of its history and in concrete unique reality” (7). The book is made up of revisions of three lectures Wallerstein gave in 1982, along with a new conclusion. The first chapter, Wallerstein says, is largely devoted to economics, the second to politics, and the third, which I’ll be discussing here, to culture. Its title is “Truth as Opiate: Rationality and Rationalization.” Somewhat surprisingly, the word “traditional” occupies a central place in Wallerstein’s analysis.

Wallerstein, Historical Capitalism (Verso, 1983)

These, for example, are the third chapter’s first two paragraphs:

Historical capitalism has been, we know, Promethean in its aspirations. Although scientific and technological change has been a constant of human historical activity, it is only with historical capitalism that Prometheus, always there, has been ‘unbound,’ in David Landes’s phrase. The basic collective image we now have of this scientific culture of historical capitalism is that it was propounded by noble knights against the staunch resistance of the forces of ‘traditional,’ non-scientific culture. In the seventeenth century, it was Galileo against the Church; in the twentieth, the ‘modernizer’ against the mullah. At all points, it was said to have been ‘rationality’ versus ‘superstition,’ and ‘freedom’ versus ‘intellectual oppression.’ This was presumed to be parallel to (even identical with) the revolt in the arena of the political economy of the bourgeois entrepreneur against the aristocratic landlord.

This basic image of a worldwide cultural struggle has had a hidden premise, namely one about temporality. ‘Modernity’ was assumed to be temporally new, whereas ‘tradition’ was temporally old and prior to modernity; indeed, in some strong versions of the imagery, tradition was ahistorical and therefore virtually eternal. This premise was historically false and therefore fundamentally misleading. The multiple cultures, the multiple ‘traditions’ that have flourished within the time-space boundaries of historical capitalism, have been no more primordial than the multiple institutional frameworks. They are largely the creation of the modern world, part of its ideological scaffolding. Links of the various ‘traditions’ to groups and ideologies that predate historical capitalism have existed, of course, in the sense that they have often been constructed using some historical and intellectual materials already existent. Furthermore, the assertion of such transhistorical links has played an important role in the cohesiveness of groups in their political-economic struggles within historical capitalism. But, if we wish to understand the cultural forms these struggles take, we cannot afford to take ‘traditions’ at their face value, and in particular we cannot afford to assume that ‘traditions’ are in fact traditional. (75-6)

So for Wallerstein, the very act of naming a practice “traditional” is an important part of the cultural work of global capitalism, tied directly to the historical creation of what we today call “race.” The very allegation that some practices are “traditional” “has formed one of the most significant pillars of historical capitalism, institutional racism” (78); “racism was the mode by which various segments of the work-force within the same economic structure were constrained to relate to each other,” he goes on, “racism was the ideological justification for the hierarchization of the work-force and its unequal distribution of reward.”

Wallerstein uses the past tense in these formulations because he is discussing the historical formation of racial discrimination, especially when racial categorizations were explicit and legal; he is not suggesting that racism does not still exist. But because “in the past fifty to one hundred years, it has been under sharp attack” (80), a complementary ideology has moved to center stage, namely what Wallerstein calls “universalism.” Belief in universalism “has been the keystone of the ideological arch of historical capitalism” (81).

By universalism Wallerstein in part means “pressures at the level of culture” to create and enforce norms around a single model of culture and cultural progress, via a “complex of processes we sometimes label ‘westernization,’ or even more arrogantly ‘modernization’” (82) and which includes phenomena like “Christian proselytization; the imposition of European language; instruction in specific technologies and mores; changes in legal codes.”

The process of modernization, Wallerstein writes, “required the creation of a world bourgeois cultural framework that could be grafted onto ‘national’ variations. This was particularly important in terms of science and technology, but also in the realm of political ideas and the social sciences” (83). Thus the

concept of a neutral ‘universal’ culture to which the cadres of the world division of labor would be ‘assimilated’ (the passive voice being important here) hence came to serve as one of the pillars of the world-system as it historically evolved. The exaltation of progress, and later of ‘modernization,’ summarized this set of ideas, which served less as true norms of social action than as status-symbols of obeisance and of participation in the world’s upper strata. The break from the supposedly culturally-narrow religious bases of knowledge in favor of supposedly trans-cultural scientific bases of knowledge served as the self-justification of a particular pernicious form of cultural imperialism.

The universalism of scientific culture “lent itself to the concept known today as ‘meritocracy’” (84), a “framework within which individual mobility was possible without threatening hierarchical work-force allocation. On the contrary, meritocracy reinforced hierarchy” (85).  “The great emphasis on the rationality of scientific activity,” he writes, “was the mask of the irrationality of endless accumulation.”

While “universalism was offered to the world as a gift of the powerful to the weak,” “the gift itself harbored racism, for it gave the recipients two choices: accept the gift, thereby acknowledging that one was slow on the hierarchy of received wisdom; refuse the gift, thereby denying oneself weapons that could reverse the unequal real power situation.”

There is much more to Wallerstein’s compact and dense argument, including many important reflections on the profoundly ambivalent relationship of technological progress and cultural nationalism to socialism, and I recommend the book in its entirety. But I am primarily interested here in the consequences of Wallerstein’s work for understanding the deployment of the concept of “traditional” in contemporary technological discourse. In my opinion, “traditional” is a word, and a concept, that should be avoided in thoughtful work about technology, economics, and “progress,” as it is an almost entirely ideological label, and even more than the “slur” I called it earlier. Indeed, it is not merely a label: it is an ideological lever, a tool used to organize the world so as to maximize power for those doing the labeling, to disempower the lives and cultures of those to whom the label is applied, and to make them available for resource exploitation.

Work on this piece benefited greatly from conversations with Tressie McMillan Cottom and Audrey Watters.

Next: “traditional” in vivo

Posted in definitions that matter, digital humanities, rhetoric of computation, theory

The Politics of Bitcoin: Expanded Bibliography with Live Links

Production constraints and editorial guidelines required The Politics of Bitcoin, in both its print and electronic versions, to include only the base URLs of online materials referenced in the book, and even in the electronic version these are not live links. In addition, space constraints meant that some work valuable to me in composing the book had to be cut. What follows is a fuller bibliography of the works referenced in the book, or that I would like to have referenced, complete with working URLs to online materials.

Politics of Bitcoin

Posted in bitcoin, cyberlibertarianism

Trump, Clinton, and the Electoral Politics of Bitcoin

My new book, The Politics of Bitcoin, is not directly about electoral politics, but rather the political and political-economic theories that inform the development of Bitcoin and its underlying blockchain software. My argument does not require that there be direct connections between promoting Bitcoin and supporting one candidate or party or another.

Rather, what concerns me about Bitcoin is how it contributes to the spread of deeply right-wing ideas in economics and political philosophy without those right-wing associations ever being made explicit. Call it “moving the Overton window to the right” (although I find the concept of the “Overton window” troubling, not least for its own origins on the political right), especially along axes that may not even be altogether legible to many in the general public. Many people have heard of Bitcoin and the blockchain as technologies that promote “freedom” and “democratization” and resist interference by “central authorities”; many fewer understand what those words mean in relation to Bitcoin and the blockchain, where they are used almost identically to the way extremists like Alex Jones and the John Birch Society use them.

Nevertheless, these foundational politics do at times intersect with ordinary electoral politics. Though this isn’t really what The Politics of Bitcoin is about, when people in social media saw the title they quickly presumed that was what I meant, and some of those comments prompted me to reflect a bit on how the politics of Bitcoin and the blockchain are intersecting with the current US presidential election.

* * *

First, the GOP. A Bitcoin supporter responded to some positive comments about the book by others on Twitter by writing:

There are several ways this comment strikes me as symptomatic. First, it tries to manage the narrative—defining the critique I’m offering, despite the fact that the tweet’s writer admits to not having read the book—by suggesting, as the book does not, that right-wing ideologues like Trump directly promote Bitcoin, or vice versa. Second, it offers the very familiar story, promulgated by Google and others, that we need to be very worried right now about a superpowerful AI, a story that is itself the product of what thinkers like Dale Carrico and I consider an already profoundly conservative discourse, for reasons I won’t go into here.

The Trump comment is particularly interesting. There is certainly a fair amount of support for Trump among Bitcoin enthusiasts, though I’m not aware of any polling that would let us put numbers to it. But it is pretty funny to be told that I shouldn’t be worried about Trump right now, because I am very worried, and I think anyone with a remote interest in politics and—to put a fine point on it—the fate of democracy itself should be worried about Trump. And to the degree that Bitcoin helps to spread the right-wing economic ideology that my book is really about, I do think that there are connections between Bitcoin and Trump. Of course Bitcoin didn’t cause Trump—but the kinds of false, angry, other-targeting ideologies on which the Trump phenomenon depends can readily be found in all the kinds of online communities that create and promote the frightening range of right-wing political action we see everywhere today. We should be very worried about Trump, and we should be worried about how Bitcoin and other parts of online discourse feed the hate and studied ignorance that make so many people support him.

This site might be a parody but I don’t think it is.

It’s also fascinating that, as we get down to the wire, Trump is sounding more and more like his ardent supporter Alex Jones, propagating the same falsehoods about “global financial powers” that we see in Bitcoin discourse (while being, not un-ironically, an incredibly wealthy person who made most of his money by cheating the system). In an October 13 speech in West Palm Beach, Florida, Trump stated that “Hillary Clinton meets in secret with international banks to plot the destruction of U.S. sovereignty in order to enrich these global financial powers, her special interest friends, and her donors.”

* * *

So that’s the ostensibly mainstream right. What about the mainstream left? Here the story is even more interesting. Daniel Latorre, a Twitter friend and civic technologist, tweeted a pair of responses, the second of which links to an excellent piece Latorre wrote in 2015 called “Why Our Tech Talk Needs A Values Talk.”

Dan pointed me to video of the “Connectivity” session at the Clinton Global Initiative (CGI) 2016 conference (yes, the annual conference sponsored by the famous foundation), where around minute 32 two speakers appear talking about the blockchain. The first is Jamie Smith, Global Chief Communications Officer of the Bitfury Group, “your leading full service Blockchain technology company,” who gives a very brief introduction that is full of some very serious imploring of the audience, quite a few buzzwords that don’t really seem to go together, and graphics like this one:

blockchain transformation CGI

It is really an insult to the intelligence of the audience of a charitable organization to distribute venture-capital promotional materials like these as if they mean something concrete and beneficent—to say nothing of the fact that the putative top benefit of the blockchain (distributed control of a verifiable ledger that all users can examine) has not even been mooted for phones of any sort, let alone inexpensive phones, so that whatever Smith is advertising here is at best a derivative of blockchain technology.

Like so many Bitcoin and blockchain promoters (and, to be fair, digital utopians and salespeople everywhere) Smith, too, engages in some serious management of the narrative via rhetorical sleight-of-hand: “the missing piece of the internet,” “the blockchain is the most transformational technology since the internet,” “without going through a trusted emissary.” Words like these are deployed to mystify, or to mislead, or both, but not to explain.

Managing the narrative: Smith says, “I can see why you think that [Bitcoin isn’t promising] because the coverage has not been great.” That is: because the coverage has in part accurately focused on the most popular uses of Bitcoin in Dark Web markets for illegal products, like the long-shuttered Silk Road, and on the almost shocking frequency with which individuals lose the money they put into Bitcoin—which would be shocking even if one of the major advertised benefits of Bitcoin weren’t its supposed superior safety versus other forms of payment.

It’s worth dwelling on one statement Smith makes: “the Bitcoin Blockchain has never been hacked.”

Really? Remember that her talk is focused on the blockchain, not Bitcoin per se. But are blockchains unhackable? Does the CGI audience notice that the vital word in that construction is not “blockchain” but “Bitcoin”? Because other blockchain systems most certainly have been hacked—most famously TheDAO, the first high-profile “autonomous” “smart contract” system, built on the Ethereum blockchain, which was very publicly drained of funds no more than four months ago.

And of course, “hack” is used in a particular technical sense in the talk, since, as I discuss in the book, putting money into Bitcoin is one of the riskiest things a person can do, both because of Bitcoin’s own wild volatility and because of the ease with which Bitcoin exchanges can be and have been repeatedly hacked—by some estimates up to a third of all exchanges—with millions of dollars vanishing into thin air, or more often into the pockets of scam artists.
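To see what that narrow sense of “never been hacked” actually amounts to, here is a minimal sketch in Python of the one property a hash-chained ledger does guarantee: retroactive edits to recorded transactions are detectable. This is a toy illustration under stated assumptions, not Bitcoin’s actual implementation (which also involves Merkle trees, proof-of-work, and a peer-to-peer network), and all the names in it are invented for the example.

import hashlib
import json

def block_hash(prev_hash, transactions):
    # Hash a block's transactions together with the previous block's hash.
    payload = json.dumps({"prev": prev_hash, "txs": transactions},
                         sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def build_chain(batches):
    # Link each block to its predecessor by embedding the predecessor's hash.
    chain, prev = [], "0" * 64
    for txs in batches:
        h = block_hash(prev, txs)
        chain.append({"prev": prev, "txs": txs, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    # Recompute every hash; any edit to past transactions breaks the links.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["txs"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([["alice pays bob 5"], ["bob pays carol 2"]])
assert verify_chain(chain)
chain[0]["txs"][0] = "alice pays bob 500"  # retroactive tampering
assert not verify_chain(chain)             # immediately detectable

Nothing in those few lines protects a password database, a web front end, or a customer’s deposits held by an exchange, and it is in exactly those places that the actual thefts have occurred.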

This is part of how political ideology gets promulgated—as opposed to actual political work getting done. Wildly contradictory sentiments are offered with outsize passion, imploring audiences to take action and support whatever scheme the speaker is suggesting, but not actually to research the question for themselves, not to ask whether what the speaker is promising actually makes sense.

* * *

Yet Smith is only the first speaker. At the end of her talk she introduces arguably the star of the blockchain panel, Peruvian economist Hernando de Soto.

Hernando de Soto.

Hernando de Soto, one of the world’s biggest blockchain promoters.

Hernando de Soto, one of the chief architects of the actual economic plans critics like Philip Mirowski call neoliberalism. And this is core, right-wing neoliberalism, meaning direct involvement with the most poisonous and influential figures and institutions of neoliberalism—the Mont Pelerin Society, Friedrich Hayek, Milton Friedman, and many others—not simply the varieties of “outer shell” neoliberalism that do not always even know their own name. This is far-right, world-dominating economics.

Hernando de Soto. Vocal opponent of the highest-profile left economist of the last decade, Thomas Piketty. Author of two books: The Other Path: The Economic Answer to Terrorism (1989), which argues that the solution to the problems that created Peru’s Shining Path lay in entrepreneurship and deregulation, and The Mystery of Capital (2000), described as “an elaborate smokescreen to hide the uglier truth” that corporations and wealthy individuals “run [developing] countries for the maximum extractive benefit of the west,” a condition de Soto’s “solutions” may exacerbate far more than ameliorate.

Hernando de Soto. Who talks glowingly about “reglobalizing the world.” Who lumps together ISIS and progressive anti-globalization protestors (though he is careful not to say they are the same, after having lumped them together in the first place). Who “spins the Arab Spring not as a populist opposition to dictators (most of whom are backed by the lynchpin of capitalism, the United States), but a scrappy revolt of entrepreneurs against state interference in commerce.” Who received the 2004 Milton Friedman Prize for Advancing Liberty. Whose work in Peru was “the first and most successful outcome” (Mitchell 2009, 396) of the work of the Atlas Economic Research Foundation, which is not just ideologically but historically directly connected to Hayek, the Cato Institute, and the Mont Pelerin Society.

Being promoted at the charitable foundation of the Democratic candidate for President.

Under the name of blockchain.

Without anybody standing up and saying: “What is the ‘Friedrich Hayek of Latin America’ doing speaking for the Democratic candidate for president? And why is the Democratic party promoting him, without even noting who he is and where his ideas come from?”

The product de Soto says he is making is one that uses the blockchain—the “public blockchain,” he and Bitfury call it, although the only current candidate for that is Bitcoin, and he does not explain whether or how the service he describes could run on the Bitcoin blockchain, or who would host the “public blockchain” he talks about. The product is one that will help to fulfill de Soto’s lifelong plan to record property rights for the poorest people in the world. (See “Hillary Clinton and the Blockchain” for what reads an awful lot like a sales presentation of this and other blockchain projects the Clinton campaign has under consideration.)
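To make concrete what recording a property title on the “public blockchain” could even mean in practice, here is a hypothetical sketch of one commonly proposed route; neither de Soto nor Bitfury explains their mechanism, so none of this should be read as their actual design. On Bitcoin itself, the usual proposal is to embed a cryptographic digest of a title document in a transaction, for instance via an OP_RETURN output, which can carry only a small amount of arbitrary data (on the order of 80 bytes):

import hashlib

def title_digest(deed_text):
    # Only this 32-byte digest could go on-chain; the deed itself could not.
    return hashlib.sha256(deed_text.encode("utf-8")).digest()

# Hypothetical deed text, invented for illustration only.
deed = "Parcel 1142, District X, registered to owner Y, 2016-10-01"
digest = title_digest(deed)

# A standard OP_RETURN output carries roughly 80 bytes at most, so a
# 32-byte SHA-256 digest fits, but nothing like a full deed or registry does.
assert len(digest) == 32

Notice how little work the blockchain does in such a scheme: it timestamps a digest, nothing more. The registry mapping digests to parcels and persons, the authority that decides whose claim is valid, and the enforcement of that claim all remain exactly where they always were, with conventional (which is to say, state) institutions, the very institutions the promoters claim to be routing around.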

That plan sounds noble, unless you are familiar with political science and political economy, in which the focus on property rights is precisely the hallmark of rightist politics going all the way back to what is now called the “classical liberalism” associated with Locke (although that term is now often used to describe what is essentially right libertarianism, and the relationship between these doctrines is profoundly fraught). Then it sounds like another version of the neoliberal, neo-colonial extractive development plans that have, in the opinion of many anti-globalization activists, caused significant destruction to lives and property the world over, and contributed significantly to the very poverty they claim to be alleviating (Gravois 2005, Johnson 2016, Mitchell 2009).

Here’s a bit on de Soto’s work from a piece by journalists Mark Ames and Yasha Levine (2013):

De Soto’s pitch essentially comes down to this: Give the poor masses a legal “stake” in whatever meager property they live in, and that will “unleash” their inner entrepreneurial spirit and all the national “hidden capital” lying dormant beneath their shanty floors. De Soto claimed that if the poor living in Lima’s vast shantytowns were given legal title ownership over their shacks, they could then use that legal title as collateral to take out microfinance loans, which would then be used to launch their micro-entrepreneurial careers. Newly-created property holders would also have a “stake” in the ruling political and economic system. It’s the sort of cant that makes perfect sense to the Davos set (where de Soto is a star) but that has absolutely zero relevance to problems of entrenched poverty around the world.

To be clear, de Soto could speak, and has spoken, to audiences from the center-left—the “left neoliberals,” the Tony Blairs and Bill and Hillary Clintons of the world—for a long time. Bill Clinton himself has called de Soto “the world’s greatest living economist,” and yet there he is, the recipient of equally fulsome praise from figures like George H. W. Bush and Ronald Reagan. I don’t argue that Bitcoin is unique, or uniquely destructive; on the contrary, part of the point is how easily and how transparently it fits into the rightward shift we see in so many places today.

Still, when you listen to de Soto’s speech, it’s remarkable to hear him say things like “Western imperialism did a good job,” or ask whether “blockchain can save globalization”—and to note that nobody in the audience raises any objection to this sort of thing. You expect that on the right, not the left.

But, welcome to blockchain.

So yes, the politics of blockchain should make you worried about Trump, and the people who support Trump, and the rightward shift of all electoral politics today.


Posted in bitcoin, cyberlibertarianism, rhetoric of computation

“Neoliberalism” Has Two Meanings

The word “neoliberalism” comes up frequently in discussions on and of digital media and politics. Use of the term is frequently derided by actors across the political spectrum, especially but not only by those at whom the term has been directed. (Nobody wants to be called a neoliberal, and everyone always denies being one, much as everyone denies being a racist or a misogynist: it is today an analytical term applied by those who oppose what it names, although it was used for self-identification in the past.) Sometimes the derision indicates genuine disagreement, but even more frequently it is part of an outright denial that there is any such thing as “neoliberalism,” or a claim that the meaning of the term is so fuzzy as to make its application pointless.

There are many causes for this, but one of them is fairly straightforward once identified: neoliberalism has two meanings. Of course it has many more than two, but it has two important, current, distinct, somewhat related meanings, and they get invoked in close enough proximity to each other to cause serious confusion. (The correct title for this post should really be “‘Neoliberalism’ Has (at Least) Two Meanings,” but the simpler version sounds better.) The existence of these two meanings may even explain some of the denials that the term means anything at all.

Meaning 1: “Neoliberal” as a modifier of “liberal” in the largely recent political sense of US/UK party politics, where the opposite is “conservative.” In this sense, the term is typically applied to people even more than political programs or dogma. Examples of “neoliberals” in this sense: Tony Blair, Bill Clinton, (arguably) Barack Obama, (arguably) Hillary Clinton.

This version of “neoliberal” is meant to identify a tendency among the political left to accommodate policies, especially economic policies, associated with the right, while publicly proclaiming identification with the left. Sometimes this is called “left neoliberalism.”

The best recent piece I know on this version of neoliberalism is Corey Robin’s “The First Neoliberals,” Jacobin (April 28, 2016), which includes pointers to the (brief) period when the term was adopted by those who described themselves that way:

[Neoliberalism is] the name that a small group of journalists, intellectuals, and politicians on the Left gave to themselves in the late 1970s in order to register their distance from the traditional liberalism of the New Deal and the Great Society.

The original neoliberals included, among others, Michael Kinsley, Charles Peters, James Fallows, Nicholas Lemann, Bill Bradley, Bruce Babbitt, Gary Hart, and Paul Tsongas. Sometimes called “Atari Democrats,” these were the men — and they were almost all men — who helped to remake American liberalism into neoliberalism, culminating in the election of Bill Clinton in 1992.

These were the men who made Jonathan Chait what he is today. Chait, after all, would recoil in horror at the policies and programs of mid-century liberals like Walter Reuther or John Kenneth Galbraith or even Arthur Schlesinger, who claimed that “class conflict is essential if freedom is to be preserved, because it is the only barrier against class domination.” We know this because he so resolutely opposes the more tepid versions of that liberalism that we see in the Sanders campaign.

It’s precisely the distance between that lost world of twentieth century American labor-liberalism and contemporary liberals like Chait that the phrase “neoliberalism” is meant, in part, to register.

We can see that distance first declared, and declared most clearly, in Charles Peters’s famous “A Neoliberal’s Manifesto,” which Tim Barker reminded me of last night. Peters was the founder and editor of the Washington Monthly, and in many ways the eminence grise of the neoliberal movement.

It’s important to say that, while this usage of the term may well be the one that is frequently applied in social media, and it’s certainly the one that gets mentioned most often in the context of electoral politics (see, for example, Nina Illingworth’s hilarious “Neoliberal Wall of Shame”), it isn’t really the one that scholars tend to use.

This usage came up recently in the controversy surrounding the #WeAreTheLeft publicity campaign, in which critics from the left, with whom I generally agree in this regard, criticized the organizers of that campaign as neoliberals: see Jeff Kunzler, “Dear #WeAreTheLeft, You Are Not the Left: The Rot of Liberal White Supremacy,” Medium (July 13, 2016), and Meghan Murphy, “#WeAreTheLeft: The Day Identity Politics Killed Identity Politics,” Feminist Current (July 14, 2016). Not surprisingly, this critique was met with the characteristic denial that the word means anything.

Meaning 2: “Neoliberal” as a modifier of “liberal” in the economic sense of the word “liberal,” as used for example in “liberal trade policies.” Also understood as a modifier of the liberalism associated with philosophers like Locke and Mill, which is itself frequently taken to be largely economic in nature. (The source I like best on the relationship between economic liberalism and rightist political programs is Ishay Landa’s The Apprentice’s Sorcerer: Liberal Tradition and Fascism, Studies in Critical Social Sciences, 2012.)

This is a movement of the political right, not of the left; its opposite would be something like “protectionism” or “planned economies” or even “socialism,” although in caricatured senses of those terms. The lineage here begins with Hayek and von Mises and runs through the Mont Pelerin Society and Chicago School economics. Philip Mirowski is the go-to theorist of this movement. Examples: Hayek, Mises, self-identified right-wing “libertarians” like the Koch brothers, Murray Rothbard, Lew Rockwell, and Ron and Rand Paul, and also hard-right politicians like Reagan and Thatcher, Scott Walker and Ted Cruz. These figures are better understood not with “neoliberal” as a term of personal identification (that is, “Ted Cruz is a neoliberal” doesn’t really mean much), but as advocating neoliberal policies or providing foundational theory for them. In contrast to the first meaning, this is sometimes referred to as “right neoliberalism.”

This is what scholars like David Harvey, Wendy Brown, Aihwa Ong, Will Davies, and Philip Mirowski mean when they talk about neoliberalism, even if the details of their usages of the term differ slightly. It’s the usage most often employed by scholars across the board. It’s the meaning the Wikipedia entry on neoliberalism currently invokes, barely mentioning the other.

When Daniel Allington, Sarah Brouillette, and I wrote a piece called “Neoliberal Tools (and Archives): A Political History of the Digital Humanities” in the Los Angeles Review of Books in May, it was this meaning we had in mind.

Neoliberalism in this sense is often understood, not entirely inaccurately, as free-market fundamentalism. But as Mirowski in particular explains it (Davies is also very good on this), neoliberalism has just as much to do with taking the reins of state power so as to favor commercial interests while publicly disparaging the idea of governmental power (or at least of democratic control of state power). Although this is most clearly explained in his book Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown (Verso, 2013), the summary he provides in “The Thirteen Commandments of Neoliberalism,” The Utopian (June 19, 2013), explains a lot:

Although many secondhand purveyors of ideas on the right might wish to crow that “market freedom” promotes their own brand of religious righteousness, or maybe even the converse, it nonetheless debases comprehension to conflate the two by disparaging both as “fundamentalism”—a sneer unfortunately becoming commonplace on the left. It seems very neat and tidy to assert that neoliberals operate in a modus operandi on a par with religious fundamentalists: just slam The Road to Serfdom (or if you are really Low-to-No Church, Atlas Shrugged) on the table along with the King James Bible, and then profess to have unmediated personal access to the original true meaning of the only (two) book(s) you’ll ever need to read in your lifetime. Counterpoising morally confused evangelicals with the reality-based community may seem tempting to some; but it dulls serious thought. It may sometimes feel that a certain market-inflected personalized version of Salvation has become more prevalent in Western societies, but that turns out to be very far removed from the actual content of the neoliberal program.

Neoliberalism does not impart any dose of Old Time Religion. Not only is there no ur-text of neoliberalism; the neoliberals have not themselves opted to retreat into obscurantism, however much it may seem that some of their fellow travelers may have done so. You won’t often catch them wondering, “What Would Hayek Do?” Instead they developed an intricately linked set of overlapping propositions over time — from Ludwig Erhard’s “social market economy” to Herbert Giersch’s cosmopolitan individualism, from Milton Friedman’s “monetarism” to the rational-expectations hypothesis, from Hayek’s “spontaneous order” to James Buchanan’s constitutional order, from Gary Becker’s “human capital” to Steven Levitt’s “freakonomics,” from the Heartland Institute’s climate denialism to the American Enterprise Institute’s geo-engineering project, and, most appositely, from Hayek’s “socialist calculation controversy” to Chicago’s efficient-markets hypothesis. Along the way they have lightly sloughed off many prior classical liberal doctrines — for instance, opposition to corporate monopoly power as politically debilitating, or skepticism over strong intellectual property, or disparaging finance as an intrinsic source of macroeconomic disturbance — without coming clean on their reversals.

George Monbiot’s “Neoliberalism—The Ideology at the Root of All Our Problems,” The Guardian (April 15, 2016), offers an excellent, if slightly less scholarly, primer on the history of the term and the best-known instances of neoliberal policies, though he doesn’t include the crucial Mirowski/Davies insight about neoliberalism’s capture of state power:

As Naomi Klein documents in The Shock Doctrine, neoliberal theorists advocated the use of crises to impose unpopular policies while people were distracted: for example, in the aftermath of Pinochet’s coup, the Iraq war and Hurricane Katrina, which Friedman described as “an opportunity to radically reform the educational system” in New Orleans.

Where neoliberal policies cannot be imposed domestically, they are imposed internationally, through trade treaties incorporating “investor-state dispute settlement”: offshore tribunals in which corporations can press for the removal of social and environmental protections. When parliaments have voted to restrict sales of cigarettes, protect water supplies from mining companies, freeze energy bills or prevent pharmaceutical firms from ripping off the state, corporations have sued, often successfully. Democracy is reduced to theatre.

Monbiot also points out that this usage too comes from the promoters of the doctrine themselves: “In 1951, Friedman was happy to describe himself as a neoliberal. But soon after that, the term began to disappear. Stranger still, even as the ideology became crisper and the movement more coherent, the lost name was not replaced by any common alternative.” Mirowski and his colleagues explain this history in much more detail.

It’s important to note that, as Monbiot suggests, but as Manuela Cadelli, President of the Magistrates’ Union of Belgium, says outright, “Neoliberalism Is a Form of Fascism” (French original). If one sees the corporate-State nexus as a critical component of Fascism as a political-economic system, it’s not at all hard to see the connection (Landa’s book is the key source on this).


***

Is there a relationship between the two meanings? Arguably the most obvious one is that the Mont Pelerin Society’s long-term plan to turn the entire country (and frankly the entire world) toward a rightist solidification of corporate power can’t help but entail the capitulation of the left to those goals. We certainly read and hear much less of the Clintons and Blair bashing democratic governance as an ideal, even though their actions have tended toward the Mont Pelerin program. No doubt these are prongs of the same movement at some level, but they have very different profiles and effects in the world at large.

In a recent piece in Counterpunch, “The Time is Now: To Defeat Both Trump and Clintonian Neoliberalism” (July 19, 2016), Mark Lewis Taylor writes: “Trumpian authoritarianism and Clintonian neoliberalism are actually co-partners in a joint system of rule. Trump’s authoritarianism is often a hidden bitter fruit of Clintonian neoliberalism. Social movements for democracy must fight them both together.”

It is interesting to note that the term “neoconservative” as it is typically invoked (heavy reliance on military power to pursue what are largely economic objectives) is not properly speaking in opposition to either meaning of “neoliberal”; figures like GW Bush and Cheney easily fall under the second definition of neoliberal as well as the typical definition of neoconservative. Tony Blair looks a lot like a neoliberal in the first sense and also a neocon in the same Bush-Cheney sense, although perhaps slightly less self-starting (maybe).

Nothing I’m saying here is new. Most—though unfortunately not all—academic studies of neoliberalism note this issue, often using the right/left terminology. But critiques of the use of the word often appear to blur the two meanings, using the fact that some figures fall into one group or the other as a means of disqualifying use of the term altogether. And as Mirowski among others says, the argument that “neoliberalism does not exist” appears to do important work in solidifying the Mont Pelerin program.

When I write, I almost always intend the second meaning, but I recognize that I haven’t always been as clear about that as I might have been, even if I’ve been quoting Mirowski in the process. I plan to try to distinguish my uses of these meanings in the future and I can’t help thinking it would be useful if more people did.

Posted in cyberlibertarianism, theory