Do You Oppose Bad Technology, or Democracy?

Calls to Limit the Use of Bad Technologies Only by Law Enforcement and Governments, Largely Via “Ethics” and Self-Regulation, Exacerbate Rather than Ameliorate the Anti-Democratic Harms of Digital Technology

Recently, more of us have started to realize just how destructive digital technologies can be. That’s good. As someone who has been nearly screaming about the topic for over two decades now, I can only say that it’s about time.

Yet one of the most prominent strains of this criticism is one that we should be almost as concerned about. Among other things, it is a big part of what got us here in the first place.

Read the complete piece on Medium

Posted in cyberlibertarianism, privacy, surveillance, uncomputing

The Great White Robot God: Artificial General Intelligence and White Supremacy

It may seem improbable at first glance to think that there might be connections between the pursuit of artificial general intelligence (AGI) and white supremacy. Yet the more you examine the question, the clearer and more disturbing the links get.

Inspired in part by conversations with Chris Gilliard, a Twitter thread by Scoobyliu, ongoing work by Dale Carrico, and some other recent research mentioned below, I decided to try to see where the threads might lead.

This is a brief think piece intended to stimulate additional reflections. It is not meant as a personal indictment of those who pursue AGI (although it is not meant to exonerate them either), but instead as a structural analysis that starts from an acknowledgment of the ways that race and whiteness work in our society, and how they connect to other phenomena that may seem distant from them. In the case of AGI, there is an odd persistence of discourse that seems far in excess of what science allows, and those most captivated by that excess are often the same people captivated by excesses about race. Part of this is visible through the unusual amount of overlap between AGI promoters and those who believe in a strong correlation between what they call “race” and what they call “IQ.” I suspect it would be possible to read through a lot of the media and texts about AGI and find many marks of a commitment to white supremacy that promoters do not recognize in themselves…

Read the complete post at Medium

Posted in cyberlibertarianism, rhetoric of computation, theory

How to Prove Bitcoin Isn’t Swarming With Racism

A few weeks ago we got a nice object lesson in the rhetorical strategies the right uses to rebut arguments about itself. Whether these are deliberate trolling strategies or just ones that propagate among the right in a more-or-less organic fashion, they are fascinating to observe. My point in collecting them here, beyond the bemusement seeing them together may produce, is to further the general project of understanding those strategies: to think about why the right relies on them, and how and why they work to whatever extent they do work.

On Nov 24, the NYU economist and long-time financial observer Nouriel Roubini (@Nouriel) retweeted a comment from Dogecoin developer Jackson Palmer (@ummjackson), one of the sanest people in the cryptocurrency space, who made the pretty unremarkable observation that this tweet from major cryptocurrency guru John McAfee is, at best, problematic. It’s also not all that unusual:

Roubini went further (and tagged me and mentioned my book, which is how I got wind of this particular set of comments):

Roubini’s retweet prompted responses that are supposed to show that Bitcoin is not racist (whatever exactly that is supposed to mean; as I’ve pointed out many times, this is not actually the thesis of my book The Politics of Bitcoin, despite what definitely non-rightist and non-racist commentators may say, having fully informed themselves of the book’s title), that the people around Bitcoin are not racist, and that Bitcoin is definitely not aligned with the political right.

However, to the rest of us, these responses are… curious. To anyone to the left of #MAGA, they might seem to be proving Roubini’s point. The fact that people can think they rebut that point is part of a larger phenomenon of what I call “right reaction” that deserves separate treatment.

Say “I Have Friends Who Are…” (or “He Has Friends Who Are…”)

[tweet screenshots]

Say It’s Not Racist to Be Racist Because It’s Correct to Be Racist

[tweet screenshots]

Personally Insult the Speaker

[tweet screenshots]

Say the Speaker Is Racist for Pointing Out Racism

[tweet screenshots]

Be Incoherent and Angry

[tweet screenshots]

Accuse the Speaker of Being Incoherent and Angry

[tweet screenshots]
Posted in "social media", bitcoin, cyberlibertarianism, rhetoric of computation

We Don’t Know What ‘Personal Data’ Means

It’s Not Just What We Tell Them. It’s What They Infer.

Many of us seem to think that “personal data” is a straightforward concept.  In discussions about Facebook, Cambridge Analytica, GDPR, and the rest of the data-drenched world we live in now, we proceed from the assumption that personal data means something like “data about myself that I provide to a platform.” Personal data means my birthdate, my gender, my family connections, my political affiliations. It is this data that needs special protection and that we should be particularly concerned about providing to online services.

This is partly true, but in another way it can be seriously misleading. Further, its misleading aspects trickle down into too much of our thinking about how we can and should protect our personal or private data, and more broadly our political institutions. The fact is that personal data must be understood as a much larger and even more invasive class of information than the straightforward items we might expect.

A key to understanding this can be found in a 2014 report by Martin Abrams, Executive Director of the Information Accountability Foundation (a somewhat pro-regulation industry think tank), called “The Origins of Personal Data and Its Implications for Governance.” Abrams offers a fairly straightforward description of four different types of personal data: provided data, which “originates via direct actions taken by the individual in which he or she is fully aware of actions that led to the data origination”; observed data, which is “simply what is observed and recorded,” a category which includes an enormous range of data points: “one may observe where the individual came from, what he or she looks at, how often he or she looks at it, and even the length of pauses”; derived data, “derived in a fairly mechanical fashion from other data and becomes a new data element related to the individual”; and inferred data, “the product of a probability-based analytic process.”

To this list we’d need to add at least two more categories: anonymized data and aggregate data. Anonymized data is data that has the identifying information, for example a person’s name, stripped from it. Unlike the other categories, anonymized and pseudonymized data are addressed directly by the GDPR, which notes in Recital 26 that the “regulation does not therefore concern the processing of such anonymous information”; this might be more comforting if it were not clear that “true data anonymization is an extremely high bar, and data controllers often fall short of actually anonymizing data.”

Aggregate data, as I’m using the term here, refers to data that is collected at the level of the group, but does not allow drilling down to specific individuals. In both cases, the lack of direct personal identification may not interfere with the ability to target individuals, even if they can’t necessarily be targeted by name (although in the case of anonymized data, a major concern is that advances in technology all too often make it very possible to de-anonymize what had once been anonymized). GDPR’s impact on aggregate data is one of the areas of the regulation that remains unclear.

To understand how data that may on the surface seem “impersonal” can in fact be highly targeted at us as individuals, consider, for example, one form of analysis found all over the Facebook/Cambridge Analytica story, the so-called Big Five personality traits, which sometimes go by the acronym OCEAN: openness, conscientiousness, extroversion, agreeableness, and neuroticism. Researchers and advertisers focus on the Big Five because they appear to give marketers and behavioral manipulators particularly strong tools with which to target us.

A recent New York Times story takes some examples from the research of Michael Kosinski, a Stanford University researcher whose work is often placed at the center of these conversations, and which may have been used by Cambridge Analytica. While Kosinski might be a particularly good salesperson for the techniques he employs, we do not have to accept at face value that everything he says is correct to see that the general methods he uses are widely employed, and seem to have significant validity.

Kosinski provided the Times with inferences he made about individuals’ OCEAN scores based on nothing more than Facebook “like” data. He generated these inferences by taking a group of experimental participants, matching their “like” data against their measured OCEAN scores, and then using machine learning to infer probabilities about the associations between likes and OCEAN ranks. Kosinski has in some places claimed even more for this technique than OCEAN rankings; in a famous segment in Jamie Bartlett’s recent Secrets of Silicon Valley documentary, Kosinski correctly infers Bartlett’s religious background from his Facebook likes. This is also exactly the kind of aggregate, inferential data that the infamous “This Is Your Digital Life” app Facebook has said Cambridge Analytica used to gather information about not just individuals but their friends.
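The procedure described above can be caricatured in a few lines of code. Everything in this sketch (the pages, the panel members, their openness scores, the simple averaging scheme) is invented for illustration; real systems use like-matrices covering millions of users and proper statistical models, but the shape of the inference is the same.

```python
# Toy sketch of like-based trait inference (all data invented).
# Step 1: from a panel whose openness was measured directly, learn how
#         strongly each "like" is associated with high openness.
# Step 2: score a new user, never assessed directly, from likes alone.

panel = {
    # user: (set of pages liked, measured openness on a 0-1 scale)
    "user_a": ({"clockwork_orange", "hiking"}, 0.9),
    "user_b": ({"clockwork_orange"}, 0.7),
    "user_c": ({"reality_tv"}, 0.3),
    "user_d": ({"reality_tv", "hiking"}, 0.4),
}

def like_weights(panel):
    """Associate each page with the mean openness of its known fans."""
    scores = {}
    for likes, openness in panel.values():
        for page in likes:
            scores.setdefault(page, []).append(openness)
    return {page: sum(vals) / len(vals) for page, vals in scores.items()}

def predict_openness(likes, weights):
    """Estimate a new user's openness as the mean weight of their likes."""
    known = [weights[page] for page in likes if page in weights]
    return sum(known) / len(known) if known else 0.5

weights = like_weights(panel)
print(round(predict_openness({"clockwork_orange", "hiking"}, weights), 3))
```

Note that nothing in the model asks why fans of a given page tend to score high or low on a trait; the association is recorded purely statistically.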

Even in the examples Kosinski gives, it is tempting to draw causal relationships between the proffered data and the inferred data. Perhaps people who like A Clockwork Orange are particularly interested in alternate versions of reality, so perhaps this makes them “open” to new experiences; perhaps Marilyn Manson gives off visual or aural cues for being neurotic. This kind of reasoning is just the mistake we need to avoid. These causal relationships may even exist, but it does not matter, and that is not what Kosinski’s techniques aim to discover. The software that decides that more “open” people like A Clockwork Orange is not asking why that is the case; it just is.

The fact that some of these data points look like they have causal relationships to the inferred personality traits is misleading. This was simply a body of data with many points (that is, thousands of different things people could “like”), which made it possible to create rich mappings between those likes and the likes of an experimental group of people whose characteristics on the OCEAN scale were well known.

The same kind of comparison can be done with any set of data. It could be done with numbers, colors, the speed of mouse clicks, the timbre of voice, and many, many other kinds of data.

These facts create a huge conundrum for privacy advocates. We might think we have told Facebook that we like A Clockwork Orange, and that that is the end of the story. But what if, by telling them that fact, we have also told them that we are gay, or have cirrhosis of the liver, or always vote against granting zoning permits to large box stores? What if we tell them that not even by something concrete like “liking” a movie, but simply by the speed and direction characteristics of how we use our computer mouse?

It is critical to understand that it does not matter whether this information is accurate at the level of each specific individual. Data collectors know it is not entirely accurate. Again, people mistake the correlation for a linear, causal relationship: “if I like hiking as a hobby, I am conscientious.” They then tend to evaluate that relationship on grounds of whether or not it makes sense to them. But this is a mistake. What is accurate about these techniques involves statistical inference. That inference is something along the lines of “75% of those who report they like hiking rank high on the ‘openness’ scale.” Of course it is wrong in some cases. That does not matter. If an advertiser or political operative wants to affect behavior, they look for triggers that motivate people who score highly on the “openness” score. Then they offer products, services, and manipulative media with similar scores.
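The inference just described reduces to simple counting. The miniature dataset below is invented to match the hiking example in the text; the point is that the estimate holds in aggregate even though individual hikers defy it.

```python
# Minimal sketch of aggregate inference (invented data matching the
# example above): estimate P(high openness | likes hiking) by counting
# co-occurrences. No causal model appears anywhere.

observations = [
    # (likes_hiking, ranks_high_on_openness)
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False), (False, False),
]

def trait_given_signal(data):
    """P(trait | signal), estimated by counting co-occurrences."""
    with_signal = [trait for signal, trait in data if signal]
    return sum(with_signal) / len(with_signal)

print(trait_given_signal(observations))  # 3 of the 4 hikers rank high: 0.75
```

The fourth hiker is a counterexample, and the advertiser does not care: targeting everyone who likes hiking still reaches “open” people 75% of the time.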

That is, they don’t have to know why a point of data implies another point of data. They only have to know that it does, within a certain degree of accuracy. This is why categories like inferential, aggregate, and anonymized data must be of primary concern in understanding what we typically mean by “privacy.”

Research into big data analytics is replete with examples of the kinds of inferential and derived data that we need to understand better. Data scientist and mathematician Cathy O’Neil’s important 2016 book Weapons of Math Destruction includes many examples. Some of her examples make sense, even if they offend pretty much any sense of justice we might have: the use of personality tests to screen job applicants based on inferences made about them rather than anything about their work history or qualifications (105-6), or O’Neil’s own experience building a system to determine a shopper’s likelihood to purchase based on their behavior clicking on various ads (46-7). Others derive inferences from data apparently remote from the subject, such as O’Neil’s citation of a Consumer Reports investigation that found a Florida auto insurer basing insurance rates more heavily on credit scores than on accident records (164-5). In that case, observed data (use of credit) is converted into derived data (the details of the credit report) and then, via big data analysis, converted into inferential data (likelihood of making an insurance claim).

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, a 2018 volume by political scientist and activist Virginia Eubanks, contains many examples of derived and inferential data being used to harm already-vulnerable members of society, such as an algorithm in Allegheny County, Pennsylvania, that attempted to predict which families were likely to be at high risk for child abuse, but which was based on big data analysis of hundreds of variables, many of which have no proven direct relationship to child abuse, and which ended up disproportionately targeting African American families (Chapter 4). In another chapter, Eubanks writes about massive data collection programs in Los Angeles intended to provide housing resources to the homeless, but which end up favoring some individuals over others for reasons that remain opaque to human case workers, and that are not clearly based on the directly relevant observations about clients from which those case workers prefer to make life-changing decisions.

In the Cambridge Analytica case, inferential data appears to play a key role. When David Carroll and I requested our data from the company under UK law, the most interesting part of the results was a table of 10 ranked hot-button political issues. No information was provided about how this data was produced, but it clearly cannot have been provided data, as it is not data I have directly provided to anyone; I have not even thought about these issues in this form, and if the data is correct, much of it is news to me. The data is likely not observed, since that would require there to be a forum in which I had taken actions to indicate the relative importance of these issues to me, and I can’t think of any forum in which I’ve done anything close to that. So that leaves inferential and derived data. Both Carroll and I and the analysts we’ve been working with presume that this data was in fact inferred from some large body of data. (Cambridge Analytica has at some points claimed to hold upwards of 4000 data points on individuals, on which it performs what Kosinski and others call “psychographics”: just the kind of inferential personal data I’ve been talking about, used to determine very specific aspects of an individual’s personality, including their susceptibility to various kinds of behavioral manipulation.) While it is hard to judge the accuracy of the ranked list precisely (in part because we don’t really know how it was meant to be used), overall it seems quite accurate, and thus offers a fairly complete inferred and/or derived political profile of me based on provided and observed data that likely had at best a partial relationship to my political leanings.

CA SCL data

Yes, we should be very concerned about putting direct personal data out onto social media. Obviously, putting “Democrat” or even “#Resist” in a public Twitter profile tells anyone who looks what party we are in. We should be asking hard questions about whether it is wise to allow even that minimal kind of declaration in public, whether it is wise to allow it to be stored in any form, and by whom. But perhaps even more seriously, and much less obviously, we need to be asking who is allowed to process and store information like that, regardless of where they got it, even if they did not get it directly from us.

A side note: academics and activists sometimes protest the inaccessibility of some kinds of data due to the importance of understanding what companies like Facebook are doing with our data. That’s an important conversation to have, but it’s worth noting that both Kosinski and Alexander Kogan, another researcher at the heart of the Cambridge Analytica story, got access to the data they used because they were academics.

In his testimony before the US House of Representatives Energy and Commerce Committee on April 11, 2018, Facebook CEO Mark Zuckerberg offered the following reassurance to Facebook users:

The content that you share, you put there. You can take it down at any time. The information that we collect, you can choose to have us not collect. You can delete any of it, and, of course, you can leave Facebook if you want.

At first glance, this might seem to cover everything users would care about. But read the language closely. The content “users share” and the content that Facebook “collects” name much thinner segments of Facebook’s user data than the words might seem to suggest.

Just taking Zuckerberg’s language literally, “the content you share” sounds like provided data, and “the information that we collect” sounds like some unspecified mix of provided and observed data.

Mark Zuckerberg Data

But what about derived, inferred, and aggregate data?

What this kind of data can do for those who want to manipulate us is unknown, but its potential for harm is too clear to be overlooked. The existing regulations and enforcement agreements imposed on Facebook and other data brokers have proven insufficient. If there is one takeaway from the Cambridge Analytica story and the Facebook hearings and so on, it is that democracies, and that means democratic governments, need to get a handle on these phenomena right away, because the general public does not and cannot know the extent to which giving away apparently “impersonal” data might, in fact, reveal our most intimate secrets.

Further, as a few commentators have noted, Facebook and Google are only the most visible tips of a huge iceberg. The hundreds of data brokers whose entire business consists in selling data about us that we never directly gave them may be even more concerning, in part because their actions are so much more hidden from the public. Companies like Acxiom aggregate, analyze and sell data, both for advertising and for a wide range of other activities that impact us in ways we do not understand nearly well enough, up to and including the “social credit score” that the Chinese government appears to be developing to track and control many aspects of public behavior. Possibly even worse, the data fuels the activities of full-scale surveillance companies like Peter Thiel’s Palantir, with which Mark Zuckerberg declared in his Congressional testimony that he “isn’t that familiar,” despite Thiel being a very visible and outspoken early Facebook investor and mentor to Zuckerberg. Facebook itself has a disturbing interest in the data of people who have not even signed up for the service, which just illustrates its similarity to data brokers like Acxiom.

If Facebook and Google and the data brokers were to say, “you can obtain, and if you choose to, delete, all the data we have about you,” or better yet, “you have to actively opt-in to give us your data and agree to the processing we do with it,” that might go a long way toward addressing the kind of concerns I and others have been raising for a long time about what is happening with surveillance and behavioral manipulation in digital technology. But would that even be enough? Is it clear that data “about” me is all the data that is directly attached to my name, or whatever other unique personal identifier Facebook uses? Would these companies even be able to stay in business if they offered users that much control?

Even the much-vaunted and very important GDPR is not at all as clear as it could be about the different kinds of data.  If we are to rein in the massive invasions of our privacy found in social media, we need to understand much more clearly and specifically what that data is, and what social media companies and data brokers and even academics do with it.

Posted in "social media", privacy, rhetoric of computation | 1 Response

The Terribly Thin Conception of Ethics in Digital Technology

Thanks in part to ongoing revelations about Facebook, there is today a louder discussion than there has been for a while about the need for deep thinking about ethics in the fields of engineering, computer science, and the commercial businesses built out of them. In the Boston Globe, Yonatan Zunger wrote about an “ethics crisis” in computer science.  In The New York Times, Natasha Singer wrote about “tech’s ethical ‘dark side.’”

Chris Gilliard wrote an excellent article in the April 9, 2018 Chronicle of Higher Education, focusing specifically on education technology, titled “How Ed Tech is Exploiting Students.” Since students are particularly affected by academic programs like computer science and electrical engineering, one might imagine and hope that teachers of these subjects would be particularly sensitive to ethical concerns. (Full disclosure: I consider Chris a good friend; he and I have collaborated in the past and intend to do so in the future, and I read an early draft of his Chronicle piece and provided comments on it.)

robot teacher fooled students

image source: YouTube

Gilliard’s concerns, expressed repeatedly in the article, have to do with 1) what “informed consent” means in the context of education technology; 2) the fact that participating in certain technology projects entails that students are, often unwittingly, contributing their labor to projects that benefit someone else (that is, they are working for free); and 3) the fact that the privacy implications of many ed-tech projects are not at all clear to the students:

Predictive analytics, plagiarism-detection software, facial-recognition technology, chatbots — all the things we talk about lately when we talk about ed tech — are built, maintained, and improved by extracting the work of the people who use the technology: the students. In many cases, student labor is rendered invisible (and uncompensated), and student consent is not taken into account. In other words, students often provide the raw material by which ed tech is developed, improved, and instituted, but their agency is for the most part not an issue.

Gilliard gives a couple of examples of ed-tech projects that concern him along these lines. One of them is a project by Prof. Ashok Goel of the Georgia Institute of Technology.

Ashok K. Goel, a professor at the Georgia Institute of Technology, used IBM’s “Watson” technology to test a chatbot teaching assistant on his students for a semester. He told them to email “Jill” with any questions but did not tell them that Jill was a bot.

Gilliard summarizes his concerns about this and other projects as focusing on:

how we think about labor and how we think about consent. Students must be given the choice to participate, and must be fully informed that they are part of an experiment or that their work will be used to improve corporate products.

In an April 11 letter to the Chronicle, Goel objected to being included in Gilliard’s article. Yet rather than rebutting Gilliard’s critique, Goel’s response affirms both its substance and its spirit. In other words, despite claiming to honor the ethical concerns Gilliard raises, Goel seems not to understand them, and to use his lack of understanding as a rebuttal. This reflects, I think, the incredibly thin understanding of ethics that permeates the world of digital technology, especially but not at all only in education technology.

Here are the substantive parts of Goel’s response:

In this project, we collect questions posed by students and answers given by human teaching assistants on the discussion forum of an online course in artificial intelligence. We use this data exclusively for partially automating the task of answering questions in subsequent offerings of the course both to reduce teaching load and to provide prompt answers to student questions anytime anywhere. We deliberately elected not to inform the students in advance the first time we deployed Jill Watson as a virtual teaching assistant because she is also an experiment in constructing human-level AI and we wanted to determine if actual students could detect Jill’s true identity in a live setting. (They could not!)

In subsequent offerings of the AI class over the last two years, we have informed the students at the start of the course that one or more of the teaching assistants are a reincarnation of Jill operating under a pseudonym and revealed the identity of the virtual teaching assistant(s) at the end of the course. The response of the several hundred students who have interacted with Jill’s various reincarnations over two years has been overwhelmingly positive.

In what follows I am going to assume that Goel raised all the issues he wanted to in his letter. It’s possible that he didn’t; the Chronicle maintains a tight word limit on letters. But it is clear that the issues raised in the letter are the primary ones Goel saw in Gilliard’s article and that he thinks his project raises.

In almost every way, the response affirms the claims Gilliard makes rather than refuting them. First, Gilliard’s article clearly referred to “a semester,” which can only be the first time the chatbot was used, and Goel indicates, without explanation or justification, that he “deliberately elected not to inform the students in advance” about the project during that semester. Yet that deliberate choice is exactly one of Gilliard’s points: what gives technologists the right to think that they can conduct such experiments without student consent in the first place? Goel does not tell us. That subsequent offerings had consent (of a sort, as I discuss next) only reinforces the notion that the first offering should have had consent as well.

There are even deeper concerns, which happen also to be the specific ones Gilliard raises. First, what does “informed consent” mean? The notion of “informed consent” as promulgated by, for example, the Common Rule of the HHS, the best guide we have to the ethics of human experimentation in the US, insists that one can only give consent if one has the option not to give consent. This is not rocket science. Not just the Common Rule, but the 1979 Belmont Report on which the Common Rule is based, itself reflecting on the Nuremberg Trials, defines “informed consent” specifically with reference to the ability of the subject to refuse to participate. This is literally the first paragraph of the Belmont Report’s section on “informed consent”:

Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them. This opportunity is provided when adequate standards for informed consent are satisfied.

If anything, the idea of “informed consent” has only grown richer since then. Perhaps Goel allows students to take a different section of the artificial intelligence class if they do not want to participate in the Jill Watson experiment; such a choice would be required for student consent to count as “consent.” His letter reads as if he does not realize that “informed consent” without choice is not consent at all. If so, this is not an isolated problem. Some have argued, rightly in my opinion, that failure to understand the meaning of “consent” is a structural problem in the world of digital technology, one that ties the behavior of software, platforms and hardware to the sexism and misogyny of techbro culture. Even the Association for Computing Machinery (ACM, the leading professional organization for computer scientists) maintains a “Code of Ethics and Professional Conduct” that speaks directly of “respecting the privacy of others” in a way that is hard to reconcile with the Jill Watson experiment and with much else in the development of digital technology.

Further, Goel indicates that the point of the Jill Watson project is “an experiment in constructing human-level AI.” He does not make clear whether students are told that this is part of the point of the experiment. Nor does he make clear that the pursuit of what he calls “human-level AI” (a phrase that many philosophers, cognitive scientists and other researchers, including myself, consider a misapplication of ordinary language) raises significant ethical questions of its own, the nature and extent of which certainly go far beyond what students in a course about AI can possibly have covered before the course begins, if they are covered at all. Do the students truly understand the ethical concerns raised by so-called AIs that can effectively mimic the responses of human teachers? Is their informed consent rich with discussion of the ethical considerations this raises? Do they understand the labor implications of developing chatbots that can replace human teachers at universities? If so, Goel does not indicate it.

The sentence about the Watson “experiment” appears to contradict another sentence in the same paragraph, where Goel writes that data generated by the Jill Watson experiment is used “exclusively for partially automating the task of answering questions in subsequent offerings of the course” (emphasis added). Perhaps the meaning of “exclusively” here is that the literal data collected to train Jill Watson is segregated into that project. But the implication of the “experiment” sentence is that whether or not that is the case, the project itself generates knowledge that Goel and his collaborators are using in their other research and even commercial endeavors. This is exactly the concern that is front and center in Gilliard’s article. When the students are fully informed about the ethical and privacy considerations raised by the technology in the course they are about to take, are they provided with a full accounting of Goel’s academic and commercial projects, with detailed explanations of how results developed in the Jill Watson project may or may not play into them? Once again, if so, Goel appears not to think such concerns needed to be mentioned in his letter.

At any rate, Goel certainly makes it sound as if the work done by students in the course helps his own research projects, whether by providing training data for the Jill Watson AI model or by providing research feedback for future models. So do the press releases Georgia Tech issues about the project. It seems quite possible that this research could lead directly or indirectly to commercial applications. It may already be leading in that direction.

Gilliard concludes his article by writing that "When we draft students into education technologies and enlist their labor without their consent or even their ability to choose, we enact a pedagogy of extraction and exploitation." In his letter Goel entirely overlooks the question of labor, and claims that consent merely to having a bot as a virtual TA, with no apparent alternative and without clear discussion of the various ways students' participation in this project might inform future research and commercial endeavors, mitigates the exploitation Gilliard writes about. This exchange only demonstrates how much work ethicists have left to do in the field of education technology (and digital technology in general), and how uninterested technologists are in listening to what ethicists have to say.

UPDATE, May 3, 5:30pm: soon after posting this story, I was directed by my friends Evan Selinger and Audrey Watters to two papers by Goel (paper one; paper two) which indicate that he had Institutional Review Board approval for the Jill Watson project (IRBs are the bodies that implement the Common Rule), and in which he writes at greater length than he does in the Chronicle letter about some ethical implications of the project. I will update this post soon with some reflections on these papers.

Posted in cyberlibertarianism, privacy, rhetoric of computation, what are computers for | Tagged , , , , , , , , , , , , , | Leave a comment

Please Consider Supporting Our Legal Challenge to Cambridge Analytica’s Role in the Trump Election

Since December of last year, I have been part of a small group of concerned citizens engaged in a series of actions against Cambridge Analytica (CA) and its parent corporation, SCL Group.

I am writing this post in the hopes of gathering support (that is, funds) we need to continue this action. You can support us at our Crowd Justice page, which has more information.

Here I’ve tried to lay out some of the background behind our efforts.

Crowd Justice campaign header

Our actions are driven by concern about claims made by CA, and by those whose work it relies on, regarding the level of behavioral manipulation of which they are capable, and specifically about whether the techniques CA has developed have been used to manipulate voting behavior, especially in the 2016 US Presidential election and the UK Brexit referendum (although as US citizens our inquiry is limited to the US election). Carole Cadwalladr of The Guardian is the journalist who has covered the topic most extensively: see, for example, her piece on this campaign, "British Courts May Unlock Secrets of How Trump Campaign Profiled US Voters," her earlier pieces on the Brexit/Leave.EU campaign, "Follow the Data: Does a Legal Document Link Brexit Campaigns to US Billionaire?" and "The Great British Brexit Robbery: How Our Democracy Was Hijacked," and her pieces on the US Presidential election, including "Cambridge Analytica Affair Raises Questions Vital to Our Democracy" and "When Nigel Farage Met Julian Assange."

The UK has more extensive data protection laws than does the US. Its laws and regulations are administered by the Information Commissioner's Office (the ICO, here not standing for Initial Coin Offering). Because CA/SCL, a British company (SCL Group) with an American subsidiary (CA), appears to have directly collected and used data about US citizens in its work for the Cruz and Trump campaigns in the 2016 election, the UK Data Protection Act (DPA) is triggered. The DPA has many provisions allowing individuals to discover exactly what data is being collected on them and how it is being used.

In late 2016, David Carroll of Parsons School of Design, I, and a few others, working with data researcher Paul-Olivier Dehaye and the project he runs, submitted formal requests to CA/SCL under the DPA, which allows us to see all the data companies hold about us.

During the spring, UK attorney (aka “solicitor”) Ravi Naik of Irvine Thanvi Natas Solicitors took an interest in our efforts and helped to coordinate our requests to CA/SCL. Ravi himself has written an article in The Guardian about the campaign.

It took longer than the 40 days the law allows, but eventually (in March) CA/SCL did return files to David and me. My file consisted of a single Excel spreadsheet with three tabs. Two of these contained relatively innocuous identifying information (date of birth, address, records of which elections I've voted in) that is available to marketers via public election records. The third, though, is shocking in its implications:

CA SCL data

What is startling about this data, in part, is that it appears to be specifically about how manipulable I might be with regard to central hot-button issues in the political public sphere—not necessarily what my opinions are, but whether I would be susceptible to manipulation about issues like "Gun Rights" and "Traditional Social and Moral Values." In general this psychographic profile strikes me as plausible, though it is not necessarily how I consciously think I'd rank all of these issues for myself; but then again, the point of psychographic data is that it knows us better than we know ourselves.

We don’t think this information can possibly be complete, since it gives very little sense of what I think about all of these issues, which a marketer like CA/SCL would surely need in order to take targeted actions based on the data—for example, even if environmental issues are at level 10 importance to me, this data does not indicate whether that means I consider the problem to be climate change, or the claim that climate change is a fraud.

CA/SCL provided no information whatsoever on where and how this information was gathered, whether it represents a purchase of existing information or analytics performed on a body of data CA/SCL also has but has not disclosed, and so on.

In 2017, the ICO issued a document called “Guidance for Political Campaigning.” Among the many provisions of this guidance that CA/SCL would appear not to have followed scrupulously even based on this limited amount of data is this:

79. The big data revolution has made available new and powerful technical means to analyse extremely large and varied datasets. These can include traditional datasets such as the electoral register but also information which people have made publicly accessible on Facebook, Twitter and other social media. Research and profiling carried out by and on behalf of political parties can now benefit from these advanced analytical tools. The outputs may be used to understand general trends in the electorate, or to find and influence potential voters. (16)

80. Whatever the purpose of the processing, it is subject to the DPA if it involves data from which living individuals can be identified. This brings with it duties for the party commissioning the analytics and rights for the individuals to whom the data relates. It includes the duty to tell people how their data is being used. While people might expect that the electoral register is used for election campaigns they may well not be aware of how other data about them can be used and combined in complex analytics. If a political organisation is collecting data directly from people eg via a website or obtains it from another source, it has to tell them what it is going to do with the data. In the case of data obtained from another source, the organisation may make the information available in other ways, eg on its website, if contacting individuals directly would involve disproportionate effort. It cannot simply choose to say nothing, and the possible complexity of the analytics is not an excuse for ignoring this requirement. Our code of practice on privacy notices, transparency and control provides advice on giving people this information. (16-17)


81. Even where information about individuals is apparently publicly accessible, this does not automatically mean that it can be re-used for another purpose. If a political organisation collects and processes this data, then it is a data controller for that data, and has to comply with the requirements of the DPA in relation to it. (17)

In its responses to us, other than the data mentioned above, CA/SCL has engaged in a pattern of bullying and denial that suggests to me, at least, that it has much more to disclose and will do everything in its power not to.

In order to take the next step in our legal challenge to CA/SCL, we need to raise £25,000. That is a lot of money. None of the money is going to us; we are raising the money using the established legal crowdfunding site Crowd Justice. The money is needed for two reasons: first, because in the UK, the loser of a lawsuit can be forced to pay the winner’s legal fees (so-called “adverse fees”). If we sue CA/SCL and lose, we could be liable for the fees CA/SCL has paid to its attorneys. With the Mercers backing CA/SCL, we are already certain that they will be using some of the highest-priced corporate attorneys available. Second, the money is needed to pay our own legal fees and to partly reimburse the solicitors working on the case for their time, even though most of their time is being donated.

We believe that continuing to force this issue could ultimately cause CA/SCL to release all of its data on the 2016 Presidential election, and possibly even the Brexit campaign. We also believe it may have extremely positive effects in preventing CA/SCL and other organizations from engaging in similar actions in the future.

We have at this point raised about £20,000 of the initial £25,000 we need to raise to start actions beyond making our subject data requests under the DPA. If you are at all inclined to help us in this effort, please visit our Crowd Justice page.

Posted in "social media", materality of computation, privacy, revolution, surveillance, we are building big brother, what are computers for | Tagged , , , , , , , , , , , | Leave a comment

Article: “The Militarization of Language: Cryptographic Politics and the War of All against All”

I have an article in the latest boundary 2 titled “The Militarization of Language: Cryptographic Politics and the War of All against All.” It is my most sustained attempt to locate and critique a political philosophy in the discourse of encryption advocates, a project I’ve also addressed in pieces like “Code Is Not Speech” and “Tor, Technocracy, Democracy.” It’s a piece I haven’t posted drafts of before, in part because it includes a relatively strong critique of some of Jacob Appelbaum’s talks, especially his infamous 30c3 talk, “To Protect and Infect: The Militarization of the Internet (Part Two; in three acts).” The title of Appelbaum’s talk was part of what motivated me to write this piece, which appears as part of, and was commissioned for, a boundary 2 dossier called “The Militarization of Knowledge.”

Here’s the formal abstract:

The question of the militarization of language emerges from the politics surrounding cryptography, or the use of encryption in contemporary networked digital technology, and the intersection of encryption with the politics of language. Ultimately, cryptographic politics aims to embody at a foundational level a theory of language that some recent philosophers, including Charles Taylor and Philip Pettit, locate partly in the writings of Thomas Hobbes. As in Hobbes’s political theory, this theory of language is closely tied to a conception of political sovereignty as necessarily absolute, with the only available alternative to absolute sovereignty being a state of nature (or more accurately what Pettit 2008 calls a “second state of nature,” one in which language plays a key role). In that state of nature, the only possible political relation is what Hobbes calls a war of “all against all.” While Hobbes intended that image as a justification for the quasi-absolute power of the political sovereign, those who most vigorously pursue cryptographic politics appear bent on realizing it as a welcome sociopolitical goal. To reject that vision, we need to adopt a different picture of language and of sovereignty itself, a less individualistic picture that incorporates a more robust sense of shared and community responsibility and that entails serious questions about the political consequences of the cryptographic program.

boundary 2 cover

If you’d like a copy and do have institutional access, please use this official link to the article at boundary 2 at Duke University Press.

If you don’t have institutional access and would like a copy, please email me (dgolumbia-at-gmail-dot-com) or access a copy at

Posted in "hacking", cyberlibertarianism, privacy, rhetoric of computation, surveillance | Tagged , , , , , , , , , , , | Leave a comment

The Destructiveness of the Digital Humanities (‘Traditional’ Part II)

In what purport to be responses or rebuttals to critiques I and others have offered of Digital Humanities (DH), my argument is routinely misrepresented in a fundamental way. I am almost always said to oppose the use of digital technology in the humanities. This happens despite the fact that I and those I have worked with use digital technologies in hundreds of ways in our research and that our critiques—typically including exactly the ones DHers are responding to—make this explicit.

It is undeniable that DH is in some sense organized around the use of some digital tools (but not others, and this gap is itself a very important part of how, on my analysis, the DH formation operates, a matter I have written about at some length). What I and the scholars I work with, as opposed to some conservative pundits, worry about is not the use of digital technology in the humanities. Speaking only for myself, what I oppose most strongly is the attitude toward the rest of the humanities I find widespread in DH circles: the view that the rest of the humanities (and particularly literary studies) are benighted, old-fashioned, out of date, and/or “traditional.”

This is what I mean when I describe DH as an ideological formation, more than it is a method or set of methods. The destructiveness in the ideological formation is what I oppose, not the use of tools per se, or even the actual work done by at least some in DH. The ideological formation, I have argued, is what distinguishes DH from the fields that preceded it (that is not to say that computational ideologies were not present in Humanities Computing—they certainly were—but they had failed to find the institutional purchase and power DH was seeking, which is why Humanities Computing needed to be transmuted into DH). Further, I have argued repeatedly that this destructiveness is an inherent feature of DH, perhaps even its most important constitutive feature: that is to say that the most common shared feature in DH work is its “incidental” destructiveness toward whatever it declares not to be part of itself.

There are deep and interesting ideological reasons why the apparent championing of digital tools should overlap with this overtly destructive attitude toward humanistic research, some of which I’ve just touched on in a recent post, “The Destructiveness of the Digital.” It has something to do with the destructiveness toward whatever is considered “non-digital” among digital partisans, which is part of why I have called DH the “cyberlibertarian humanities” (a claim that is just as routinely misrepresented by DH responders as is the rest of my critique).

I want to leave that aside, in favor of presenting just one unexpectedly clear and symptomatic public example of the destructiveness embedded in DH. In her interview in the LARB “Digital in the Humanities” series, senior DH scholar Laura Mandell approvingly quotes another senior DH scholar, Julia Flanders, saying: “We don’t want to save the humanities as they are traditionally constituted.”

We don’t want to save the humanities as they are traditionally constituted

That, to me, summarizes DH—or at least the part of DH that concerns me and others very greatly—in one compact sentence.

Now I’m sure, as soon as I point it out, there will be a lot of backtracking and spin to claim that this sentence means something other than what it clearly seems to. That will happen even though practice shows that the plain reading is correct, and that DHers frequently speak in disparaging and dismissive ways about the rest of literary studies. Yet when pressed, and this is part of why I see DH as resembling so many other rightist formations, rather than simply owning and admitting its disparagement of other approaches, DH starts to blame those who point it out and portrays itself as the victim.

In context, I don’t think there is any other reasonable way to read the sentence. What “the humanities as they are traditionally constituted” means here is “the humanities other than DH.”

(It seems worth noting that the characteristic double-mindedness in DH about what constitutes DH itself makes this even more problematic and more transparently a kind of power politics: the only kinds of humanities research that should be saved are not “the kind that uses digital tools,” since virtually all humanities research these days uses digital tools in many different ways, but instead, “whatever scholars who are identified with DH say is part of DH,” a fact which in certain moods even some DHers themselves acknowledge.)

Further, that quotation has been out there now for over a year, and nobody has, as far as I know, bothered to comment or push back on it, despite plenty of opportunities to do so; that fact in itself shows the insensitivity in DH to its own institutional politics.

For reference, here is the entire exchange in which Mandell’s statement appears:

Another concern that has come up deals with public intellectualism, which many scholars and journalists alike have described as being in decline (for example, Nicholas Kristof’s New York Times essay last year). What role, if any, do you think digital work plays? Could the digital humanities (or the digital in the humanities) be a much-needed bridge between the academy and the public, or is this perhaps expecting too much of a discipline?

I have a story to tell about this. I was at the digital humanities conference at Stanford one year and there was a luncheon at which Alan Liu spoke. His talk was a plea to have the digital humanities help save the humanities by broadcasting humanities work — in other words, making it public. It was a deeply moving talk. But to her credit, Julia Flanders stood up and said something along the lines of, “We don’t want to save the humanities as they are traditionally constituted.” And she is right. There are institutional problems with the humanities that need to be confronted and those same humanities have participated in criticizing the digital humanities. Digital humanists would be shooting themselves in the foot in trying to help the very humanities discipline that discredits us. In many ways Liu wasn’t addressing the correct audience, because he was speaking to those who critique DH and asking that they take that critical drive that is designed to make the world a better place and put it into forging a link with the public — making work publicly available. Habermas has said that the project of Enlightenment is unfinished until we take specialist discourses and bring them back to the public. This has traditionally been seen as a lesser thing to do in the humanities. For Habermas, it is seen as the finishing of an intellectual trajectory. This is a trajectory that we have not yet completed and it is something, I think, the digital humanities can offer.

The archness and self-contradictory nature of this passage are emblematic of a phenomenon we see more and more of in DH circles. Literally within the same paragraph where she declares that the rest of the humanities should go away, in a remarkable instance of what I like to call right reaction and what Michael Kimmel calls aggrieved entitlement, Mandell says that it is the rest of the humanities that are engaged in “discrediting” DH. One has to ask: what is the proper way for “non-DH” humanists to react to a very successful effort—in many places, literally the only thing administrators know about what is happening in English departments these days—that says the humanities shouldn’t be saved? To simply stop practicing our discipline? And given that your project is predicated on ending the rest of the humanities, how could any response that disagrees with it not also be (wrongly) construed as “discrediting” your practice?

It’s also worth noting that in Mandell’s story, Alan Liu is the one making the request to support the humanities, and that Liu is one of the few English professors identified with and accepted by the DH community who has refused to give ground on the importance of non-DH literary studies. In other words, his request could and should have been met with sympathy and respect, if DH really did not contain a kernel of destructive animus toward the rest of the humanities. And as this microcosmic scene suggests, Liu’s efforts to get the DH community to support non-DH literary studies have seen very little uptake.

In fact, if we step back from the scene just a little, it is bizarre to imagine: one digitally-respected senior scholar says “please don’t kill the rest of the humanities,” and a few others say, “no, we want to kill them.” Of all the people in the world who might lead the charge to kill the rest of the humanities, how did we get to the place where it is people who are nominally literature scholars doing so? The answer to that is DH: not the use of tools and building of archives—more power to them—but the destructive, “digitally”-grounded ideology that DH is built from and that it revels in. The one that says all other forms of knowledge have suddenly become outmoded and “traditional,” and this one new way is now the exclusive way forward.

Late last year I wrote a post where I discussed the way that Immanuel Wallerstein analyzes the concept of “traditional” and its place in the global system of capital. This piece builds on that one, and I recommend reading the whole thing, but I’ll just quote one paragraph from Wallerstein that is especially germane to this point:

Historical capitalism has been, we know, Promethean in its aspirations. Although scientific and technological change has been a constant of human historical activity, it is only with historical capitalism that Prometheus, always there, has been ‘unbound,’ in David Landes’s phrase. The basic collective image we now have of this scientific culture of historical capitalism is that it was propounded by noble knights against the staunch resistance of the forces of ‘traditional,’ non-scientific culture. In the seventeenth century, it was Galileo against the Church; in the twentieth, the ‘modernizer’ against the mullah. At all points, it was said to have been ‘rationality’ versus ‘superstition,’ and ‘freedom’ versus ‘intellectual oppression.’ This was presumed to be parallel to (even identical with) the revolt in the arena of the political economy of the bourgeois entrepreneur against the aristocratic landlord. (Immanuel Wallerstein, Historical Capitalism, 75)

I doubt it will surprise anyone familiar with my way of thinking that I wrote this with an eye toward precisely the way that the idea of “traditional” is used in DH: DH has always cast the rest of the humanities as “traditional,” in just the way that Wallerstein notes—and this despite the incredible variety of approaches (including the very approaches that ground DH) that “traditional” seems to indicate.

This alignment of the DH project against what it falsely projects as “traditional” academic practice is part of why I see it as closely aligned with neoliberalism, in a deep and fundamental way that can’t be ameliorated by ad-hoc patches applied here and there. Until DH confronts the way that it has from its inception been deeply imbricated in a cultural conversation according to which technology points toward the future and everything (supposedly) “non-technological” points to the past, it will be unable to come to terms with itself as the ideological formation I and many others see it as.

The fact that this can occur within a disciplinary milieu where the identification of ideological formations had until very recently been a major part of the stock in trade is not just ironic, but symptomatic of DH as a politics. When you think about it, one way of looking at the social scene is that DH scholars, who have in general eschewed and even dismissed the project of interpretation, especially politicized interpretation, in favor of formalism and textual editing and “building,” turn to their colleagues who still do specialize in interpreting ideologies and say that in this one instance, our own profession, we don’t know how to use the methods in which we specialize. Is that credible? Is it credible that the critics of DH, who typically are people who specialize in sniffing out ideologies, don’t understand how to do ideology critique in our own field, but DHers, who in general avoid ideology critique like the plague, can somehow do it better than we do? Who is attacking whose professionalism here? And what could be more destructive to literary studies than to say that literary scholars do not understand how to do their own work?

To end on a positive note: despite being frequently accused of wanting to “end” DH, whatever in the world that would mean, that is only true in a very limited sense: I want to “end” the practice within DH of calling the rest of the humanities “traditional” and “anti-technology” and “out of touch” and “the past.” I want to “end” the rejection of theory and politics and the weird idea that “close reading” is some kind of fetish, within the context of literary studies. I want to end the view that “building” is “doing something,” whereas “writing” is not, and even the view that “building” and “writing” are different kinds of things. I want to end the view that DH is “public” and the rest of literary studies is not. I want DHers to embrace the fact that they are part of the humanities. This might end “DH” per se as an ideological formation, but it would not end the careers of scholars who want to use digital tools in the service of humanities research, of whom I am very much one. One might think that would be asking virtually nothing at all—embrace and support the disciplines of which you are a part—but as the twinned quotation from Flanders and Mandell shows, especially given that it is offered specifically as a rebuke to exactly that request coming from “within” DH, it turns out to be asking a great deal.

Posted in cyberlibertarianism, definitions that matter, digital humanities, rhetoric of computation, theory | Tagged , , , , , , , , | Comments closed

The Destructiveness of the Digital

I’ve argued for a long time, in different ways, that despite its overt claims to creativity, “building,” “democratization,” and so on, digital culture is at least partly characterized by profoundly destructive impulses. While digital promoters love to focus on the great things that will come from some new version of something over its existing version, I can’t help focusing on what their promotion says—implicitly or explicitly—about the thing they claim to be replacing, typically at profit to themselves, whether in terms of political or personal power (broadly speaking) or financial gain.

Note that it is in no way a contradiction for both destructiveness and creativity to exist at the same time, something I repeatedly try to explain without much success. In fact it would be odd for only one or the other to exist, and one does not negate the other, at least not as a rule. The fact that there is a lot of creativity in digital culture does not directly address the question of whether there is also destructiveness. Further, the continual response that creativity does negate the destructiveness shifts the terms of discussion so that we can’t really deal as directly as we should with the destructiveness.

I’m not going to go into these arguments in detail right now, but just want to present a particularly clear example of digital destructiveness I happened to hear recently. On April 11 on a regular segment called “Game On” of a BBC Radio 4 program called “Let’s Talk About Tech” (the episode is available for listening and download through early May), host Danny Wallace interviews Hilmar Veigar Pétursson, CEO of CCP, the publisher of the MMORPG EVE Online (“a space-based, persistent world massively multiplayer online role-playing game”), on the occasion of that game’s winning a BAFTA award in the “evolving games” category.

EVE Online

Screen cap from the 2013 “largest space battle EVE Online has ever seen,” from the subreddit /r/eve via an article by Ian Birnbaum at PC Gamer

In the final exchange in the interview (starting around minute 23), Wallace asks Pétursson to reflect more generally about the nature of games and gaming. I’ve transcribed the whole exchange below.

Wallace sets the stage by invoking the defensive aggrieved entitlement of the gamer, which is symptomatically portrayed in the voice of the scolding critic who essentially declares video games not to be art (with no interrogation of what “art” means exactly, beyond “not frivolous”), but Pétursson’s response goes well beyond what Wallace asks. Asked to articulate the value of EVE Online, Pétursson turns to attack (literally) all other forms of media, and in particular to disparage the entire project of reading. The claim on the surface has to be that all the kinds of philosophical and narrative engagement one experiences in books can be better experienced in video games than by reading the books and other texts (and experiencing the other media) out of which all world cultures emerge.

So we move from a largely fictional dismissal of the value of one medium—games—to an explicit and disparaging rejection of all other forms of media. Further, this disparagement rests on an unsurprising and completely unsophisticated account of what media consumption is really like—a wildly undertheorized presumption that looking at screens and using a pointing device constitutes “interaction” in a way that reading or listening to the radio or even watching screens without a pointing device at the ready does not.

That whole frame is inaccurate: it suggests something massively untrue about the experience of reading (and even more of listening and talking) that no careful study of the subject would support, and also a conception of what happens when we play games that is deeply interested. After all, anyone who has ever participated in a raid in World of Warcraft knows that the feeling of suture and of interactivity that players have is, at best, profoundly weird: most of what is going on in the game and on the screen is absolutely not under the player’s “control,” and what is under “control” is a highly limited set of device clicks and gestures that certainly give or go along with the feeling of being “in the game,” but are in fact very different from actually playing a game with one’s body (here thinking of a game like basketball or baseball). Further, that fictional relationship—the immersive sense that one is in the game and participating with the other elements of it—is philosophically much harder to distinguish than one might expect from the suturing relationship the viewer or reader has to text and media of various sorts. The questions of why and how I identify with my avatar in a video game as over against why and how I identify with the main character or analytical perspective offered in a book, or a movie, and so on, are fascinating ones without easy answers.

Of course, digital utopians long ago decided that digital media are “interactive” in a way other media aren’t, a notion that is itself built on a serious disparagement of anything non-digital (or anything digital utopians don’t like).

Pétursson tells us that the testimony we have from thousands of years and literally hundreds of millions of people regarding narrative and visual and linguistic media can be dismissed, while the “thousands” of people who play EVE Online provide evidence that this new medium proves the fruitlessness of all other forms of media. In other words, the testimony of EVE Online players is valid, but that of non-players is not valid. It may seem subtle, but this privileging of the testimony of those one values and dismissal of those one doesn’t is one critical root of the development of hate. (Some readers will know that Pétursson’s complaint echoes a famously vicious and totally inaccurate assessment of Tolstoy’s novels [and a fortiori all novels] by digital guru and venture-capital consultant Clay Shirky.)

One of my main concerns with the destructiveness of digital culture has been precisely its disparagement and dismissal of all forms of knowledge that the digerati deem “traditional” or “out of date” or “fruitless,” typically with very little exposure to those forms of knowledge. I am especially concerned with what this perspective teaches with regard to politics. Politics is very complicated terrain for all of us, even those of us who study it for a living. Understanding how various political forces operate and take advantage of popular energy and opinion is among the most urgent political tasks of our time. It is beyond doubt that the rise of authoritarian populism in our time is fueled in part by a studied agnotology: the promotion of ignorance about politics that makes people particularly vulnerable to manipulation.

So what politics does EVE Online teach, “fruitfully” as against the “fruitless” pursuit of political knowledge from reading and other forms of media? As a non-player of the game I’m in no position to judge, but it’s notable that the game is known for a fairly destructive take on governance. Here’s a bit from Wikipedia’s section on “griefing” in EVE Online:

Due to the game’s focus on freedom, consequence, and autonomy, many behaviors that are considered griefing in most MMOs are allowed in EVE. This includes stealing from other players, extortion, and causing other players to be killed by large groups of NPCs.

I don’t know if there’s any connection between Pétursson’s destructive attitude toward non-game forms of media and this overt hostility toward the ethical principles of social behavior that many of us adhere to. I don’t know whether players of EVE Online share Pétursson’s hostility. But it’s hard not to wonder.

And of course that isn’t even really the point. The point is that this hostility to anything that is currently identified as not being part of the digital is visible all over the place in digital culture. It is far in excess of what celebration of cool new things requires. And it is completely unmotivated. Large-category new forms of media do not eliminate or obviate older ones: movies didn’t eliminate books, television didn’t actually eliminate radio, and so on. You don’t have to hate books and movies and TV to enjoy games. You don’t have to hate to be part of the “new.” Unfortunately, too many people apparently think otherwise.

Transcribed from the April 11 “Game On” segment of BBC Radio 4’s “Let’s Talk About Tech”:

Question (Danny Wallace, BBC)

The old brain, the old parts of the media, for instance, and social commentators, and people who are cultural commentators, will say all video games, they’re just video games. They’re just for kids, or miscreants living in their parents’ basements. That is.. that’s firmly disappearing now, that point of view, isn’t it. You’ve lasted long enough to outlive the people who said, why on earth are you making, spending all this time and all this effort making something as frivolous as video games.

Answer (Hilmar Veigar Pétursson, CEO of CCP, publisher of EVE Online)

Yeah, I mean, in some aspects, we’re making computer games. And many aspects of EVE are like that. But there are also aspects of EVE which is nothing like that, which are so fundamentally unique, you can’t really… you would have to scramble for analogies. It really is a virtual life, where people live out. They do work, they trade, they build social structures, they make friends, they succeed, they fail, they learn, they have lessons in leadership, trust. It’s an extraordinarily beneficial activity, I would argue. And that’s not just my own point. I have thousands, tens of thousands of people that just fundamentally agree with me. So it’s an element of truth, once you get enough people to agree with it. So I’ve never really looked at it like that. The fact that we’re classified as a computer game, I mean, doesn’t really bother me. It helps people understand what it is. It’s not like I have a very good classification for what we really are: something virtual worlds, virtual life, social economy, I mean there are many analogies you can bring to it. But yeah, we’ve never really thought of it as just being computer games.

I would then argue, I mean there’s a lot of other things in human endeavors which are frankly uninteresting. If you look at most media, it’s broadcast to consumer and there’s no participation. Why is reading a book considered a better activity than playing a game? At least in a game you’re participating. In a book you’re just wallowing in some other’s imagination. How is that a fruitful activity? It’s very equivalent to watching TV. I find reading books… I generally frown upon it. I would rather play a game. I learn more from computer games than books.

Posted in cyberlibertarianism, games, rhetoric of computation | 2 Responses

Race, Technology, and the Word “Traditional” in the World-System

“Traditional” is one of the more interesting words to keep track of in contemporary discourse, particularly when it comes up in discussions of technology.

For the most part, it is used as a slur.

It is a word used to disparage an object or practice, to compare it to whatever one wants to posit as “new” or “innovative” or even “worthwhile” or “useful.”

It’s an implicit slur: after all, in a variety of contexts, “traditions” and “traditional” are words that point to good things, things we (apparently) value, things we don’t necessarily want to change. Though these days, more and more, especially in discussions of technology and the economy, it’s the slur meaning that predominates.

I’ve always noticed this and meant to write a brief note about it, since it seems to me the question of what is “traditional” and what isn’t is highly relative and mobile. Before I could get around to that, though, I ran across a surprisingly pointed discussion of this term in an unexpected source: the short 1983 book Historical Capitalism (London: Verso), by the Marxist world-systems theorist and sociologist Immanuel Wallerstein.

Wallerstein’s work is usually, rightly, seen as an effort to understand how capitalism works across the globe, with a particular focus on international flows of trade and the ways classes can be played off against each other among as well as within countries. His best-known work is the multivolume The Modern World-System. Wikipedia provides the following fairly accurate if quite general summary of some key parts of his work:

A lasting division of the world into core, semi-periphery, and periphery is an inherent feature of world-system theory. Other theories, partially drawn on by Wallerstein, leave out the semi-periphery and do not allow for a grayscale of development. Areas which have so far remained outside the reach of the world-system enter it at the stage of “periphery”. There is a fundamental and institutionally stabilized “division of labor” between core and periphery: while the core has a high level of technological development and manufactures complex products, the role of the periphery is to supply raw materials, agricultural products, and cheap labor for the expanding agents of the core. Economic exchange between core and periphery takes place on unequal terms: the periphery is forced to sell its products at low prices, but has to buy the core’s products at comparatively high prices. Once established, this unequal state tends to stabilize itself due to inherent, quasi-deterministic constraints. The statuses of core and periphery are not exclusive and fixed geographically, but are relative to each other. A zone defined as “semi-periphery” acts as a periphery to the core and as a core to the periphery. At the end of the 20th century, this zone would comprise Eastern Europe, China, Brazil, and Mexico. It is important to note that core and peripheral zones can co-exist in the same location.

Yet what is sometimes less understood is that Wallerstein is a theorist of race and its critical role in the establishment of capitalism, that much of his early work focused on Africa, and that he considers himself profoundly influenced by the anticolonial writer Frantz Fanon.

Wallerstein describes Historical Capitalism as an attempt “to see capitalism as a historical system, over the whole of its history and in concrete unique reality” (7). The book is made up of revisions of three lectures Wallerstein gave in 1982 along with a new conclusion. The first chapter, Wallerstein says, is largely devoted to economics; the second to politics; and the third, which I’ll be discussing here, to culture. Its title is “Truth as Opiate: Rationality and Rationalization.” Somewhat surprisingly, the word “traditional” occupies a central place in Wallerstein’s analysis.

Wallerstein, Historical Capitalism (Verso, 1983)

These, for example, are the third chapter’s first two paragraphs:

Historical capitalism has been, we know, Promethean in its aspirations. Although scientific and technological change has been a constant of human historical activity, it is only with historical capitalism that Prometheus, always there, has been ‘unbound,’ in David Landes’s phrase. The basic collective image we now have of this scientific culture of historical capitalism is that it was propounded by noble knights against the staunch resistance of the forces of ‘traditional,’ non-scientific culture. In the seventeenth century, it was Galileo against the Church; in the twentieth, the ‘modernizer’ against the mullah. At all points, it was said to have been ‘rationality’ versus ‘superstition,’ and ‘freedom’ versus ‘intellectual oppression.’ This was presumed to be parallel to (even identical with) the revolt in the arena of the political economy of the bourgeois entrepreneur against the aristocratic landlord.

This basic image of a worldwide cultural struggle has had a hidden premise, namely one about temporality. ‘Modernity’ was assumed to be temporally new, whereas ‘tradition’ was temporally old and prior to modernity; indeed, in some strong versions of the imagery, tradition was ahistorical and therefore virtually eternal. This premise was historically false and therefore fundamentally misleading. The multiple cultures, the multiple ‘traditions’ that have flourished within the time-space boundaries of historical capitalism, have been no more primordial than the multiple institutional frameworks. They are largely the creation of the modern world, part of its ideological scaffolding. Links of the various ‘traditions’ to groups and ideologies that predate historical capitalism have existed, of course, in the sense that they have often been constructed using some historical and intellectual materials already existent. Furthermore, the assertion of such transhistorical links has played an important role in the cohesiveness of groups in their political-economic struggles within historical capitalism. But, if we wish to understand the cultural forms these struggles take, we cannot afford to take ‘traditions’ at their face value, and in particular we cannot afford to assume that ‘traditions’ are in fact traditional. (75-6)

So for Wallerstein, the very act of naming a practice “traditional” is an important part of the cultural work of global capitalism, tied directly to the historical creation of what we today call “race.” The very allegation that some practices are “traditional” “has formed one of the most significant pillars of historical capitalism, institutional racism” (78); “racism was the mode by which various segments of the work-force within the same economic structure were constrained to relate to each other,” he goes on, “racism was the ideological justification for the hierarchization of the work-force and its unequal distribution of reward.”

Wallerstein uses the past tense in these formulations because he is discussing the historical formation of racial discrimination, especially when racial categorizations were explicit and legal; he is not suggesting that racism does not still exist. But because “in the past fifty to one hundred years, it has been under sharp attack” (80), a complementary ideology has moved to center stage, namely what Wallerstein calls “universalism.” Belief in universalism “has been the keystone of the ideological arch of historical capitalism” (81).

By universalism Wallerstein in part means “pressures at the level of culture” to create and enforce norms around a single model of culture and cultural progress, via a “complex of processes we sometimes label ‘westernization,’ or even more arrogantly ‘modernization’” (82) and which includes phenomena like “Christian proselytization; the imposition of European language; instruction in specific technologies and mores; changes in legal codes.”

The process of modernization, Wallerstein writes, “required the creation of a world bourgeois cultural framework that could be grafted onto ‘national’ variations. This was particularly important in terms of science and technology, but also in the realm of political ideas and the social sciences” (83). Thus the

concept of a neutral ‘universal’ culture to which the cadres of the world division of labor would be ‘assimilated’ (the passive voice being important here) hence came to serve as one of the pillars of the world-system as it historically evolved. The exaltation of progress, and later of ‘modernization,’ summarized this set of ideas, which served less as true norms of social action than as status-symbols of obeisance and of participation in the world’s upper strata. The break from the supposedly culturally-narrow religious bases of knowledge in favor of supposedly trans-cultural scientific bases of knowledge served as the self-justification of a particular pernicious form of cultural imperialism.

The universalism of scientific culture “lent itself to the concept known today as ‘meritocracy’” (84), a “framework within which individual mobility was possible without threatening hierarchical work-force allocation. On the contrary, meritocracy reinforced hierarchy” (85). “The great emphasis on the rationality of scientific activity,” he writes, “was the mask of the irrationality of endless accumulation.”

While “universalism was offered to the world as a gift of the powerful to the weak,” “the gift itself harbored racism, for it gave the recipients two choices: accept the gift, thereby acknowledging that one was slow on the hierarchy of received wisdom; refuse the gift, thereby denying oneself weapons that could reverse the unequal real power situation.”

There is much more to Wallerstein’s compact and dense argument, including many important reflections on the profoundly ambivalent relationship of technological progress and cultural nationalism to socialism, and I recommend the book in its entirety. But I am primarily interested here in the consequences of Wallerstein’s work for understanding the deployment of the concept of “traditional” in contemporary technological discourse. In my opinion, “traditional” is a word, and a concept, that should be avoided in thoughtful work about technology, economics, and “progress,” as it is an almost-entirely ideological label, one that is even more than what I earlier called it, a “slur.” Indeed, it is not merely a label: it is an ideological lever, a tool used to organize the world so as to maximize power for the ones doing the labeling, and to disempower the lives and cultures of those to whom the label is applied, and to make them available for resource exploitation.

Work on this piece benefited greatly from conversations with Tressie McMillan Cottom and Audrey Watters.

Next: “traditional” in vivo

Posted in definitions that matter, digital humanities, rhetoric of computation, theory | 1 Response