Although it didn’t break the major media until last week, around June 2 researchers led by Adam Kramer of Facebook published a study in the Proceedings of the National Academy of Sciences (PNAS) entitled “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” The publication has triggered a flood of complaints and concerns: is Facebook manipulating its users routinely, as it seems to admit in its defense of its practices? Did the researchers—two of whom were at universities (Cornell and the University of California-San Francisco) during the time the actual study was conducted in 2012—get proper approval for the study from the appropriate Institutional Review Board (IRB), required of all public research institutions (and most private institutions, especially if they take Federal dollars for research projects)? Was Cornell actually involved in the relevant part of the research (as opposed to analysis of previously-collected data)? Whether or not IRB approval was required, did Facebook meet reasonable standards for “informed consent”? Do Terms of Service agreements accomplish not just the letter but the spirit of the informed consent guidelines? Could Facebook see emotion changes in its individual users? Did it properly anonymize the data? Can Facebook manipulate our moods? Was the effect it noticed even significant in the way the study claims? Is Facebook manipulating emotions to influence consumer behavior?
While these are all critical questions, most of them seem to me to miss the far more important point, one that has so far been gestured at only by Zeynep Tufekci (“Facebook and Engineering the Public”; Michael Gurstein’s excellent “Facebook Does Mind Control,” which appeared almost simultaneously with this post, makes similar points to mine) and former Obama election campaign data scientist Clay Johnson (whose concerns are more than a little ironic). To see its full extent, we need to turn briefly to Glenn Greenwald and Edward Snowden. I have a lot to say about the Greenwald/Snowden story, which I’ll avoid going into too much for the time being, but for present purposes I want to note that one of the most interesting facets of that story is the question of exactly what each of them thinks the problem is that they are working so diligently to expose: is it a military intelligence agency out of control, an executive branch out of control, a judiciary failing to do its job, Congress not doing oversight the way some constituents would like, the American people refusing to implement the Constitution as Snowden/Greenwald think we should, and so on? Even more pointedly, whomever we designate as the bad actors in this story, why are they doing it? To what end?
For Greenwald, the bad actors are usually found in the NSA and the executive branch (although, as an aside, his reporting seems often to show that all three branches of government are being read into or overseeing the programs as required by law, which definitely raises questions about who the bad guys actually are). More importantly, Greenwald has an analysis of why the bad actors are conducting warrantless, mass surveillance: he calls it “political control.” Brookings Institution Senior Fellow Benjamin Wittes has a close reading of Greenwald’s statements on this topic in No Place to Hide (also see Wittes’s insightful review of Greenwald’s book), where he finds the relevant gloss of “political control” in this quotation from Greenwald:
All of the evidence highlights the implicit bargain that is offered to citizens: pose no challenge and you have nothing to worry about. Mind your own business, and support or at least tolerate what we do, and you’ll be fine. Put differently, you must refrain from provoking the authority that wields surveillance powers if you wish to be deemed free of wrongdoing. This is a deal that invites passivity, obedience, and conformity. The safest course, the way to ensure being “left alone,” is to remain quiet, unthreatening, and compliant.
That is certainly a form of political control, and a disturbing one (though Wittes, I think very wisely, asks: if this is the goal of mass surveillance, why is it so ineffective with regard to Greenwald himself and the other actors in the Snowden releases? Further, how was suppression-by-intimidation supposed to work when the programs were entirely secret, and exposed only by the efforts of Greenwald and Snowden?). But it’s not the only form of political control, and I’m not at all sure it’s the most salient or most worrying of the kinds of political control enabled by ubiquitous, networked, archived communication itself: that is to say the functionality, not the technology, of social media itself.
The reason I find it ironic that Clay Johnson should worry that Mark Zuckerberg might be able to “swing an election by promoting Upworthy posts 2 weeks beforehand” is that this is precisely, at a not very extreme level of abstraction, what political data scientists do in campaigns. In fact it’s not all that abstract: “The [Obama 2012] campaign found that roughly 1 in 5 people contacted by a Facebook pal acted on the request, in large part because the message came from someone they knew,” according to a Time magazine story, for example. In other words, the campaign itself did research on Facebook and how its users could be politically manipulated—swinging elections by measuring how much potential voters like specific movie stars, for example (in the case of Obama 2012, it turned out to be George Clooney and Sarah Jessica Parker). Johnson’s own company, Blue State Digital, developed a tool the Obama campaign used to significant advantage—“Quick Donate,” deployed so that “supporters can contribute with just a single click,” which might mean that it’s easy, or might mean that supporters act on impulse, before what Daniel Kahneman calls their “slow thinking” can consider the matter carefully.
Has it ever been thus? Yes, surely. But the level of control and manipulation possible in the digital era exceeds what was possible before by an almost unfathomable extent. “Predictive analytics” and big data and many other tools hint at a means for manipulating the public in all sorts of ways entirely without their knowledge. These methods go far beyond manipulating emotions, and so focusing on the specific behavior modifications and effects achieved by this specific experiment strikes me as missing the point.
Some have responded to this story along the lines of Erin Kissane— “get off Facebook”—or Dan Diamond—“If Facebook’s Secret Study Bothered You, Then It’s Time To Quit Facebook.” I don’t think this is quite the right response for several reasons. It puts the onus on individuals to fix the problem, but individuals are not the source of the problem; the social network itself is. It’s not that users should get off of Facebook; it’s that the kind of services Facebook sells should not be available. I know that’s hard for people to hear, but it’s a thought that we have not just the right but the responsibility to consider in a democratic society: that the functionality itself might be too destructive (and disruptive) to what we understand our political system to be.
More importantly, despite these protestations, it isn’t possible to get off Facebook. For “Facebook” here read “data brokers,” because that’s what Facebook is in many ways, and as such it is part of a universe of hundreds and perhaps even thousands of companies (of which the most famous non-social media company is Acxiom) who make monitoring, predicting, and controlling the behavior of people—that is, in the most literal sense, political control—their business. As Julia Angwin has demonstrated recently, we can’t get out of these services even if we want to, and to some extent the more we try, the more damage the services do to us as individuals. Further, these services aren’t concerned with us as individuals, as Marginal Utility blogger and New Inquiry editor Rob Horning (among his many excellent pieces on the structural as opposed to the personal impact of Facebook, see “Social Graph vs. Social Class,” “Hollow Inside,” “Social Media Are Not Self-Expression,” and some of his pieces at Generation Bubble; it’s also a frequent topic on his Twitter feed; Peter Strempel’s “Social Media as Technology of Control” is a sharp reflection on some of Horning’s writing) and I and others have been insisting for years: these effects occur at population levels, as probabilities: much as the Obama campaign did not care that much whether you or your neighbor voted for him, but did care that if they sprayed Chanel No. 5 in the air one June morning, one of the two of you was 80% likely to vote for him, and the other was 40% likely not to go to the polls. Tufekci somewhat pointedly argued for this in the aftermath of the 2012 election:
Social scientists increasingly understand that much of our decision making is irrational and emotional. For example, the Obama campaign used pictures of the president’s family at every opportunity. This was no accident. The campaign field-tested this as early as 2007 through a rigorous randomized experiment, the kind used in clinical trials for medical drugs, and settled on the winning combination of image, message and button placement.
Further, you can’t even get off Facebook itself, which is why I disagree pretty strongly with the implications of a recent academic paper of Tufekci’s, in which she writes fairly hopefully about strategies activists use to evade social media surveillance, by performing actions “unintelligible to algorithms.” I think this only provides comfort if you are looking at individuals and at specific social media platforms, where it may well be possible to obscure what Jim is doing by using alternate identities, locations, and other means of obscuring who is doing what. But most of the big data and data mining tools focus on populations, not individuals, and on probabilities, not specific events. Here, I don’t think it matters a great deal whether you are purposely obscuring activities or not, because those “purposely obscured” activities also go into the big data hopper, also offer fuel for the analytical fire, and may well reveal much more than we think about intended future actions and behavior patterns, and also leave us much more susceptible than we know to relatively imperceptible behavioral manipulation.
Here it’s ironic that Max Schrems is in the news again, having just published a book in German called Kämpf um deine Daten (English: Fight for Your Data); Schrems is also spokesman for the Europe v. Facebook group that is challenging in European courts not so much the NSA itself as the cooperation between the NSA and Facebook. A recent story about Schrems’s book in the major German newspaper Frankfurter Allgemeine Zeitung (FAZ) notes that what got Schrems concerned about the question of data privacy in the first place was this:
Schrems machte von seinem Auskunftsrecht Gebrauch und erwirkte im Jahr 2011 nach längerem Hin und Her die Herausgabe der Daten, die der Konzern über ihn gespeichert hatte. Er bekam ein pdf-Dokument mit Rohdaten, die, obwohl Schrems nur Gelegenheitsnutzer war, ausgedruckt 1222 Seiten umfassten – ein Umfang, den im letzten Jahrhundert nur Stasi-Akten von Spitzenpolitikern erreichten. Misstrauisch machte ihn, dass das Konvolut auch Daten enthielt, die er mit den gängigen Werkzeugen von Facebook längst gelöscht hatte.
Here’s a rough English translation, with help from Google Translate:
Schrems exercised his right of access and, after a long back and forth, obtained in 2011 the data Facebook held on him. He got a PDF document with raw data which, although Schrems was only an occasional user, ran to 1222 printed pages—a scale that in the last century could have been reached only in the Stasi files of top politicians. What he found especially suspicious was that the documents also contained data that he had long since erased with the normal Facebook tools.
In fact, it’s probably even worse, both if we consider data brokers like Acxiom (who maintain detailed profiles on us whether we like it or not), or even Facebook itself, which it is reasonable to assume does just the same thing, whether we have signed up for it or not. And it is no doubt true that, as the great, skeptical data scientist Cathy O’Neil says over at her MathBabe blog, “this kind of experiment happens on a daily basis at a place like Facebook or Google.” This is the real problem; marking this specific project out as “research” and an unacceptable but unusual effort misses the point almost entirely. Google, Facebook, Twitter, the data brokers, and many more are giant research experiments, on us. “Informed consent” for this kind of experiment would have to be provided by the whole population, even those who don’t use social media at all, and the possible consequences would have to include “total derailing of your political system without your knowledge.”
(As an aside those who have gone out of their way to defend Facebook—see especially Brian Keegan and Tal Yarkoni—provide great examples of cyberlibertarianism in action, emotionally siding with corporate capital as itself a kind of social justice or political cause; Alan Jacobs provides a strong critique of this work.)
This, in the end, is part of why I find very disturbing Greenwald’s interpretation of Snowden’s materials, and his relentless attacks on the US government, and no less his concern for US companies only insofar as their business has been harmed by the Snowden information he and others have publicized. Political control, in any reasonable interpretation of that phrase, refers to the manipulation of the public to take actions and maintain beliefs that they might not arrive at via direct appeal to logical argument. Emotion, prejudice, and impulse substitute for deep thinking and careful consideration. While Greenwald has presented some—but truthfully, only some—evidence that the NSA may engage in political control of this sort, he blames it on the government rather than on the existence of tools, platforms and capabilities that do not just enable but are literally structured around such manipulation. Bizarrely, even Julian Assange himself makes this point in his book Cypherpunks, but it’s a point Greenwald continues to shove aside. Social media is by its very nature a medium of political control. The issue is much less who is using it, and why they are using it, than that it exists at all. What we should be discussing—if we take the warnings of George Orwell and Aldous Huxley at all seriously—is not whether the NSA should have access to these tools. If the tools exist, and especially if we endorse some form of the nostrum that Greenwald in other modes rabidly supports, that information must be free, then we have no real way to prevent it from being used to manipulate us. How the NSA (and Facebook, and Acxiom) uses this information is of great concern, to be sure: but the question we are not asking is whether it is not the specific users and uses we should be curtailing, but the very existence of the data in the first place.
It is my view that as long as this data exists, its use for political control will be impossible to stop, and that the lack of regulation of private companies means that we should be even more concerned about how they use it (and how it is sold, and to whom) than we are about what heavily-regulated governments do with it. Regardless, in both cases, the solution cannot be to chase after the tails of these wild monkeys—it is to get rid of the bananas they are pursuing in the first place. Instead, we need to recognize what social media and data brokerage do: they do a kind of non-physical violence to our selves, our polities, and our independence. It is time at least to ask whether social media itself, or at least some parts of it—the functionality, not the technology—is too antithetical to basic human rights and to the democratic project for it to be acceptable in a free society.