The Volkswagen Scandal: The DMCA Is Not the Problem and Open Source Is Not the Solution


The solution to the VW scandal is to empower regulators and make sure they have access to any and all parts of the systems they oversee. The solution is not to open up those systems to everyone. There is no “right to repair,” at least for individuals. Whether or not it deserves to be called a “freedom,” the “freedom to tinker” is not a fundamental freedom. The suggestion that auto manufacturers be forced to open these systems is wrongheaded at best and disingenuous at worst. We have every reason to think that opening up those systems would make matters worse, not better.


Volkswagen’s manipulation of the software that runs the emissions control devices in its cars has rightly produced outrage, concern, and condemnation. Yet buried in the responses have been two very different lessons—lessons that may at first sound very similar, but on closer examination are as different as night and day. Some writers have not been careful to distinguish them, but this is a huge mistake, as they end up embodying two entirely different philosophies that in most important ways contradict each other.

The two philosophies will be familiar to my readers:

  • The cyberlibertarian, cypherpunk, FOSS perspective: we can prevent future VWs by mandating all such software be available for inspection and even modification by users;
  • The democratic response: regulators like the EPA should have access to proprietary code like that in VW vehicles.

These two responses may sound similar, but they are radically different. One suggests that the wisdom of the crowd will result in regulations and laws being followed. The other puts trust in bodies specifically chartered and licensed to enforce regulations and laws.

One says—and this is everywhere in the discussions of the topic—that it’s the fault of the Digital Millennium Copyright Act (DMCA) and EPA’s resistance to the proposed grant of an exemption for users to access automobile software, an exemption for which the Electronic Frontier Foundation was a principal advocate. It is hard to find an account of this story that does not excoriate EPA for opposing that exemption and even blame that response for the long period of time it took to uncover VW’s cheating. Contained within that sentiment is the inherent view that regulators can’t do their jobs and that ordinary citizen security researchers can and will do it better; this typical cyberlibertarian contempt and narcissistic lust-for-power is visible in much of the discourse adopting this view. This, by far, has been the dominant response to the story. The only real challenge to this narrative has come from the estimable Tom Slee, whose “Volkswagen, IoT, the NSA and Open Source Software: A Quick Note” is, along with Jürgen Geuter’s “Der VW-Skandal zeigt unser Vertrauensproblem mit Software” (approximately: “The VW Scandal Demonstrates Our Trust Problem with Software”), the best thing I’ve read on the whole scandal, and with which I am in strong agreement.

The other says that certain social functions are assigned to government, for good reason. From this perspective, one might want to look at the massive defunding of regulatory agencies and the political rejection of regulation engineered not just by the right in general but by the digital technology industries themselves as a huge part of the problem. Critically, from this perspective, the DMCA has nothing to do with this issue at all. Regulators can and do look inside products that are covered by trade secrecy and other intellectual property agreements. They have to.

These aren’t just abstract differences. They embody fundamentally different ways of seeing the world, and in how we want the world to be organized.

I think the first view is incoherent. It says, on the one hand, we should not trust manufacturers like Volkswagen to follow the law. We shouldn’t trust them because people, when they have self-interest at heart, will pursue that self-interest even when the rules tell them not to. But then it says we should trust an even larger group of people, among whom many are no less self-interested, and who have fewer formal accountability obligations, to follow the law.

If anything, history shows the opposite. The more we make it optional to follow the law—and to be honest, the nature of an “optional law” is about as oxymoronic as they come, but it is at the core of much cyberlibertarian thought—the more we put law into the hands of those not specifically entrusted to follow it, the more unethical behavior we will have. Not less. That’s why we have laws in the first place—because without them, people will engage even more in the behavior we are trying to curtail.

Now consider the current case. What the cyberlibertarians want, even demand, is for everyone to have the power to read and modify the emissions software in their cars.

They claim that this will eliminate wrongdoing. In my opinion, and there is a lot of history to back this up, it will encourage and even inspire wrongdoing.

This is where the cyberlibertarian claim turns into pure fantasy, of a sort that underlies much of their thinking in general. Modifying cars has a significant history. You don’t need to go far to find it.

Show me the history of car owners modifying their cars to meet emissions and safety standards when they don’t otherwise meet them.

[Image: VW Super Beetle (1972)]

Because what I’ll show you is that the overwhelming majority of car modifications, insofar as they deal with regulatory standards, are performed to bypass standards like those and many others.

You don’t have to read far in the automotive world to see how deeply car owners want to bypass those standards, in the name of performance and speed. You’d have to read much farther and much deeper to find evidence of automobile owners selflessly investigating whether or not their vehicles are meeting mandates.

Not only that: we don’t have to look far to find this pattern directly regarding diesel Volkswagens. A recent story on the VW scandal at the automotive-interest site The Truth About Cars notes that

the aftermarket community has released modifications for the DPF and Adblue SCR systems long before there was any talk of reduced power and economy coming from a potential fix for the emissions scandal. They looked to gain more power and better fuel economy by modifying or deleting the DPF system.

These “aftermarket tuner” kits for the DPF and Adblue SCR systems have to be marketed “as off-road only as they violate federal emissions laws.” These are the selfless regulation-focused folks we should rely on to protect our environment? Seriously?

In fact, EPA has already studied the specific question of software modification to emissions systems (which is part of what makes me wonder whether those who have excoriated EPA’s response have actually read the letter):

Based on the information EPA has obtained in the context of enforcement activities, the majority of modifications to engine software are being performed to increase power and/or boost fuel economy. These kinds of modifications will often increase emissions from a vehicle engine, which would violate section 203(a) of the CAA (Clean Air Act), commonly known as the “tampering prohibition.” (2)

It is beyond ironic that this scandal has been taken to demonstrate that “we need to open up the Internet of Things,” or that “DMCA exemptions could have revealed Volkswagen fraud,” or that the scandal makes clear the “dangers of secret code.” I would argue that the lesson is entirely different: people will cheat. Making it easy for people to cheat means they will cheat more. Regulators need access to the code that runs things, but precisely because people will cheat, ordinary people should not have that access. They should not have access to the code that runs medical devices, to the code that runs self-driving cars, to the code that runs airplanes, or to the code that controls security systems in our houses.

Rather than showing that EPA was wrong to oppose the DMCA exception and that people like Eben Moglen are right about opening up proprietary software, we would do better to observe what he himself said about the elevator that the New York Times writes about in their paean to him and his work. That story begins and frames itself around a discussion of elevator safety. Here is the elevator anecdote in its entirety, from a 2010 talk by Moglen:

In the hotel in which I was staying here, a lovely establishment, but which I shall not name for reasons that will be apparent in a moment, there was an accident last week in which an elevator cable parted and an elevator containing guests in the hotel plummeted from the second story into the basement. When you check in at the hotel you merely see a sign that says “We are sorry that this elevator is not working. And we are apologetic about any inconvenience it may cause.” I know that the accident occurred because a gentleman I met in the course of my journey from New York to Edinburgh earlier this week was the employer of the two people who were in the car. And in casual conversation waiting for a delayed airplane the matter came out. I have not, I admit, looked into the question of elevator safety regulation in the municipality. But in every city in the world where buildings are tall (and they have been tall here in proportion to the environment for longer than they have in most parts of the world) elevators safety is a regulated matter, and there are periodic inspections and people who are independent engineers, working at least in theory for the common good, are supposed to conduct such tasks as would allow them to predict statistically that there is a very low likelihood of a fatal accident until the next regular inspection.

While it is taken as an argument for user access to the code that runs elevators, it is actually anything but. It is an argument for regulators having access to that code, period. I do not want the hackers in my building to have access to the elevator code, and neither should you. I do not want them to have access to the code in voting machines.

Moglen made this remarkable statement in the New York Times article:

If Volkswagen knew that every customer who buys a vehicle would have a right to read the source code of all the software in the vehicle, they would never even consider the cheat, because the certainty of getting caught would terrify them.

I don’t know about terror, but I would be distinctly concerned, as EPA is, that this “right” would mean a regime of emissions cheating by individuals that would not only far outflank what Volkswagen has apparently done, but, by dint of its being realized in a thousand different schemes for software alteration, would make those modifications virtually impossible to check. What is particularly striking is that this reasoning, which builds on obvious, well-understood facts, could be jettisoned in favor of an idealistic and obviously false view of human political conduct for which virtually no evidence can be generated.

In fact, to the degree that we have evidence, we know that the opposite is true: Linux, Android, and many other open source projects are routinely attacked by hackers, while the Apple iPhone operating system—contained in its famous “walled garden”—continues to be one of the safest software environments around. (Reports have indicated that up to 97% of all mobile malware is found on the open source Android system.) Contrast this with “jailbroken” iOS software, which is pretty much the best way to ensure that your iPhone gets malware. (In fact, just this week we have the first-ever report of malware on iPhones that aren’t jailbroken.) Really? Opening things up protects us? Who’s zooming who?

Not only that: we have plenty of evidence that even in small, isolated cases that are critical to security and that thousands of coders care deeply about (I am specifically thinking of OpenSSL, as Geuter discusses in his article), open source still does not produce secure products—certainly no more secure than closed source does.

All of this should really raise questions about the motivations behind the demand that security software be opened: I think that, as in this case, a selfless desire to improve the world for everyone motivates at best some of those involved. Just as prominent—perhaps more prominent—is an egotistical drive to control and to deny the legitimacy of any authority but oneself. That attitude is exactly the one that leads to regulation-flouting modifications and the production of malware, not to combating them. Like everything else in the world of the digital, it relies on an extremely individualistic, libertarian conception of the self and its relationship to society.

One additional thing that I find a bit dispiriting about this is that one of the best books to come out in recent years about digital politics, Frank Pasquale’s Black Box Society, is specifically focused on the question of technological and particularly algorithmic “black boxes.” Pasquale specifically argues that regulators must be given not just the power (some of which they already have, much of which—for example in the case of algorithms used by Facebook, Google, and Acxiom—they do not) but also the capability (which means resources) to see into these algorithmic systems that affect all of us. Pasquale makes a long and detailed argument and an impassioned plea for a “Federal Search Commission,” parallel to the FDA and EPA, that would be able to see into important technologies whether or not they are protected by trade secrets. Pasquale has been suggesting this for a very long time. He is among the most prominent legal theorists addressing these issues. How is it that when an event occurs that should cause at least some well-informed commentators to show how it validates that thought, virtually nobody does? And worse: the New York Times actually writes a story saying that the scandal validates the work of Eben Moglen, who might well be thought of as the political opposite of Pasquale—despite the fact that Moglen’s apparent version of this story makes very little sense and contradicts his own analysis of similar situations.

That is part of why cyberlibertarianism must be understood as an ideology, not an overt political program. Like all ideologies, it twists issues into parodies of themselves in order to advance its agenda, even when the facts point in exactly the opposite direction.


In a response to this piece on Twitter, Jürgen Geuter said that he read me to be saying that closed source is more secure than open source. I don’t mean to be saying that; I mean to make only the more modest claim that open source is not inherently more secure than closed source. As for which is more secure, I am not sure that question has an answer on the open/closed axis, but I really don’t know. In fact I take that to be part of the lesson of Geuter’s excellent recent piece in Wired Germany that I link to above. The number of people who can accurately evaluate any project for its total security profile is somewhere between “very small” and “nonexistent.” The Android vs. iOS example I use is meant only to show that open source projects are not inherently more secure; there is much more to that story than open vs. closed, and of course iOS is itself at least partly based on the Free Software Unix operating system BSD, as are other Apple operating systems. But it is fair to say that Apple’s “walled garden” has long been a target of ire from the developer community, and yet has been one of the most secure platforms available–at least so far. I do draw from this fact the conclusion that the demands by FOSS advocates that all systems be opened because it will make them more secure are at best unfounded and at worst dishonest–dishonest because a significant number of people in those communities want that access not to increase security but specifically to learn how to defeat it more easily. And I do think, whether it makes the systems as a totality less secure or not, that exposing the complete internals of systems to everyone gives attackers an informational advantage no matter how you slice it.


Encryption and Responsibility: A Note on Symphony

Typically, those of us concerned about the widespread use of encryption and anonymization technologies like Tor are depicted by crypto advocates as “anti-encryption” or “freedom haters” or “mind-murdering censors” or worse. Despite the level of detail these people can bring to technological matters, they often portray the political options as very stark: either “encryption” or “no encryption.” Like so many other things today, it can be like arguing with the proverbial wall to get our opponents to see that we do not want “no encryption.” All encryption and anonymization schemes are not the same. We don’t want encryption not to be used. We want encryption (and anonymization) schemes that make sense. We want them to be used responsibly.

Finally we have a fairly clear case at which we can point to make this clear. Over the past year, a coalition of investment banks have been working on a comprehensive secure communications package called Symphony. The makers of Symphony state that the software provides “a platform for communities of financial services professionals to communicate securely and efficiently using compliant standards and end-to-end encryption.”

Let me be as clear as possible: I think this is a good idea. It is necessary. It is appropriate.

But the Symphony marketing literature included some statements that made people like me worry, for just the reason I worry about services like Tor. It looked as if the service was structured not just to protect the integrity of bank communications, but to hide them from regulators. The marketing language distinguished Symphony from previous communication tools for the financial industry that were “limited in reach and effectiveness by strict regulatory compliance,” while Symphony would “prevent government spying” and would “guarantee that data deletion is permanent.” These promises go well beyond encryption per se.

This caused the New York Department of Financial Services (NYDFS), one of the major regulators of the financial industry, and Senator Elizabeth Warren, one of the leading consumer advocates in the US, to raise concerns about Symphony. NYDFS has to be able, in the proper legal context, to see any and all communications in which the banks it regulates engage. They do not and should not need warrants to see those communications. Even the “legal fiction” of corporations being persons does not go so far as to grant them, qua corporation, the full protection of the Bill of Rights. Regulators can, do, and must examine corporate communications according to the regulatory rules in place, not under criminal or even civil warrant. The companies exist according to certain rules with which they agree to comply, including regulatory oversight. That is the law. It is a good law. In many cases, the application of this law is the only thing that has uncovered major misdealing in the financial industry, including, as both Warren and NYDFS point out, the Libor price-fixing scandal. If anything, “freedom” as I understand it requires much more thorough and rigorous oversight of the financial industry, not less. Among other things, banks are not, in general, allowed to delete any of the data generated in the course of doing business, in order that regulators can backtrack through their actions to ensure compliance.


Despite Symphony appearing to advertise itself specifically as a way to bypass regulatory oversight, at least one well-known crypto advocate attacked Warren for daring to question any part of the Symphony system, as if regulatory oversight of corporations were an affront to “freedom,” and as if the use of encryption were such an absolute right, even for corporations, that a consumer advocate as widely praised as Warren could have her integrity called into question for saying anything that even smacked of concern about encryption.

Well, now we have a resolution to this story, one that I hope gives clarity to what “people like me” want, and why encryption is something we should be concerned about while at the same time not wanting to eliminate it. On Monday, NYDFS announced a settlement agreement with Symphony and four of the banks sponsoring it. The agreement allows the project to move forward almost as originally proposed, with the following provisos, relating specifically to what concerned both NYDFS and Elizabeth Warren:

  • Symphony will retain for seven years a copy of all e-communications sent through its platforms to or from the four banks;
  • The four banks will store duplicate copies of the decryption keys for their messages with independent custodians (i.e., custodians not controlled by the banks).

Among the many interesting things about this development is that the second point constitutes a form of key escrow. Key escrow is one of the technologies that crypto advocates frequently dismiss as destructive of security; one of the most prominent and reasonable crypto advocates, Matthew Green of Johns Hopkins University, is no fan of key escrow. I have so far found the arguments against it unpersuasive, in part because they take such a big-picture view of the world that they suggest there might be one giant escrow authority holding all the keys to everything. Here, although the details haven’t been made public, Symphony and the banks appear to have agreed to create an escrow authority specific to their software platform. Perhaps that will introduce vulnerabilities into their system; perhaps not. We have a good test case from which to observe. Observing from the outside, it is hard to believe that Symphony and the banks would have agreed so quickly to something that the numerous cryptography experts on Symphony’s (and the banks’) payrolls thought made them vulnerable. If this works, as I suspect it will, we have a model that might be applicable elsewhere.
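The settlement’s technical details are not public, but the basic structure of key escrow can be sketched in a few lines. The sketch below is purely illustrative: the XOR “cipher,” the message ID, and the `escrow_custodian` dictionary are stand-ins I have invented for illustration, not anything Symphony actually does. The point is only the shape of the arrangement: messages stay encrypted end to end, but a duplicate of each decryption key sits with a custodian independent of the bank, so a regulator with legal authority can read the message without the bank’s cooperation.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'cipher' -- illustrates the escrow structure only;
    it is not a real encryption scheme."""
    return bytes(d ^ k for d, k in zip(data, key))

# A bank encrypts a message with a fresh per-message key.
message = b"trade confirmation"
key = secrets.token_bytes(len(message))
ciphertext = xor_cipher(key, message)

# Key escrow: a duplicate of the key is deposited with an independent
# custodian (one not controlled by the bank), indexed by message ID.
escrow_custodian = {"msg-001": key}

# Later, a regulator acting with proper legal authority retrieves the
# escrowed key copy and recovers the plaintext without the bank's help.
recovered = xor_cipher(escrow_custodian["msg-001"], ciphertext)
assert recovered == message
```

The design question the crypto advocates raise is where that custodian sits and who can compel it; the Symphony arrangement answers it narrowly, with a custodian specific to one platform rather than a universal escrow authority.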

This agreement sounds like exactly what I hope for. Encryption is widely used to secure communications in an appropriate fashion. But it is not deployed so as to put the powerful, especially corporations, above the law.

One thing this shows is that all encryption and anonymization schemes are not the same. Responsible encryption schemes are not just welcome; they are necessary. But irresponsible encryption schemes really do threaten fundamental political principles, especially including the rule of law. Despite the fact that many crypto advocates appear to strongly endorse it, I remain very concerned about Apple’s iMessage encryption, which is designed to make the service of all warrants impossible, and which the New York Attorney General and others have claimed has blocked a variety of fully-legal warrants in the few months since it has become available (Matthew Green has posted some comprehensive discussions of the iMessage system, though I think he gives too much credence to the crypto advocates’ typical excessively paranoid skepticism toward all statements made by law enforcement officers). Many crypto advocates have promoted this system for reasons that I find incomprehensible within our system of constitutional governance. All encryption is not the same. Symphony may be a responsible encryption scheme, while iMessage may not be.

It is hard for me not to wonder whether NYDFS and others have noticed the part of Tor’s promotional materials where they boast that “business executives use Tor.” The arguments for using Tor inside businesses have never made much sense to me, since most businesses are required, contractually and/or legally, to be aware of and record all relevant communications that take place under their name.

Thus, when what must be a luddite and technophobic company called IBM recommended in August that businesses should block Tor, one of Tor’s original developers weighed in on this discussion on the Tor-Talk list, not to take IBM’s concerns seriously, but instead to point out reasons why “your company would have a reason for you to use Tor.”

This is exactly one of the main things that has worried me about Tor all along. Most of the people involved with the Tor projects have become political advocates, pressing hard for one side of a debate that should be nuanced and of which the other side should be taken very seriously. (Aside to certain people who have asked: when I use the term “politics” like this I mean it in the sense used by political scientists and other academics: matters that affect the arrangements of power that structure society and social institutions. I do not mean “politics” in the sense of being a Republican or Democrat, although the kinds of politics I’m talking about certainly can have consequences for these more formal party politics). Personally, I hope that NYDFS decides to look into the use of Tor by the same banks that use Symphony; exactly for the reason that Symphony has to be configured so as to fit into sensible regulatory requirements, Tor, which cannot (as far as I know) be configured in this way, or at least does not come that way out of the box (i.e., in the Tor Browser Bundle), should be blocked by banks, and by all corporations that are subject to regulatory oversight. So should all tools that enable unrecordable, undecryptable electronic transactions (which goes far beyond “communication”). The fact that spoken word conversations not held on the phone are not recordable (but also not encrypted) does not somehow entail that we should, let alone that we must, proliferate tools that expand this capability over distance and time. Nobody who loves “freedom” should want corporations to conduct their business outside the law.


Right Reaction and the Digital Humanities

A while back, I had an encounter that struck me at the time, and continues to strike me, as perfectly emblematic of the Digital Humanities as an ideological formation. While it includes a kind of brutal incivility that I associate with much of the politics that persists very near the “nice” surface of DH (of which one needs no more shocking example than the recent deeply personal and brutally mocking responses by two people I had thought were her close friends and colleagues to a perfectly reasonable and non-personal piece by Adeline Koh), I try to avoid such directly uncivil tactics if I can, and so I have deliberately let a significant amount of time pass so as to remove as much as possible the appearance of personal attack in writing this up. I have also omitted a significant amount of information so as to (hopefully) obscure the identities or institutional affiliations or even professional specializations of the persons involved, including avoiding all pronouns that would reveal the gender of any of the speakers, as I am much less interested in criticizing one individual than in showing how this person’s conduct represents a significant part of the psychology and politics that drives parts of DH.

The bare bones of the story are as follows. I am the co-leader of a “Digital Humanities and Digital Studies Working Group” (DHDS) at Virginia Commonwealth University (VCU), where I teach in the English Department. The group usually proceeds by reading and discussing texts, although sometimes we look at projects, consider projects by group participants, and have visits from outside speakers. Recently, a DHDS meeting was scheduled with a group of speakers who had been invited to campus for other reasons. These speakers included fairly well-known members of the DH community. One of them, to whom I’ll refer as A, occupies a position of some significant seniority and power in that community. (The other speakers, who don’t play much of a role in what follows, I’ll refer to as B and C.) Nevertheless, I had never met, talked to, or read anything by A prior to this meeting, in part because A has published very little about DH.

The meeting was attended by the group of speakers, a few faculty members from VCU, and a half-dozen PhD students from the MATX program in which I am a core faculty member, all of whom I had worked with or was currently working with in some form or another.

The meeting began with the convener who had organized this event asking me to speak about a symposium we held at VCU a few years ago called “Critical Approaches to Digital Humanities,” about my own experience in DH, and about the overall course of discussions we had had to that point in the DHDS working group. I spoke for just over 5 minutes. I gave a brief overview describing how I came to the views I hold and how the symposium came into being. My main focus was my own experience: I mentioned that I was one of the first two people (along with Rita Raley of UCSB) hired as a “digital humanist” in the country and that despite being employed as a “Digital Humanities professor” since 2003, and despite a large number of projects and publications, my name does not occur in any of the typical journals, conferences, lists, organizations, etc. I described my view, familiar to those who know my work or me, that DH is at least as profitably seen as a politics as it is as a method, and that as a politics its function has been to unseat other sites of authority in English departments and to establish alternate sites of power, in no small part to keep what I broadly call “cultural studies of the digital” out of English departments, and generally to work against cultural studies and theoretical approaches while not labeling itself as such. I discussed how frequently I am published in forums devoted to debating the purpose of DH, but that as far as DH “proper” goes, the unacceptability of my work to that community has been a signal and defining part of my career—despite my continuing to be employed as a professor of DH.
Needless to say, it was clear that none of A, B, or C had ever heard of me or read anything I’ve written, which is fine: for just the reasons that make me so skeptical of DH as an enterprise, the main part of my work is not the sort of stuff that interests DHers, although it does seem to be of significant interest to those who see studying the digital per se to be important, which I am of course glad about.

B and C first responded to what I’d said for a while, saying something positive about the concerns I’d raised.

Then A started talking, with a notably hostile tone, which I found remarkable in itself given that A was in part my guest and that I’d said nothing whatsoever directed at or about A (it’s also worth noting that A is not in English). “I have to take issue with what David has said,” A said. “DH is not a monolith.” I hadn’t described it as a monolith (I had said it is profitably viewed as a politics as well as a method) and as usual the point of this familiar claim wasn’t clear (“not a monolith” suggests that my critique is valid for some parts of DH, but that there are others of whom it isn’t true; but A went on immediately, as do almost all of those who use the “not a monolith” response, to dispute every allegation I’d made across the board), except that I was very wrong. Yet the disrespect and hostility emanating from A were palpable. So was A’s complete dismissal of my own reports of my experience, and perhaps even more stunningly, of my own work as a scholar, with which A was clearly entirely unfamiliar, but whose quality A had already assessed based on my brief story.  I saw my students looking at me with jaws agape—they had heard my skepticism and critique of DH many times, of course, and a couple of them had seen something of what was on display here, but as several of them said to me later, they had never seen it in action as a political force, where the excess of emotion and the brute point of the emotion (in some sense to shut me down or disable my line of critique without engaging it directly) were so readily visible.

Some of the other points A made that I took notes on at the time: “I don’t accept analyses based on power,” apparently meaning that any analysis of any situation that looks at political power is inherently invalid, a claim I found not only remarkable coming from a humanist of any discipline, but also one that we typically hear only from the political right, which is to say, the party that benefits from its alignment with power, an alignment it often tries to downplay even as it benefits from it.

“The grants awarded by ODH [the Office of Digital Humanities at the National Endowment for the Humanities] are not uniform” (I had pointed out that ODH exclusively or near-exclusively funds tools-and-archives, a point that I am not alone in making and that I wrote up in a blog post with detailed analysis of one year’s awards). Interestingly, either B or C chimed in at this point to say that actually they agreed that the awards were remarkably uniform in their focus on tools and archives, the point I was making.

To this I responded, “yes they are, and I’ve done a statistical analysis that shows it. There have never been any grants awarded for critical study.”

A replied: “they aren’t uniform, and it is their prerogative to decide what to fund. And as we just saw [referring to a single recently published article on big data] statistics aren’t reliable.” (I really struggled not to laugh at this point: a DHer committed to quantitative analysis so angry at me as to argue that statistics as a whole are not valid? But it happened; there are even witnesses.) I tried to point out that we were not dealing with sampling (aka the usual meaning of “statistics”) but with an analysis of the complete list of all ODH grants for a single year, and a briefer examination of all the grants for other years, in which virtually no grants are devoted to “theory” or cultural studies as such, or to critical analysis of the digital. A waved this off with a flick of the hand and a pronounced sneer. (Interestingly, this was one point where either B or C interjected a bit in my favor, opining that there is a uniformity to the grants along the lines I suggested and that they are unprecedented, but A was unmoved.)

I asked: “is it the prerogative of funding agencies to provide unprecedented amounts of funding [A shakes head vehemently no, to disagree that they are unprecedented] for projects not requested by the units themselves?” A replied: “they aren’t unprecedented.” I insisted that they are and asked for the precedent to which A was referring, and A rejected my question as inappropriate without giving any actual response.

I said that despite the general truth that DH is not a “monolith,” there is a “core agenda” or view of “narrow DH” that is widely understood in the field, often referred to as “tools and archives.” I referred to the Burdick et al. Digital_Humanities volume as a recent example that explicitly sets out this single-focused agenda for DH. A interrupted again, angrily, dismissing my comments, insisting that “that volume has been widely discredited” and that the “narrow” view of DH was incorrect.

Toward the end of the conversation one of the more telling exchanges occurred. I had noted that “a main point of DH has been to reject the role of theory in English Departments, and it has been successful.” A replied quite a bit later, as if the comment had struck some kind of nerve: “the one thing I agree with David about is that DH is opposed to theory,” making it clear that this was a very good thing.

One dynamic that is worth pausing over: B and C are both relatively well-known members of the DH community. Not only were they visibly shocked by A’s conduct, but they both several times made comments in which they tried to “heal the breach” by granting that certain parts of my critique were probably right, and several times explicitly endorsed some of my specific comments. Yet anyone sitting in that room, no doubt including B & C themselves, walked away seeing the conflict between A and me as the thing that was happening, as the main political event. To me, B & C stand for all those perfectly well-meaning DHers who are not themselves directly invested in its poisonous political agenda. I do not resent the fact that B & C could not repair the event more fully. But I think they are emblematic of the role played by all those in the DH community who don’t understand or endorse or take seriously what I have tried for years to explicate as its politics. They are, broadly speaking, ineffective, and as such, end up adding gravitas to the power of those with an agenda. Their level of conviction and commitment, especially politically speaking, is far shallower than that of those who really do care. My impression, which may be self-serving, was that B & C were actually more inclined to take my statements seriously because of the wide-ranging and inexact vitriol of A’s performance; at some level I hope that the level of attack directed at those of us who dare to try to locate the politics of DH might inspire others of reasonable mind to do the same.

Then in the evening we had a series of talks by the guests. It will surprise no one to know that A’s paper (composed, I am 99% sure, prior to the events of the day) explicitly and at length endorsed exclusively the tools-and-archives, “narrow” definition of DH that A had strenuously attacked me six hours earlier for suggesting was the core of DH. A seemed not to recognize at all that this contradicted what A had so vehemently stated hours before. It even sounded like DH was a monolith after all, which I found a bit shocking.

I let this post sit for quite a while, though I took notes at the time for the purposes of writing it up. What I found remarkable about the encounter was the way that, as I have seen many times, any critique of DH in general receives what I take to be a typically rightist form of reaction. First, hostility and belittling of the target; then, absolute rejection of anything the target says, typically without even having heard what that was; then, an assertion of positive principles that, more often than not, actually endorses the substance of the critique, but with the added affirmation that what is being done is correct. This is the same pattern I encounter when I criticize Tor, or bitcoin, or cyberlibertarianism. I am an idiot; I am wrong for saying these things tend to the right; I don’t understand what the right is; actually, the right is correct, and these things should tend to the right–and despite this being my original thesis, I am completely wrong. I see that as part of the rightward tilt that is endemic to digital technology, absent careful and vigilant attention to one’s political means and ends. “The digital” is strongly aligned with power. Power and capital in our society are inextricably linked, and in many ways identical. Strongly identifying with “the digital” almost always entails a strong identification with power. That identification works particularly well, as do all reactionary ideological formations, by burying that identification under a facade of neutrality. “I reject political analyses,” this position says, while enjoying and strongly occupying the position of power which it currently inhabits itself. Much like Wikipedia editors or GamerGate trolls, this simultaneous embrace of and disavowal of power is key to the maintenance of rightist political formations.


Crowdforcing: When What I “Share” Is Yours

Among the many default, background, often unexamined assumptions of the digital revolution is that sharing is good. A major part of the digital revolution in rhetoric is to repurpose existing language in ways that advantage the promoters of one scheme or another. It is no surprise that while it may well have been the case that to earlier generations sharing was, in more or less uncomplicated ways, good, the rhetorical revolution works to dissuade us from considering whether the ethics associated with earlier terminology still apply, telling us instead that if we call it sharing, it must be good.

This is fecund ground for critics of the digital, and rightly so. Despite being called “the sharing economy”—a phrase almost as literally oxymoronic as “giant shrimp,” “living dead” or “civil war”—the companies associated with that practice have very little to do with what we have until now called “sharing.” As a rule, they are much more like digital-sharecropping operations such as Facebook than their promotional materials tell us, charging rent on the labor and property of individuals while centralized providers make enormous profits on volume (and often enough by offloading inherent employer costs to workers, while making it virtually impossible for them to act as organized labor). Of course there is a continuum; there are “sharing economy” phenomena that are not particularly associated with the extraction of profit from a previously unexploited resource, but there are many others that are specifically designed to do just that.

One phenomenon that has so far flown under the radar in discussions of peer-to-peer production and the sharing economy but that demands recognition on its own is one for which I think an apt name would be crowdforcing. Crowdforcing in the sense I am using it refers to practices in which one or more persons decides for one or more others whether he or she will share his or her resources, without the other person’s consent or even, perhaps more worryingly, knowledge. While this process has analogs and has even itself occurred prior to the digital revolution and the widespread use of computational tools, it has positively exploded thanks to them, and thus in the digital age may well constitute a difference in kind as well as amount.

Once we conceptualize it this way, crowdforcing can be found with remarkable frequency in current digital practice. Consider the following recent events:

  • In a recent triumph of genetic science, a company called DeCode Genetics has created a database with the complete DNA sequences for all 320,000 citizens of Iceland. Slightly less noted in the news coverage of the story is that DeCode collected actual tissue samples from only about 10,000 people, and then used statistical techniques to extrapolate the remaining 310,000 sequences. This is not population-level data: it is the full genetic sequence of each individual. As the MIT Technology Review reported, the study raises “complex medical and ethical issues.” For example, “DeCode’s data can now identify about 2,000 people with the gene mutation across Iceland’s population, and [DeCode founder and CEO Kári] Stefánsson said that the company has been in negotiations with health authorities about whether to alert them.” Gísli Pálsson, an anthropologist at the University of Iceland, is reported as saying that “This is beyond informed consent. People are not even in the studies, they haven’t submitted any consent or even a sample.” While there are unique aspects to Iceland’s population that make it particularly useful for a study like this, scientists have no doubt that “This technique can be applied to any population,” according to Myles Axton, chief editor of Nature Genetics, which published some of DeCode’s findings. And while news coverage of the story has dwelt on the complex medical-ethics issues relating to informing people who may not want to know of their risk for certain genetic diseases, this reasoning can and must be applied much more widely: in general, thanks to big data analytics, when I give data to a company with my overt consent, I am often sharing a great deal more data about others to whom I am connected without even their knowledge, let alone any kind of consent. We can see this in the US on popular sites like 23andMe, where the explicit goals often include deriving specific inferential data about people who are not using the product.
The US itself is in the process of constructing a genetic database that may mimic the inferential capacities of the Icelandic one. Genetic information is one of the better-regulated parts of the data sciences (though regulation even in this domain remains inadequate), and yet even this regulation seems to have an impoverished vision of what is possible with this data; where do we look for constraints placed on this sort of data analysis in general?
  • Sharing pictures of your minor children on Facebook is already an interesting enough issue. Obviously, you have the parental right to decide whether or not to post photos of your minor children, but parents likely do not understand all the ramifications of such sharing for themselves, let alone for their children, not least since none of us know what Facebook and the data it harvests will be like in 10 or 20 years. Yet an even more pressing issue occurs when people share pictures on Facebook and elsewhere of other people’s minor children, without the consent or even knowledge of those parents. Facebook makes it easy to tag photos with the names of people who don’t belong to it. The refrain we hear ad nauseam—“if you’re concerned about Facebook, don’t use it”—is false in many ways, among which the most critical may be that those most concerned about Facebook, who have therefore chosen not to use it, may thereby have virtually no control over not just the “shadow profile” Facebook reportedly maintains for everyone in the countries where it operates, but even what appears to be ordinary sharing data that can be used by all the data brokers and other social analytic providers. Thus while you may make a positive, overt decision not to share about yourself, and even less about the minor children of whom you have legal guardianship, others can and routinely do decide you are going to anyway.
  • So-called “Sharing Economy” companies like Uber, Lyft, and particularly in this case AirBnB insist on drawing focus to the population that looks most sympathetic from the companies’ point of view: first, the individual service providers (individuals who earn extra money by occasionally driving a car for Uber, or who rent out their apartments when out of the city for a few weeks), and second, the individual users (those buying Uber car rides or renting AirBnB properties). They work hard to draw attention away from themselves as companies (except when they are hoping to attract investor attention), and even more strongly away from their impact on the parts of the social world affected by their services—in so far as these are mentioned at all, it is typically with derisive words like “incumbent” and in contexts where we are told that the world would beyond question be a better place if these “incumbents” would just go away. One does not need to look hard at the astroturf industry lobbying groups disguised as grassroots quasi-socialist advocacy organizations to find paean after paean to the benefits brought to individuals by these giant corporate middlemen. (More objectively, much of the “sharing economy” looks like a particularly broad case of the time-honored rightist practice of privatizing profits while “socializing” losses.) One has to look much harder—in fact, one will look in vain—to find accounts of the neighbors of AirBnB hosts whose zoned-residential properties have turned into unregulated temporary lodging facilities, with all of the attendant problems these bring.
One has to look even harder to find thoughtful economic analyses that discuss what the longer-term effects are of housing properties routinely taking in additional short-term income: it does not take much more than common sense to realize that the additional income flowing in will eventually get added to the value of the properties themselves, ultimately pricing the current occupants out of the properties in which they live. The impact of these changes may be most pronounced on those who have played no role whatsoever in the decision to rent out units on AirBnB—in fact, in the case of condominiums, the community may have explicitly ruled out such uses for exactly this reason, and yet, absent intensive policing by the condo board, may find their explicit rules and contracts being violated in any number of ways. And condo boards are among the entities with the most power to resist these involuntary “innovations” on established guidelines; others have no idea they are happening. As Tom Slee’s persistent research has shown, AirBnB has a disturbing relationship with what appear to be a variety of secretive corporate bodies who have essentially turned zoned-residential properties into full-time commercial units, which not only violates laws that were enacted for good reason, but also promises to radically alter property values for residents using the properties as the law currently permits.
  • Related to the sharing of genetic information is the (currently) much broader category of information that right now we call the quantified self. In the largest sense we could include in this category any kind of GTD or to-do list app, calorie and diet trackers, health monitors, exercise and fitness trackers, monitoring devices such as Fitbit and even glucose monitors, and many more to come. On the one hand, there are certainly mild crowdforcing effects of collecting this data on oneself, just as there are crowdforcing effects of SAT prep programs (if they work as advertised, which is debatable) and steroid usage in sports. But when these data are collected and aggregated—whether by companies like Fitbit or even user communities—they start to impact all of us in ways that only seem minor today. When these data get accessed by brokers who sell to insurers, or by insurers themselves, they provide ample opportunities for those of us who choose not to use these methods at all to be directly impacted by other people’s use of them, from whether health insurers start to charge us “premiums” (i.e., deny us “discounts”) if we refuse to give them access to our own data, to inferences made about us based on the data they do have run through big data correlations with the richer data provided by QS participants, and so on.
  • The concerns raised by practitioners and scholars like Frank Pasquale, Cathy O’Neil, Kate Crawford, Danielle Citron, Julia Angwin and others about the uses of so-called “big data analytics” resonate at least to some extent with the issue of crowdforcing. Big data providers create statistical aggregates of human behavior based on collecting what are thought to be representative samples of various populations. They cannot create these aggregates without the submission of data from the representative group. Those willing to submit their data may not themselves understand the purposes to which their data is being put. A simple example of this is found in the so-called “wellness programs” that are becoming increasingly attached to health insurance. In its most common formulation, insurers offer a “discount” to those customers willing to submit to a variety of tests, data collection routines, and to participate in certain forms of proactive health activities (such as using a gym). Especially in the first two cases, it looks to the user like the insurer is incenting them to take tests that may find markers of diseases that can be easily treated in their early stages and much less easily treated if caught later. Regardless of whether these techniques work or not, which is debatable, the insurers have at least one other motive in pushing these programs, which is to collect data on their customers and create statistical aggregates that affect not just those who submit to the testing, but their entire base of insured customers, and even those it does not currently insure.
The differential pricing model that insurers call “discounts” or sometimes “bonuses” (but rarely “penalties,” which, financially speaking, they also are: it is another characteristic of the kinds of marketing rhetoric swamping everything in the digital world that literally the same practice can appear noxious if called a “penalty” but welcome if called a “discount”) seems entirely at odds with the central premise of insurance, which is that risk is distributed across large groups regardless of their particular risk profile. Even in the age of Obamacare that discourages insurers from discriminating based on pre-existing conditions, and where large employers are required by law to provide insurance at the same cost to all their employees, these “discounts” allow insurers to do just the opposite, and suggest a wide range of follow-on practices that will discriminate in an even more finely grained way. If these companies’ customers understood that the “bonuses” have been created to craft discriminatory data profiles at least as much as they are to promote customer wellness, perhaps they would resist these “incentives” more strenuously. As it is, not only do those being crowdforced by these policies have very little access to information that makes clear the purpose of this data collection, but those contributing to it have very little idea of what they are doing to other people or even themselves. And this, of course, is one of the deep problems with social media in general, most notably exemplified by the data brokerage industry.
  • As a final example, consider the proposed and so-far unsuccessful launch of Google Glass. One of the most maddening parts of the debate over Google Glass was that proponents would focus almost exclusively on the benefits Glass might afford them, while dismissing what critics kept talking about, which is the impact Glass has on others—on those who choose not to use Glass. Critics said: Glass puts all public (and much private) space under constant recorded surveillance, both by the individuals wearing Glass and, perhaps even more worryingly, by Google itself when it stores, even temporarily, that data on its servers. What was so frustrating about this debate was the refusal of Google and its supporters to see that they were arguing very strongly that the rights of people who choose not to use Glass were up for the taking by those who did want to use it; that Google’s explicit insistence that I have to allow my children to be recorded on video and Google to store that video simply by dint of their being alive where any person who happens to use Glass might be was nothing short of remarkable. It was not hard to find Glassholes overtly insisting that their rights (and therefore Google’s rights) trump those of everyone else. This controversy was a particularly strong demonstration of the fact that the “if you don’t like it, don’t use it” mantra is false. I think we all have to look at the failure of Google to (so far) create a market for Glass as a real success of the mass of public engagement to reject the ability of a huge for-profit corporation to dictate terms to everyone. It’s even a success of the free market, in the sense that Google’s market research clearly showed that this product was not going to be met with significant consumer demand. But it is partly the visibility of Glass that allowed this to happen; too much of what I talk about here is not immediately visible in the way Glass was.

To some extent, crowdforcing is a variation on the relatively wide class of phenomena to which economists refer as externalities. Externalities refer in general to costs or benefits that impact a party that played no direct role in a transaction. These are usually put into two main classes. “Negative” externalities occur when costs are attached to uninvolved parties, of which the classic examples usually have to do with environmental pollution by for-profit companies, for which cleanup and health costs are “paid” by individuals who may have no connection whatsoever with the company. “Positive” externalities occur when someone else’s actions with which I’m uninvolved benefit me regardless: the simplest textbook example is something like neighbors improving their houses, which may raise property values even for homeowners who have done no work on their properties at all. Externalities clearly occur with particular frequency in times of technological and market change; there were no doubt quite a few people who would have preferred to use horses for transportation even after so many people were driving cars that horses could no longer be allowed on the roadways. While these kinds of externalities might be in some way homologous with some of the crowdforcing examples that are economic in focus (such as the impact of AirBnB and Uber on the economic conditions of the properties they share), they strike me as not capturing so well the data-centric aspects of much current sharing. Collecting blood samples from those individuals who were tested certainly allowed researchers in the past to determine normal and abnormal levels of the various components of human blood, but they did not make it possible to infer much (if anything) about my blood without my own blood being tested.
In fact, in the US, Fourth Amendment protections against unreasonable search and seizure extend to the collection of such personal data because it is considered unique to each human body: that is, it is currently impermissible to collect DNA samples from everyone unless a proper warrant has been issued. How do we square this right with the ability to infer everyone’s DNA without most of us ever submitting to collection?


Crowdforcing effects also overlap with phenomena researchers refer to by names like “neighborhood effects” and “social contagion.” In each of these, what some people do ends up affecting what many other people do, in a way that goes much beyond the ordinary majoritarian aspects of democratic culture. That is, we know that only one candidate will win an election, and that therefore those who did not vote for that candidate will be (temporarily) forced to acknowledge the political rule of people with whom they don’t agree. But this happens in the open, with the knowledge and even the formal consent of all those involved, even if that consent is not always completely understood.

Externalities produced by economic transactions often look something like crowdforcing. For example, when people with means routinely hire tutors and coaches for their children for standardized tests, they end up skewing the results even more in their favor, thus impacting those without means in ways the latter frequently do not understand and may not even be aware of. This can happen in all sorts of markets, even in cultural markets (fashion, beauty, privilege, skills, experience). But it is only the advent of society-wide digital data collection and analysis techniques that makes it so easy to sell your neighbor out without their knowledge and consent, and to have what is sold be so central to their lifeworld.

Dealing with this problem requires, first of all, conceptualizing it as a problem. That’s all I’ve tried to do here: suggest the shape of a problem that, while not entirely new, comes into stark relief and becomes widespread due to the availability of exactly the tools that are routinely promoted as “crowdsourcing” and “collective intelligence” and “networks.” As always, this is by no means to deny the many positive effects these tools and methods can have; it is to suggest that we are currently overly committed to finding those positive effects and not to exploring or dwelling on the negative effects, as profound as they may be. As the examples I’ve presented here show, the potential for crowdforcing effects on the whole population is massive, disturbing, and only increasing in scope.

In a time when so much cultural energy is devoted to the self, maximizing, promoting, decorating and sharing it, it has become hard to think with anything like the scrutiny required about how our actions impact others. From an ethical perspective, this is typically the most important question we can ask: arguably it is the foundation of ethics itself. Despite the rhetoric of sharing, we are doing our best to turn away from examining how our actions impact others. Our world could do with a lot more, rather than less, of that kind of thinking.


Tor, Technocracy, Democracy

As important as the technical issues regarding Tor are, at least as important—probably more important—is the political worldview that Tor promotes (as do other projects like it). While it is useful and relevant to talk about formations that capture large parts of the Tor community, like “geek culture” and “cypherpunks” and libertarianism and anarchism, one of the most salient political frames in which to see Tor is also one that is almost universally applicable across these communities: Tor is technocratic. Technocracy is a term used by political scientists and technology scholars to describe the view that political problems have technological solutions, and that those technological solutions constitute a kind of politics that transcends what are wrongly characterized as “traditional” left-right politics.

In a terrific recent article describing technocracy and its prevalence in contemporary digital culture, the philosophers of technology Evan Selinger and Jathan Sadowski write:

Unlike force-wielding, iron-fisted dictators, technocrats derive their authority from a seemingly softer form of power: scientific and engineering prestige. No matter where technocrats are found, they attempt to legitimize their hold over others by offering innovative proposals untainted by troubling subjective biases and interests. Through rhetorical appeals to optimization and objectivity, technocrats depict their favored approaches to social control as pragmatic alternatives to grossly inefficient political mechanisms. Indeed, technocrats regularly conceive of their interventions in duty-bound terms, as a responsibility to help citizens and society overcome vast political frictions.

Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans, Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands.

While these suggestions typically frame themselves in terms of the words we use to describe core political values—most often, values associated with democracy—they actually offer very little discussion adequate to the rich traditions of political thought that articulated those values to begin with. That is, technocratic power understands technology as an area of precise expertise, in which one must demonstrate a significant level of knowledge and skill as a prerequisite even to contributing to the project at all. Yet technocrats typically tolerate no such characterization of law or politics: these are trivial matters not even up for debate, and in so far as they are up for debate, they are matters for which the same technical skills qualify participants. This is why it is no surprise that among the 30 or 40 individuals listed by the project as “Core Tor People,” the vast majority are developers or technology researchers, and those few for whom politics is even part of their ambit approach it almost exclusively as technologists. The actual legal specialists, no more than a handful, tend to be dedicated advocates for the particular view of society Tor propagates. In other words, there is very little room in Tor for discussion of its politics, for whether the project actually does embody widely shared political values: this is taken as given.

“Freedom Is Slavery,” a flag of the United Technocracies of Man, a totalitarian oligarchic state in the Ad Astra Per Aspera alternate history by RvBOMally

This would be fine if Tor really were “purely” technological—although just what a “purely” technological project might be is by no means clear in our world—but Tor is, by anyone’s account, deeply political, so much so that the developers themselves must turn to political principles to explain why the project exists at all. Consider, for example, the Tor Project blog post written by lead developer Roger Dingledine that describes the “possible upcoming attempts to disable the Tor network” discussed by Yasha Levine and Paul Carr on Pando. Dingledine writes:

The Tor network provides a safe haven from surveillance, censorship, and computer network exploitation for millions of people who live in repressive regimes, including human rights activists in countries such as Iran, Syria, and Russia.

And further:

Attempts to disable the Tor network would interfere with all of these users, not just ones disliked by the attacker.

Why would that be bad? Because “every person has the right to privacy. This right is a foundation of a democratic society.”

This appears to be an extremely clear statement. It is not a technological argument: it is a political argument. It was generated by Dingledine of his own volition; it is meant to be a—possibly the—basic argument that justifies Tor. Tor is connected to a fundamental human right, the “right to privacy,” which is a “foundation” of a “democratic society.” Dingledine is certainly right that we should not do things that threaten such democratic foundations. At the same time, Dingledine seems not to recognize that terms like “repressive regime” are inherently and deeply political, and that “surveillance” and “censorship” and “exploitation” name political activities whose definitions vary according to legal regime and even political point of view. Clearly, many users of Tor consider any observation by any government, for any reason, to be “exploitation” by a “repressive regime”—a view that is consistent for the many members of the community who profess a variety of anarchism or anarcho-capitalism, but not for those with other political views, such as those who think that there are circumstances under which laws need to be enforced.

What is especially concerning about this argument is that it mischaracterizes the nature of the legal guarantees of human rights. In a democracy, it is not actually up to individuals on their own to decide how and where human rights should be enforced or protected, and then to create autonomous zones wherein those rights are protected in the terms they see fit. Instead, in a democracy, citizens work together to have laws and regulations enacted that realize their interpretation of rights. Agitating for a “right to privacy” amendment to the Constitution would be appropriate political action for privacy in a democracy. Even certain forms of (limited) civil disobedience are an important part of democracy. But creating a tool that you claim protects privacy according to your own definition of the term, overtly resisting any attempt to discuss what it means to say that it “protects privacy,” and then insisting that everyone use it and that nobody—especially those lacking the coding skills to be insiders—complain about it because of its connection to fundamental rights, is profoundly antidemocratic. Like all technocratic claims, it challenges what actually is a fundamental precept of democracy, one that few across the political spectrum would dispute: that open discussion of every issue affecting us is required in order for political power to be properly administered.

It doesn’t take much to show that Dingledine’s statement about the political foundations of Tor can’t bear the weight he places on it. I commented on the Tor Project blog, pointing out that he is using “right to privacy” in a different way from what that term means outside of the context of Tor: “the ‘right to privacy’ does not mean what you assert it means here, at all, even in those jurisdictions that (unlike the US) have that right enshrined in law or constitution.” Dingledine responded:

Live in the world you want to live in. (Think of it as a corollary to ‘be the change you want to see in the world’.)

We’re not talking about any particular legal regime here. We’re talking about basic human rights that humans worldwide have, regardless of particular laws or interpretations of laws.

I guess other people can say that it isn’t true — that privacy isn’t a universal human right — but we’re going to keep saying that it is.

This is technocratic two-stepping of a very typical and deeply worrying sort. First, Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration of Human Rights of 1948 is very clear that states, and bodies like the UN to which states belong, are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign that role to themselves, and that members of democratic polities have no choice but to accept them having that role.

We don’t have to look very hard to see the problems with that. Many in the US would assert that the right to bear arms means that individuals can own guns (or even more powerful weapons). More than a few construe this as a human or even a natural right. Many would say “the citizen’s right to bear arms is a foundation of a democratic society.” Yet many would not. Another democracy, the UK, does not allow citizens to bear arms. Tor, notably, is the home of many hidden services that sell weapons. Is it for the Tor developers to decide what is and what is not a fundamental human right, how states should recognize such rights, and whether to distribute weapons in the UK despite its explicit, democratically enacted legal prohibition of them? (At this point, it is only the existence of legal services beyond Tor’s control that makes this difficult, but that has little to do with Tor’s operation: if it were up to Tor, the UK legal prohibition on weapons would be overwritten by technocratic fiat.)

We should note as well that once we venture into the terrain of natural rights and natural law, we are deep in the thick of politics. It simply is not the case that all political thinkers, let alone all citizens, are going to agree about the origin of rights, and even fewer would agree that natural rights lead to a natural law that transcends the power of popular sovereignty to protect. Dingledine’s appeal to natural law is not politically neutral: it takes a side in a central, ages-old debate about the origin of rights and the nature of the bodies that guarantee them.

That’s fine, except when we remember that we are asked to endorse Tor precisely because it instances a politics so fundamental that everyone, or virtually everyone, would agree with it. Otherwise, Tor is a political animal, and the public should accede to its development no more than it does to any other proposed innovation or law: it must be subject to exactly the same tests everything else is. Yet this is exactly what Tor claims it is above, in many different ways.

Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t.

Tor, like all other technocratic solutions (or solutionist technologies) is profoundly political. Rather than claiming it is above them, it should invite vigorous political discussion of its functions and purpose (as at least the Tor Project’s outgoing Executive Director, Andrew Lewman, has recently stated, though there have yet to be many signs that the Tor community, let alone the core group of “Tor People,” agrees with this). Rather than a staff composed entirely of technologists, any project with the potential to intercede so directly in so many vital areas of human conduct should be staffed by at least as many with political and legal expertise as it is by technologists. It should be able to articulate its benefits and drawbacks fully in the operational political language of the countries in which it operates. It should be able to acknowledge that an actual foundation of democratic polities is the need to make accommodations and compromises between people whose political convictions will differ. It needs to make clear that it is a political project, and that like all political projects, it exists subject to the will of the citizenry, to whom it reports, and which can decide whether or not the project should continue. Otherwise, it disparages the very democratic ground on which many of its promoters claim to operate.

This, in the end, is one reason that Pando’s coverage of Tor is so important, and a reason it strikes me as seriously unfortunate that so many have tried to rule that coverage out of bounds. I think many in Tor know much less about politics than they think they do. If they did, they might wonder as I do why it is that organizations like Radio Free Asia and the Broadcasting Board of Governors have been such persistent supporters of the project. These organizations are not in the business of supporting technology for technology’s sake, or science for the sake of “pure science.” Rather, they promote a particular view of “media freedom” that is designed to promote the values of the US and some of its allies. These organizations have strong ties to the intelligence community. Anyone with a solid knowledge of political history will know that RFA and BBG only fund projects that advance their own interests, and that those interests are those of the US at its most hegemonic, at its most willing to push its way inside of other sovereign states. Many view them as distributors of propaganda, pure and simple.

You don’t have to look hard to find this information: Wikipedia itself notes that Catharin Dalpino of the centrist Brookings Institution think tank (i.e., no wild-eyed radical) says of Radio Free Asia: “It doesn’t sound like reporting about what’s going on in a country. Often, it reads like a textbook on democracy, which is fine, but even to an American it’s rather propagandistic.” It is no stretch to see the “media freedom” agenda of these organizations and the “internet freedom” agenda surrounding Tor as more alike than different. Further, Tor is arguably a much more powerful tool than are media broadcasts, despite how powerful those themselves are. This is not to say that it is absolutely wrong for the US to promote its values this way, or that everything about Radio Free Europe and Radio Free Asia was and is bad. It’s to say these are profoundly political projects, and democracy demands that the citizenry and its elected representatives, not technocrats, decide whether to pursue them.

We are often told that Tor is just trying to do good, trying to inspire respect for human decency and human rights, and that its community is just being attacked because it is “an easy target.” Yet the contrary story is much more rarely told: that Tor encourages a technocratic dismissal of democratic values, and promotes serious and seriously uninformed anti-government hostility. Further, despite the claims of its advocates that Tor is meant to protect “activists” against human rights abuses (as the Tor community construes these), the fact remains that to many observers, Tor is just as plausibly seen as a tool that promotes and encourages human rights abuses of the very worst kind: child pornography, child exploitation, all the crimes and suffering that go along with worldwide distribution of illegal drugs, assassination for hire, and much more. The Tor community dismisses these worries as “FUD” (or more poetically, the “Four Horsemen of the Info-Apocalypse”), but evidence that they are real is very hard for the objective observer to overlook (even lists on the open web of the most widely-used hidden services reveal very few that are not involved in circumventing laws that many may consider not only reasonable but important). The “use case” for encrypted messaging such as OTR (Off-the-Record Messaging) is far easier to understand in a political sense than is the one for the hidden services that sell drugs and weapons, promote rape porn, and so on. It is beyond ironic that a tool whose most salient uses may be the most serious affronts to human rights should be promoted as if its contributions to human rights are so obvious as to be beyond question. Does Tor do “good”? No doubt. But it also enables some very bad things, at least as I personally evaluate “good” and “bad.” You can’t say that on the one hand the good it enables accrues to Tor’s benefit, while the bad it enables is just an unavoidable cost of doing business. With very limited exceptions (e.g. speech itself, and even there the balance is contested) we don’t treat cultural phenomena that way. The only name for striking the right balance between those poles is politics, and it is entirely possible that the political balance Tor strikes is one that, were it better understood, few people would assent to. Making decisions about matters like this, not the expanded and putative “right to privacy,” is the foundation of democracy. Unless Tor learns not just to accommodate but to encourage such discussions, it will remain a project based on technocracy, not democracy, and therefore one that those of us concerned about the fate of democracy must view with significant concern.

Posted in "hacking", cyberlibertarianism, privacy, rhetoric of computation, what are computers for

‘Is It Compromised?’ Is the Wrong Question about US Government Funding of Tor

In many ways, the most surprising thing about Yasha Levine’s powerful reporting on US government funding of Tor at Pando Daily has been the response to it. That response has ranged from trolling attacks and ad hominem insults by apparently respectable, senior digital privacy activists and journalists, to repeated, climate-denialist-style, “I’m rubber, you’re glue” (or, as I like to call them, “You’re a towel”), evidence-free insinuations about Levine’s and Pando’s possibly covert funding sources and intelligence-world connections. Almost none of it has had the measured and reasonable tone, let alone the connection to established facts, that Levine’s own reporting has.

Much of this response tries to turn Levine’s reporting into a conspiracy theory, which it then pretends to defuse by positing even wilder conspiracy theories. The conspiracy theory is that funding from the State Department, BBG, and all the various CIA and other intelligence agency cut-outs means that the Tor developers are covert agents or assets, secretly doing something very different from what they say they are doing; and that Tor is deliberately compromised in ways that these leaders are not revealing.

This turns into the question: “If the CIA is funding Tor, where are the vulnerabilities they are secretly planting in it? Why haven’t they been found via the classic principle that ‘all bugs are shallow when many eyes are looking for them’?”

For example, here are two comments to Levine’s “Internet Privacy, Funded by Spooks: A Brief History of the BBG”:

User “SpryteEast” writes: “this article could be great if it had proof. Most of modern-day cryptography technology was funded by US government at some point. Does it mean that they can break into everything?”

User “grumpykocka” writes: “Simple question: do you have proof that these systems have been compromised in any way? Technically, wouldn’t these open systems be incredibly hard to compromise without us knowing it? Perhaps they could be cracked, but you are implying much more than that, intentional back doors built into the code. But again, how would that go undetected in these open source solutions?”

This is from Tor & First Look staffer Micah Lee’s mean-spirited and defensive “Fact-Checking Pando’s Smears Against Tor,” responding to Levine’s earlier pieces:

If there were evidence of an intentional design flaw in the Tor network, similar to NSA’s sabotage of encryption standards through their BULLRUN program, that would be a huge deal. Pando didn’t find anything that wasn’t published on […]

This is the wrong question.

It’s a question so wrong and so enticing that it often derails the conversation we really should be having. It’s asked so often that it has the appearance of a party line, talking points that those involved issue with a remarkable persistence and uniformity. We don’t need to ask whether that party line is dictated by someone; what is more interesting is the party line itself.

a 1977 New York Times story about CIA’s propagandistic use of the press (from Yasha Levine’s most recent Pando story)

CIA, like other intelligence agencies (for which I’ll use “CIA” as a shorthand for the time being), is not a mind-control supervillain. It does not “own” assets (in the spy lingo, “agents” usually refers to actual employees, whereas “assets” are others who may in one way or another contribute to intelligence agency efforts on an ad-hoc basis) and prescribe every aspect of their behavior. Rather, it looks for assets whose interests may align with its own. At times it may nudge them in the direction it wants; only with some of those most closely tied in does it directly give orders. When CIA operates through cutouts, those cutouts typically appear to have full autonomy, and many in the cutout may well have that autonomy: that’s what gives cutouts credibility and what makes them useful. If everyone could have easily seen that Life magazine was a CIA front, people would have taken it much less seriously than they did.

CIA uses cutouts and assets for a much subtler purpose: because those apparently “regular” people and organizations, in doing what they do anyway, align with US state interests. They advance CIA’s interests just by being themselves. CIA has no need to control, direct, or even directly influence these assets: in certain ways, this would be less productive than remaining in the background.

From this perspective, the wrong question is to ask what CIA and State and so on are doing to “mess” with the Tor Project. The right question is to ask: how does the development of Tor, and in a parallel fashion the promotion of “internet freedom,” align with the interests of CIA, the State Department, USAID, and so on?

This is a question that it is very hard for cyberlibertarians even to put to themselves. They are so convinced of the righteousness of “internet freedom” and of Tor, so sure of its purpose and its politics, that many of them appear unable even to ask whether these beliefs might be fallacious: whether “internet freedom,” a slogan without a clear referent, might be a policy the US promotes for specific geostrategic reasons, in part because so many people hop on board without understanding that the “internet freedom” agenda is not what it sounds like; and whether Tor serves some very specific US interests.

Despite the conspiratorial accusations levied at Levine, his piece makes this focus very clear:

The BBG was formed in 1999 and runs on a $721 million annual budget. It reports directly to Secretary of State John Kerry and operates like a holding company for a host of Cold War-era CIA spinoffs and old school “psychological warfare” projects: Radio Free Europe, Radio Free Asia, Radio Martí, Voice of America, Radio Liberation from Bolshevism (since renamed “Radio Liberty”) and a dozen other government-funded radio stations and media outlets pumping out pro-American propaganda across the globe.

This does not mean, of course, that it’s uninteresting whether some people involved with Tor—perhaps especially those close to and/or funded by the OTF, as Levine points out—might be “assets” in some way or another, but we are likely never really to know the truth about covert shenanigans like that. It also doesn’t mean that questions about Tor being compromised are unimportant. It’s interesting to note that Micah Lee asks Levine to provide evidence of an “intentional design flaw in the Tor network”: evidence of intentionality would consist of communicative documentation that is only likely to turn up in unusual circumstances. But there is plenty of evidence of design flaws per se in the Tor network: they are found all the time, often by the Tor developers themselves. How did they get there? Who knows. But that is one reason why “is it compromised” is such a misguided question: we know Tor is compromised or has been compromised at times, and undoubtedly will be again. We don’t know who is responsible for its vulnerabilities: often they emerge from parts of the system nobody appears to have thought about, but sometimes nobody, not even those in the Tor community, knows their source.

But these are questions about which we can’t do much more than speculate. They are outweighed in importance by the central question about the ideology behind Tor. If you are asking how government funding compromises Tor and “internet freedom,” you are asking the wrong question. The right question is: how do Tor and “internet freedom” serve the interests of those who fund them so generously—and have virtually no history of funding (especially on an ongoing basis) projects that are contrary or even irrelevant to their interests? Why do major factions within the US Government so steadfastly promote an internet project whose supporters routinely insist that “the government sure does hate the Internet”?

We don’t have to look far or think that hard to develop answers to these questions. Just the other day, Shawn Powers and Michael Jablonski, authors of the new and fascinating-sounding book, The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015), announced the publication of their book by writing:

Efforts to create a universal internet built upon Western legal, political, and social preferences is driven by economic and geopolitical motivations rather than the humanitarian and democratic ideals that typically accompany related policy discourse. In fact, the freedom-to-connect movement is intertwined with broader efforts to structure global society in ways that favor American and Western cultures, economies, and governments.

The inability of many Tor and “internet freedom” and even super-encryption supporters to understand (or at least, to talk as if they understand) this point of view is part of what is so disturbing about this whole situation. “Internet freedom” and “internet privacy” and even “Tor” have become like articles of religious faith: creeds whose fundamental tenets cannot be questioned, even though they also cannot be stated with anything like the clarity with which “freedom of the press” can be stated. The critique we need to consider is not merely that major powers are “paying lip service” to the idea of internet freedom; it is that the idea itself is bankrupt: it is a propagandistic slogan in search of a meaning, a set of meaningful-sounding (but meaningless) words, like “right to work,” that exists only to serve a powerful and disturbing agenda (which is one reason that the outsize “internet freedom” funding provided by the US State Department, and Google’s triumphalist support for the idea, should raise questions for everyone). Indeed, if the putative freedom of information on which “the internet” (and Tor, and “internet freedom,” etc.) is supposedly based is going to mean anything—if it at least entails the “freedom of speech” and “freedom of the press” that, in my opinion, it does not eclipse in especially legible ways—it has to mean being willing always to question our fundamental assumptions, making it beyond ironic that its fiercest champions work so hard to prevent us from doing just that.

Posted in cyberlibertarianism, privacy, revolution, rhetoric of computation, what are computers for

Wikipedia and the Oligarchy of Ignorance

In a recent story on Medium called “One Man’s Quest to Rid Wikipedia of Exactly One Grammatical Mistake: Meet the Ultimate WikiGnome,” Andrew McMillen tells the story of Wikipedia editor “Giraffedata”—beyond the world of Wikipedia, a software engineer named Bryan Henderson—who has edited thousands of Wikipedia pages to correct a single grammatical error and is one of the 1000 most active editors of Wikipedia. McMillen describes Giraffedata as one of the “favorite Wikipedians” of some employees at the Wikimedia Foundation, the umbrella organization that funds and organizes Wikipedia along with other projects. The area he works on is not controversial (at least not in the sense of hot topics like GamerGate or climate change); his edits are typically not reverted in the way that substantive edits to such controversial topics frequently are. While the area he focuses on is idiosyncratic, his work is extremely productive. As such he is understood by at least some of the core Wikipedians to exemplify the power of crowds, the benefits of “organizing without organization,” the fundamental anti-hierarchical principles that apparently point toward new, better political formations.

McMillen describes a presentation at the 2012 Wikimania conference by two Wikimedia employees, Maryana Pinchuk and Steven Walling:

Walling lands on a slide entitled, ‘perfectionism.’ The bespectacled young man pauses, frowning.

“I feel sometimes that this motivation feels a little bit fuzzy, or a little bit negative in some ways… Like, one of my favorite Wikipedians of all time is this user called Giraffedata,” he says. “He has, like, 15,000 edits, and he’s done almost nothing except fix the incorrect use of ‘comprised of’ in articles.”

A couple of audience members applaud loudly.

“By hand, manually. No tools!” interjects Pinchuk, her green-painted fingernails fluttering as she gestures for emphasis.

“It’s not a bot!” adds Walling. “It’s totally contextual in every article. He’s, like, my hero!”

“If anybody knows him, get him to come to our office. We’ll give him a Barnstar in person,” says Pinchuk, referring to the coveted virtual medallion that Wikipedia editors award one another.

Walling continues: “I don’t think he wakes up in the morning and says, ‘I’m gonna serve widows in Africa with the sum of all human knowledge.’” He begins shaking his hands in mock frustration. “He wakes up and says, ‘Those fuckers — they messed it up again!’”

Neither the presenters nor McMillen follow up on Walling’s aside that Giraffedata’s work might be “a little bit negative in some ways.” But it seems arguable to me that this is the real story, and the celebration of Henderson’s efforts is not just misplaced, but symptomatic. Rather than demonstrating the salvific benefits of non-hierarchical organizations, Giraffedata’s work symbolizes their remarkable tendency to turn into formations that are the exact opposite of what the rhetoric suggests: deeply (if informally) hierarchical collectives of individuals strongly attached to their own power, and dismissive of the structuring elements built into explicit political institutions.

This is a well-known problem. It has been well-known at least since 1970 when Jo Freeman wrote “The Tyranny of Structurelessness”; it is connected to what Alexander Galloway has recently called “The Reticular Fallacy.” These critiques can be summed up fairly simply: when you deny an organization the formal power to distribute power equitably—to acknowledge the inevitable hierarchies in social groups and deal with them explicitly—you inevitably hand power over to those most willing to be ruthless and unflinching in their pursuit of it. In other words, in the effort to create a “more distributed” system, except in very rare circumstances where all participants are of good will and relatively equivalent in their ethics and politics, you end up creating exactly the authoritarian rule that your work seemed designed specifically to avoid. You end up giving even more unstructured power to exactly the persons that institutional strictures are designed to curtail.

That this is a general problem with Wikipedia has been noted by Aaron Shaw and Benjamin Mako Hill in a 2014 paper called “Laboratories of Oligarchy? How The Iron Law Extends to Peer Production.” Shaw and Mako Hill are fairly enthusiastic about Wikipedia and peer production, and yet their clear-eyed research, much of which is based on empirical as well as theoretical considerations, forces them to conclude:

Although, invoking U.S. Supreme Court Justice Louis Brandeis, online collectives have been hailed as contemporary “laboratories of democracy”, our findings suggest that they may not necessarily facilitate enhanced practices of democratic engagement and organization. Indeed, our results imply that widespread efforts to appropriate online organizational tactics from peer production may facilitate the creation of entrenched oligarchies in which the self-selecting and early-adopting few assert their authority to lead in the context of movements without clearly defined institutions or boundaries. (23)[1]

In the current case, what is so striking about Giraffedata’s work is that, from the perspective of every reasonable expert angle on the question, Giraffedata is just plain wrong. It is not a fact that “comprised of” is ungrammatical or that it means only what Giraffedata says it does. This is not at all controversial. In an excellent piece at The Guardian, “Why Wikipedia’s Grammar Vigilante Is Wrong,” David Shariatmadari demonstrates the many reasons why this is the case (though as usual, read the comments for typically brusque and/or ‘anti-elite’ elitist opinions to the contrary). Even better is “Can 50,000 Wikipedia Edits Be Wrong?” by Mark Liberman at Language Log, the leading linguistics site in the world, which has been covering this issue—that is, specifically the usage of “comprised of”—since at least 2011. Liberman wryly notes that “It doesn’t seem to have occurred to Mr. McMillen to check the issue out in the Oxford English Dictionary or in Merriam-Webster’s Dictionary of English Usage, or for that matter in literary history, where he might have appreciated the opportunity to correct Thomas Hardy… and also Charles Dickens.” Bizarrely, Wikipedia itself has a page on “comprised of” that endorses the linguists’ view, rather than Giraffedata’s view.


Drawing the circle just a bit wider, Giraffedata is a linguistic prescriptivist in a world where the experts agree that prescriptivism is ideology rather than wisdom. Prescriptivism itself is an assertion of power in the name of one’s own authority that claims (erroneously) to be based on higher authorities that do not, in fact, exist. It is one of the most persistent targets in writing by actual linguists from across the political spectrum: Liberman rightly calls it “authoritarian rationalism,” and he and Geoff Nunberg (another of the most prominent US linguists) have an interesting back-and-forth about its fit with general right-left politics.

At another level of abstraction, Henderson’s efforts exemplify a lust for power that entails a specific (if perhaps not entirely conscious) rejection of expertise over precisely the topic he cares about.[2] The development of “expertise” is exactly the kind of social, relatively ad-hoc but still structured distribution of power that the new structureless tyrants want to re-hierarchize, with themselves at the top. Does Henderson ask linguists about the rightness or wrongness of his judgment? As Liberman’s work points out, there are obvious, easily available resources in which Henderson might have checked his judgment; it does not appear even to have occurred to him to do so. As Shariatmadari points out, even in the McMillen article, it becomes clear very quickly that Henderson is aware that the “error” he is “correcting” is not actually a matter even of grammar, but a judgment of taste based on several well-known linguistic fallacies (that synonyms should not exist, or that a word’s origin dictates its current meaning).

None of this is to say that it is “right” or “wrong” to adjust the style of Wikipedia with regard to Henderson’s word choice hobby horse. But here again is another rejection of a perfectly reasonable and even useful form of distributed authority: editorial authority over a written product. Before Wikipedia, and even today, published encyclopedias and other publications had rules called “house styles.” These are guidelines made up provisionally by the publishing house to enforce consistency on their work; some are extremely detailed and some are much looser. The house style for any given publication would dictate whether or not to use “comprised of” in the sense that upsets Henderson so much. It would not be a fact whether “comprised of” is right or wrong, but only a fact within the context of the publication. And this is actually a better account of how language works, or usually works: “this is how we do it here,” rather than “this is correct” and “this is incorrect.” (Wikipedia does have a very detailed Manual of Style, but it largely refrains from guidelines pertaining to usage, unlike the in-house style manuals of publications like The New Yorker or The Wall Street Journal.)

At the next level of abstraction, perhaps the most important one, the Wikimedia Foundation’s endorsement of Giraffedata’s work as among their “favorites” displays a kind of agnotology—a studied cultivation of ignorance—that feeds structureless tyrannies and authoritarian anti-hierarchies. In order to rule over those whose knowledge or expertise challenges you, the best route is to dismiss or mock that expertise wholesale, to rule it out as expertise at all, in favor of your own deeply held convictions that you trumpet as a “new kind” of expertise that invalidates the “old,” “incumbent” kinds. This kind of agnotology is widespread in current Silicon Valley and digital culture; it is no less prominent in reactionary political culture, such as the Tea Party and rightist anti-science movements.

Thus Henderson’s work connects to the well-known disdain of many core Wikipedia editors for actual experts on specific topics, and even more so their stubborn resistance (speaking generally; of course there are exceptions) to the input of such experts, when one would expect exactly the opposite to be the case. (As a writer in Wired put it almost a decade ago, “The Wikipedia philosophy can be summed up thusly: ‘Experts are scum.’”) A world-leading expert on Topic A wants to help edit the page on that topic—is the right response to reach out and help guide them through what should be the minimal rules of your project? Or is it to mock and impugn them for having the temerity to think they are expert in something, in the face of the far more important project that you are expert in? One of Wikipedia’s several pages addressing this problem, “Wikipedia: Expert Retention,” notes:

If by “Wikipedia” one means its values as expressed in policy, then it can be said that Wikipedia definitely does not value expertise. Attempts to establish a policy on credential verification have failed. There are competing essays that say credentials are irrelevant and that credentials matter. An attempt to push through a policy to ignore all credentials failed, though it received considerable support.

The culture of Wikipedia has no single commonly held view, as is illustrated in the discussion pages of the above cited essays and proposals. However, the lack of consensus (and indeed doggedly opposed parties) results in a perceived lack of respect for expertise, a deference normally found elsewhere in society. Anti-expertise positions often are not acted against, so they are in effect encouraged. And as they are encouraged, they more than negate any positive regard for expertise, since the latter is only expressed, at present, in the consideration given by individual editors to those whom they recognize as experts. (emphasis added)

This is why it’s important that the Wikimedia Foundation employees pass so quickly over the possible “negatives” in Giraffedata’s work, and choose to single him out for praise. This is exactly what the most persistent members of the Wikipedia community want (if, I think, unconsciously): the disparagement of existing (or, in Silicon Valley terminology, “incumbent”) structures for distributing power in the name of a “democratization” that is actually driven by a significant lust for power, one not patient enough to develop its own distributive structures (that is, to work on developing a house style for Wikipedia, or, I don’t know, to study linguistics). In this way, too much of peer production seems like a marketing sheen placed over a very clear and antidemocratic lust for personal power, much as the 1970s communes were, but writ large and with very central social pillars in its sights.

As Freeman’s work has always suggested (and this is what makes the brute rejection of its reasoning, in favor of Hayekian “spontaneous orders” of knowledge or ignorance, so striking), Wikipedia’s structurelessness is very easily seen not as a social miracle of cooperation but as a breeding ground for tyrants.[3] As Mako Hill and Shaw put it: “the adoption of peer production’s organizational forms may inhibit the achievement of enhanced organizational democracy” (22). That they do this in the name of democracy makes them characteristic of the contemporary, digitally-inspired agnotological oligarchy.


[1] It is worth noting that Shaw and Mako Hill rely in part on the so-called “Iron Law of Oligarchy” postulated by the proto-Fascist and Fascist sociologist Robert Michels in the early part of the 20th century. Michels actually thought the law applied to all democratic organizations and could not be prevented, but Shaw and Mako Hill rely on a great deal of post-Michels research that tends to give greater weight than Michels did to formal methods of preventing oligarchy.

[2] “Lust for power” is the usual English translation of the German word Machtgelüst, which appears prior to the better-known “will to power” in Nietzsche and, unlike the latter term, is specifically meant to indicate the cathexis of desire toward personal power.

[3] Wikipedia founder Jimmy Wales famously traces his philosophical inspiration for Wikipedia to Friedrich Hayek’s 1945 essay “The Use of Knowledge in Society”; Philip Mirowski, our most trenchant critic of neoliberalism, has repeatedly demonstrated the ways in which Hayek’s views specifically advocate ignorance.

Posted in "social media", cyberlibertarianism, rhetoric of computation | 3 Responses

Tor Is Not a “Fundamental Law of the Universe”

In what I consider a very welcome act of journalistic open-mindedness, Pando Daily recently published a piece by Quinn Norton that responded both to Yasha Levine’s excellent and necessary piece on the US Government’s funding of the Tor Project and, perhaps even more so, to his follow-up piece on the amazing attacks the first piece received from some of the brightest stars in the encryption, “internet freedom,” and Tor universe.

I want to focus on a small part of Norton’s piece, in which she tries to explain the vicious attacks on Levine’s piece:

The incoherent frothing-at-the-mouth support for the fundamentals of Tor don’t arise from a set of politics, or money, or a particular arrangement of social trust like a statute or constitutional law. The support comes from an appeal to the fundamental laws of the universe, which not even the most vigorous of black budget ops can break.

Yes, the Tor people somehow believe that Tor itself implements a “fundamental law of the universe,” and that their privileged technical knowledge grants them special access that the rest of us lack. That belief is false; it is breathless narcissism and arrogance at its most outrageous, and very typical of our digital age.

There are fundamental laws of the universe: that something with mass cannot move faster than the speed of light in a vacuum; that matter can neither be created nor destroyed, but only converted into energy, and vice versa. These are fundamental laws that DoD can’t change. All technologies dip into these fundamental laws, to greater and lesser degrees.

Tor is not a fundamental law of the universe. Math is *fairly* fundamental, but even the simplest math (say, 2 + 2 = 4) is NOT a fundamental law. Addition obtains under some circumstances and not under others (this is part of the point of the revolutions in mathematical theory of the 19th century, including non-Euclidean geometry).
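The point about addition can be made concrete with a toy example. This is my illustration, not Norton’s: even “2 + 2” has its answer fixed only by the system one chooses to work in.

```python
# A toy illustration that the result of "2 + 2" depends on the arithmetic
# system in use, not on a fundamental law of the universe.

assert 2 + 2 == 4            # ordinary integer arithmetic
assert (2 + 2) % 3 == 1      # arithmetic modulo 3: here 2 + 2 gives 1
assert (10 + 4) % 12 == 2    # "clock" arithmetic: 4 hours after 10 o'clock is 2 o'clock
```

Nothing here is exotic; modular arithmetic is ordinary working mathematics, which is exactly the point.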

Grammatically, the phrase “Tor is/is not a fundamental law of the universe”—which, to be clear, is my phrase, not Norton’s—makes no sense. But other than the vague notion of “mathematical laws,” which she does not even directly invoke, Norton’s statement that Tor advocates “appeal” to the “fundamental laws of the universe” is conceptually no clearer. There are not that many fundamental laws. Tor “appeals” to them no more and no less than, say, the NSA does when it uses satellites that rely in part on relativistic physics for geolocation. Relativity itself is a strange candidate for a “fundamental” law, for lots of interesting philosophical and scientific reasons; that does not mean it is not fundamental. But my point is that the belief of Tor advocates that they are tapping into something over and above what the rest of us have access to is misbegotten and hubristic in the extreme. If what the Tor people are trying to show is that their cryptographic procedures are sound, fine. But that is not what we are talking about. We are talking about the use of Tor in the world.

Tor for freedom

The math on which Tor is based appears solid, according to both the Tor developers and outside analysts (as far as I can tell, not being a cryptographic mathematician myself). Yet the actions Tor exists to enable (the use of digital communications systems in our world, supposedly opaque to traffic analysis, which is the main purpose Tor’s development team claims for its product) are governed by no parallel solidity. Tor is part of a huge social apparatus. No matter how perfect the math, as long as that apparatus is large and involves people, it cannot be governed entirely by an unbreakable fundamental law of the universe. One is not challenging the laws of motion by buying out the operator of a Tor relay or exit node. It would be more accurate to say that a fundamental law of the social world is that people—including many of the same hackers Quinn Norton defends and celebrates in other writings—will do everything they can to find their way into systems like Tor.
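To make the relay point concrete, here is a deliberately toy sketch of onion layering. This is my illustration, not Tor’s actual protocol or cryptography; the XOR “cipher,” the keys, and the message are all stand-ins. Even when every layer works perfectly, whoever operates the final relay handles the unwrapped message.

```python
# Toy sketch of onion routing (NOT Tor's protocol): the sender wraps a
# message in one encryption layer per relay, and each relay peels exactly
# one layer. The "cipher" is a one-byte XOR, purely for illustration.

def xor_layer(data: bytes, key: int) -> bytes:
    """Toy symmetric 'cipher': XOR every byte with a one-byte key."""
    return bytes(b ^ key for b in data)

def wrap(message: bytes, relay_keys: list[int]) -> bytes:
    # The sender applies the exit relay's layer first, the entry relay's last.
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

def route(onion: bytes, relay_keys: list[int]) -> bytes:
    # Each relay removes only its own layer and sees only its neighbors...
    for key in relay_keys:
        onion = xor_layer(onion, key)
    return onion  # ...but the exit relay forwards the fully unwrapped message.

keys = [0x17, 0x2A, 0x5C]                      # entry, middle, exit (hypothetical)
onion = wrap(b"meet at noon", keys)
assert onion != b"meet at noon"                # opaque while in transit
assert route(onion, keys) == b"meet at noon"   # what a compromised exit would see
```

The math of the layering is perfectly sound here, and yet buying out the operator of the last relay defeats the whole arrangement without touching the math at all.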

Further, we have historical evidence for this. People keep breaking Tor (something I am personally happy about). Sometimes the Tor developers seem mystified, but I think people will keep breaking it (and so, apparently, do the Tor developers themselves). And I do think those who fund it are aware of its systematic vulnerabilities in a way that those hypnotized by its math are not, which is why the story of its funding is actually extremely relevant. Tor is not something that can’t be controlled by the state: it can be, or at least degraded significantly, in part because the state could shut it down altogether by disallowing the wide range of onion routing services, relays, and nodes that Tor depends on to run, and by continuing to disallow any new services that Tor might set up.

Tor has edges. It has layers. It has connections with other services. It has physical systems on which it sits, binaries that must be compiled, and hundreds of other points of vulnerability, even granted that onion routing is mathematically robust against breaking. Oddly enough, Google Director of Engineering and life-extension supplement magnate Ray Kurzweil, whom I consider a nut, believes Moore’s law is a “fundamental law of the universe” (or, more accurately, a corollary to the “law of accelerating returns” that he thinks is fundamental). I think he is wrong about that too, but whether it is fundamental or not, Moore’s law suggests that computers will eventually get fast enough to break current encryption routines, perhaps breaking Tor, perhaps making historical records of Tor use much more subject to network analysis than they seem today. And who knows? Maybe there is some kind of non-Euclidean (or some other alternative) approach to cryptography nobody has stumbled on yet that will square the circle in a way none of us can foresee.

It is precisely the hubris and arrogance of Tor and its developers that makes this discussion so difficult: the hubris of computationalists who have adjudicated for themselves that their knowledge of machines is superior to everyone else’s, and who believe that their access to fundamental computational/mathematical “laws” insulates them from society and politics. Tor’s amazing reactions to the Pando coverage are exactly the kind of petulant, arrogant, dismissive, power-hungry reaction computationalists typically have when anyone they deem unworthy dares actually to ask them to account for themselves. They think they are above us, in so many different ways. They are not, and they hate anyone who dares to suggest that they are just people too.

Posted in "hacking", cyberlibertarianism, revolution, rhetoric of computation | 1 Response

All Cybersecurity Technology Is Dual-Use

Dan Geer is one of the more interesting thinkers about digital security and privacy around. Geer is a sophisticated technologist with an extremely varied and rich background who has also, fairly recently, become a spook of some kind. Geer is currently the Chief Information Security Officer for In-Q-Tel, the technology investment subsidiary of the CIA, popularly and paradoxically known as a “not-for-profit venture capital firm,” but which gets much more directly involved with its investment targets with the intent of providing “‘ready-soon innovation’ (within 36 months) vital to the IC [intelligence community] mission,” and therefore shuns the phrase “venture capital.”

This might lead one to think that Geer would speak as what Glenn Greenwald likes to call a “government stenographer,” but I find his speeches and writings to be both unusually incisive and extremely independent-minded. He often says things that nobody else says, and he says them from a position of knowledge and experience. And what he says often does not line up with either what one imagines “government” thinks or what many in industry want; he has recently suggested, contrary to what Google and many “digital freedom” advocates affirm, that the European “Right to Be Forgotten” actually does not go far enough in protecting privacy.

In his talk at the 2014 Black Hat USA conference, “Cybersecurity as Realpolitik” (text; video), the same talk in which he made his remarks about the Right to Be Forgotten, Geer made the following deeply insightful observation:

All cyber security technology is dual use.

Here’s the full context of that statement:

Part of my feeling stems from a long-held and well-substantiated belief that all cyber security technology is dual use. Perhaps dual use is a truism for any and all tools from the scalpel to the hammer to the gas can — they can be used for good or ill — but I know that dual use is inherent in cyber security tools. If your definition of “tool” is wide enough, I suggest that the cyber security tool-set favors offense these days. Chris Inglis, recently retired NSA Deputy Director, remarked that if we were to score cyber the way we score soccer, the tally would be 462-456 twenty minutes into the game,[CI] i.e., all offense. I will take his comment as confirming at the highest level not only the dual use nature of cybersecurity but also confirming that offense is where the innovations that only States can afford is going on.

Nevertheless, this essay is an outgrowth from, an extension of, that increasing importance of cybersecurity. With the humility of which I spoke, I do not claim that I have the last word. What I do claim is that when we speak about cybersecurity policy we are no longer engaging in some sort of parlor game. I claim that policy matters are now the most important matters, that once a topic area, like cybersecurity, becomes interlaced with nearly every aspect of life for nearly everybody, the outcome differential between good policies and bad policies broadens, and the ease of finding answers falls. As H.L. Mencken so trenchantly put it, “For every complex problem there is a solution that is clear, simple, and wrong.”


Dan Geer at the Black Hat USA 2014 conference (Photo: Threatpost)

Now what Geer means by “dual-use” here is one of the term’s ordinary meanings: all cybersecurity technology (and really all digital technology) has both civilian and military uses.

But we can expand that, as Geer suggests when he mentions the scalpel, hammer, and gas can, in another way the term is sometimes used: all cybersecurity technology has both offensive and defensive uses.

This basic fact, which is obvious from any careful consideration of game theory or military or intelligence history, seems absolutely lost on the most vocal and most active proponents of personal security: the “cypherpunks” and crypto advocates who continually bombard us with the recommendation we “encrypt everything.” (In “Opt-Out Citizenship” I describe the anti-democratic nature of the end-to-end encryption movement.)

Not only that: I don’t think “cybersecurity” technology is a broad enough term, either. It would be better to say that a huge amount of digital technology is dual-use. That is to say that a great deal of digital technology has uses to which it can be and will be put that are neither obvious nor, necessarily, intended by its developers or even its users, and that often work in exactly the opposite way from what its developers or advocates say (or think).

This is part of what drives me absolutely crazy about the cypherpunks and other crypterati who have come out in droves in the wake of the Snowden revelations.

They act and write as if they control the effects of what they do; as if, unlike everyone else in the world, what they do will be accepted as-is, will end the story, will have only the direct effects they intend.

Thus, they write as if significantly upping the amount and efficacy of encryption on the web is something that “bad” hackers and “bad” cypherpunks will just accept.

Despite the fact that we know that’s not true. Any advance in encryption has both offensive and defensive uses. In its most basic form, that means that while encoding or encrypting information might look defensive, the ability to decrypt or decode that information is offensive.

In another form, it means that no matter how carefully and thoroughly you develop your own encryption scheme, the very act of doing that does not merely suggest but ensures—particularly if your new technology gets adopted—that your opponents will use every means available to defeat it, including the (often, very paradoxically if viewed from the right angle, “open source”) information you’ve provided about how your technology works.

This isn’t a recipe for peace or for privacy. It’s an arms race. Cypherpunks might see it as some kind of perverse “peace war,” because they see themselves as “only” developing defensive techniques—although given the penchant of those folks for obscurity and anonymity, it’s really special pleading to think that the only people involved in these efforts are engaged in defense.

But they aren’t. They are developing at best new “missile shields,” and the response of offensive technologists has to be—it is required to be, and they are paid to do it—better missiles that can get by the shields.

Further, because these crypterati almost universally adopt an anarcho-capitalist or far-right libertarian hatred for everything about government, they seem unable to grasp the fact that the actual mission of law enforcement and military intelligence—the mission they have to do, even when they are following the law and the constitution perfectly—involves doing everything in their power to crack and penetrate every encryption scheme in use. They have to. One of the ways they do that is to hire the very folks who bray so loudly about the sweet nature of absolute technical privacy—and once on the other side, who is better at finding ways around cryptography than those who pride themselves on their superior hacking skills? And the very development of these skills entails the creation of the universal surveillance systems used by the NSA as revealed by Snowden and others.

The population caught in the middle of this arms race is not made more free by it. We are increasingly imprisoned by it. We are increasingly collateral damage. Rather than (or at least in addition to) escalation, we need to talk about a different paradigm entirely: disarmament.

Posted in "hacking", "social media", cyberlibertarianism, materiality of computation, privacy, rhetoric of computation, surveillance

Social Media as Political Control: The Facebook Study, Acxiom, & NSA

Although it didn’t break the major media until last week, around June 2 researchers led by Adam Kramer of Facebook published a study in the Proceedings of the National Academy of Sciences (PNAS) entitled “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” The publication has triggered a flood of complaints and concerns: is Facebook manipulating its users routinely, as it seems to admit in its defense of its practices? Did the researchers—two of whom were at universities (Cornell and the University of California-San Francisco) during the time the actual study was conducted in 2012—get proper approval for the study from the appropriate Institutional Review Board (IRB), required of all public research institutions (and most private institutions, especially if they take Federal dollars for research projects)? Was Cornell actually involved in the relevant part of the research (as opposed to analysis of previously-collected data)? Whether or not IRB approval was required, did Facebook meet reasonable standards for “informed consent”? Do Terms of Service agreements accomplish not just the letter but the spirit of the informed consent guidelines? Could Facebook see emotion changes in its individual users? Did it properly anonymize the data? Can Facebook manipulate our moods? Was the effect it noticed even significant in the way the study claims? Is Facebook manipulating emotions to influence consumer behavior?

While these are all critical questions, most of them seem to me to miss the far more important point, one that has so far been gestured at only by Zeynep Tufekci (“Facebook and Engineering the Public”; Michael Gurstein’s excellent “Facebook Does Mind Control,” which appeared almost simultaneously with this post, makes similar points to mine) and former Obama election campaign data scientist Clay Johnson (whose concerns are more than a little ironic). To see its full extent, we need to turn briefly to Glenn Greenwald and Edward Snowden. I have a lot to say about the Greenwald/Snowden story, which I’ll avoid going into too much for the time being, but for present purposes I want to note that one of the most interesting facets of that story is the question of exactly what each of them thinks the problem is that they are working so diligently to expose: is it a military intelligence agency out of control, an executive branch out of control, a judiciary failing to do its job, Congress not doing oversight the way some constituents would like, the American people refusing to implement the Constitution as Snowden/Greenwald think we should, and so on? Even more pointed: whoever it is we designate as the bad actors in this story, why are they doing it? To what end?

For Greenwald, the bad actors are usually found in the NSA and the executive branch (although, as an aside, his reporting seems often to show that all three branches of government are being read into or overseeing the programs as required by law, which definitely raises questions about who the bad guys actually are). More importantly, Greenwald has an analysis of why the bad actors are conducting warrantless, mass surveillance: he calls it “political control.” Brookings Institution Senior Fellow Benjamin Wittes has a close reading of Greenwald’s statements on this topic in No Place to Hide (see also Wittes’s insightful review of Greenwald’s book), where he finds the relevant gloss of “political control” in this quotation from Greenwald:

All of the evidence highlights the implicit bargain that is offered to citizens: pose no challenge and you have nothing to worry about. Mind your own business, and support or at least tolerate what we do, and you’ll be fine. Put differently, you must refrain from provoking the authority that wields surveillance powers if you wish to be deemed free of wrongdoing. This is a deal that invites passivity, obedience, and conformity. The safest course, the way to ensure being “left alone,” is to remain quiet, unthreatening, and compliant.

That is certainly a form of political control, and a disturbing one (though Wittes, I think very wisely, asks: if this is the goal of mass surveillance, why has it been so ineffective with regard to Greenwald himself and the other actors in the Snowden releases? Further, how was suppression-by-intimidation supposed to work when the programs were entirely secret, and were exposed only by the efforts of Greenwald and Snowden?). But it’s not the only form of political control, and I’m not at all sure it’s the most salient or most worrying of the kinds of political control enabled by ubiquitous, networked, archived communication itself: that is to say, by the functionality, not the technology, of social media itself.

Why I find it ironic that Clay Johnson should worry that Mark Zuckerberg might be able to “swing an election by promoting Upworthy posts 2 weeks beforehand” is that this is precisely, at a not very extreme level of abstraction, what political data scientists do in campaigns. In fact it’s not all that abstract: “The [Obama 2012] campaign found that roughly 1 in 5 people contacted by a Facebook pal acted on the request, in large part because the message came from someone they knew,” according to a Time magazine story, for example. In other words, the campaign itself did research on Facebook and how its users could be politically manipulated—swinging elections by measuring how much potential voters like specific movie stars, for example (in the case of Obama 2012, it turned out to be George Clooney and Sarah Jessica Parker). Johnson’s own company, Blue State Digital, developed a tool the Obama campaign used to significant advantage—“Quick Donate,” deployed so that “supporters can contribute with just a single click,” which might mean that it’s easy, or might mean that supporters act on impulse, before what Daniel Kahneman calls their “slow thinking” can consider the matter carefully.

Has it ever been thus? Yes, surely. But the level of control and manipulation possible in the digital era exceeds what was possible before to an almost unfathomable degree. “Predictive analytics” and big data and many other tools hint at a means for manipulating the public in all sorts of ways without its knowledge at all. These methods go far beyond manipulating emotions, and so focusing on the specific behavior modifications and effects achieved by this specific experiment strikes me as missing the point.


Facebook Security Agency (Image source: uncyclopedia)

Some have responded to this story along the lines of Erin Kissane—“get off Facebook”—or Dan Diamond—“If Facebook’s Secret Study Bothered You, Then It’s Time To Quit Facebook.” I don’t think this is quite the right response, for several reasons. It puts the onus on individuals to fix the problem, but individuals are not the source of the problem; the social network itself is. It’s not that users should get off of Facebook; it’s that the kind of services Facebook sells should not be available. I know that’s hard for people to hear, but it’s a thought that we have not just the right but the responsibility to consider in a democratic society: that the functionality itself might be too destructive (and disruptive) to what we understand our political system to be.

More importantly, despite these protestations, it isn’t possible to get off Facebook. For “Facebook” here read “data brokers,” because that’s what Facebook is in many ways, and as such it is part of a universe of hundreds and perhaps even thousands of companies (of which the most famous non-social-media company is Acxiom) that make monitoring, predicting, and controlling the behavior of people—that is, in the most literal sense, political control—their business. As Julia Angwin has demonstrated recently, we can’t get out of these services even if we want to, and to some extent the more we try, the more damaging we make the services to us as individuals. Further, these services aren’t concerned with us as individuals, as Marginal Utility blogger and New Inquiry editor Rob Horning (among his many excellent pieces on the structural, as opposed to the personal, impact of Facebook, see “Social Graph vs. Social Class,” “Hollow Inside,” “Social Media Are Not Self-Expression,” and some of his pieces at Generation Bubble; it’s also a frequent topic on his Twitter feed; Peter Strempel’s “Social Media as Technology of Control” is a sharp reflection on some of Horning’s writing) and I and others have been insisting for years. These effects occur at population levels, as probabilities: much as the Obama campaign did not care that much whether you or your neighbor voted for him, but did care that if it sprayed Chanel No. 5 in the air one June morning, one of the two of you was 80% likely to vote for him and the other was 40% likely not to go to the polls. Tufekci somewhat pointedly argued for this in the aftermath of the 2012 election:

Social scientists increasingly understand that much of our decision making is irrational and emotional. For example, the Obama campaign used pictures of the president’s family at every opportunity. This was no accident. The campaign field-tested this as early as 2007 through a rigorous randomized experiment, the kind used in clinical trials for medical drugs, and settled on the winning combination of image, message and button placement.

Further, you can’t even get off Facebook itself, which is why I disagree pretty strongly with the implications of a recent academic paper of Tufekci’s, in which she writes fairly hopefully about strategies activists use to evade social media surveillance, by performing actions “unintelligible to algorithms.” I think this only provides comfort if you are looking at individuals and at specific social media platforms, where it may well be possible to obscure what Jim is doing by using alternate identities, locations, and other means of obscuring who is doing what. But most of the big data and data mining tools focus on populations, not individuals, and on probabilities, not specific events. Here, I don’t think it matters a great deal whether you are purposely obscuring activities or not, because those “purposely obscured” activities also go into the big data hopper, also offer fuel for the analytical fire, and may well reveal much more than we think about intended future actions and behavior patterns, and also leave us much more susceptible than we know to relatively imperceptible behavioral manipulation.
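The arithmetic behind population-level targeting is simple enough to sketch. The numbers below are entirely hypothetical, chosen only to illustrate how probability shifts that are invisible at the individual level aggregate across a segment:

```python
# Hypothetical numbers illustrating population-level (probabilistic)
# targeting: a campaign or platform does not care whether any one person
# acts, only how an intervention shifts expected behavior across a segment.

segment_size = 100_000
p_act_before = 0.50   # assumed baseline probability of the desired action
p_act_after = 0.53    # assumed probability after a targeted nudge

expected_extra_actions = segment_size * (p_act_after - p_act_before)

# A 3-point shift that no individual would ever notice in themselves
# still moves thousands of expected actions across the segment.
assert round(expected_extra_actions) == 3000
```

This is why obscuring any one individual’s behavior does so little: the analysis never needed to be right about you in particular, only about the segment on average.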

Here it’s ironic that Max Schrems is in the news again, having just published a book in German called Kämpf um deine Daten (English: Fight for Your Data). Schrems is the spokesman for the Europe v. Facebook group, which is challenging in European courts not so much the NSA itself as the cooperation between the NSA and Facebook. A recent story about Schrems’s book in the major German newspaper Frankfurter Allgemeine Zeitung (FAZ) notes that what got Schrems concerned about the question of data privacy in the first place was this:

Schrems machte von seinem Auskunftsrecht Gebrauch und erwirkte im Jahr 2011 nach längerem Hin und Her die Herausgabe der Daten, die der Konzern über ihn gespeichert hatte. Er bekam ein pdf-Dokument mit Rohdaten, die, obwohl Schrems nur Gelegenheitsnutzer war, ausgedruckt 1222 Seiten umfassten – ein Umfang, den im letzten Jahrhundert nur Stasi-Akten von Spitzenpolitikern erreichten. Misstrauisch machte ihn, dass das Konvolut auch Daten enthielt, die er mit den gängigen Werkzeugen von Facebook längst gelöscht hatte.

Here’s a rough English translation, with help from Google Translate:

Schrems exercised his right of access to his data and, after a long back and forth, obtained in 2011 the data the company had stored about him. He received a PDF document of raw data that, although Schrems was only an occasional user, ran to 1,222 printed pages—a scale that in the last century only the Stasi files of top politicians reached. What made him suspicious was that the bundle also contained data he had long since deleted with Facebook’s standard tools.

In fact, it’s probably even worse, whether we consider data brokers like Acxiom (who maintain detailed profiles on us whether we like it or not) or Facebook itself, which it is reasonable to assume does just the same thing, whether we have signed up for it or not. And it is no doubt true that, as the great, skeptical data scientist Cathy O’Neil says over at her MathBabe blog, “this kind of experiment happens on a daily basis at a place like Facebook or Google.” This is the real problem; marking this specific project out as “research” and an unacceptable but unusual effort misses the point almost entirely. Google, Facebook, Twitter, the data brokers, and many more are giant research experiments, on us. “Informed consent” for this kind of experiment would have to be provided by the whole population, even those who don’t use social media at all, and the possible consequences would have to include “total derailing of your political system without your knowledge.”

(As an aside those who have gone out of their way to defend Facebook—see especially Brian Keegan and Tal Yarkoni—provide great examples of cyberlibertarianism in action, emotionally siding with corporate capital as itself a kind of social justice or political cause; Alan Jacobs provides a strong critique of this work.)

This, in the end, is part of why I find very disturbing Greenwald’s interpretation of Snowden’s materials, his relentless attacks on the US government, and no less his concern for US companies only insofar as their business has been harmed by the Snowden information he and others have publicized. Political control, in any reasonable interpretation of that phrase, refers to the manipulation of the public to take actions and maintain beliefs that they might not arrive at via direct appeal to logical argument. Emotion, prejudice, and impulse substitute for deep thinking and careful consideration. While Greenwald has presented some—but truthfully, only some—evidence that the NSA may engage in political control of this sort, he blames it on the government rather than on the existence of tools, platforms and capabilities that do not just enable but are literally structured around such manipulation. Bizarrely, even Julian Assange himself makes this point in his book Cypherpunks, but it’s a point Greenwald continues to shove aside. Social media is by its very nature a medium of political control. The issue is much less who is using it, and why they are using it, than that it exists at all. What we should be discussing—if we take the warnings of George Orwell and Aldous Huxley at all seriously—is not whether the NSA should have access to these tools. If the tools exist, and especially if we endorse some form of the nostrum that Greenwald in other modes rabidly supports, that information must be free, then we have no real way to prevent them from being used to manipulate us. How the NSA (and Facebook, and Acxiom) uses this information is of great concern, to be sure; but the question we are not asking is whether we should be curtailing not the specific users and uses, but the very existence of the data in the first place.
It is my view that as long as this data exists, its use for political control will be impossible to stop, and that the lack of regulation of private companies means that we should be even more concerned about how they use it (and how it is sold, and to whom) than we are about what heavily-regulated governments do with it. Regardless, in both cases, the solution cannot be to chase after the tails of these wild monkeys—it is to get rid of the bananas they are pursuing in the first place. Instead, we need to recognize what social media and data brokerage do: they do a kind of non-physical violence to our selves, our polities, and our independence. It is time at least to ask whether social media itself, or at least some parts of it—the functionality, not the technology—is too antithetical to basic human rights and to the democratic project for it to be acceptable in a free society.
