‘Is It Compromised?’ Is the Wrong Question about US Government Funding of Tor

In many ways, the most surprising thing about Yasha Levine’s powerful reporting on US government funding of Tor at Pando Daily has been the response to it. From the trolling attacks and ad hominem insults by apparently respectable, senior digital privacy activists and journalists, to the repeated, climate-denialist-style, evidence-free “I’m rubber, you’re glue” insinuations (or, as I like to call them, “You’re a towel” insinuations) about Levine’s and Pando’s possibly covert funding sources and intelligence-world connections, almost none of the response has had the measured and reasonable tone, let alone the connection to established facts, that Levine’s own reporting has.

Much of this response tries to turn Levine’s reporting into a conspiracy theory, which it then pretends to defuse by positing even wilder conspiracy theories. The conspiracy theory is that funding from the State Department, BBG, and all the various CIA and other intelligence agency cut-outs means that the Tor developers are covert agents or assets, secretly doing something very different from what they say they are doing, and that Tor is deliberately compromised in ways its developers are not revealing.

This turns into the question: “If the CIA is funding Tor, where are the vulnerabilities they are secretly planting in it? Why haven’t they been found via the classic principle that ‘all bugs are shallow when many eyes are looking for them’?”

For example, here are two comments to Levine’s “Internet Privacy, Funded by Spooks: A Brief History of the BBG”:

User “SpryteEast” writes: “this article could be great if it had proof. Most of modern-day cryptography technology was funded by US government at some point. Does it mean that they can break into everything?”

User “grumpykocka” writes: “Simple question: do you have proof that these systems have been compromised in any way? Technically, wouldn’t these open systems be incredibly hard to compromise without us knowing it? Perhaps they could be cracked, but you are implying much more than that, intentional back doors built into the code. But again, how would that go undetected in these open source solutions?”

This is from Tor & First Look staffer Micah Lee’s mean-spirited and defensive “Fact-Checking Pando’s Smears Against Tor,” responding to Levine’s earlier pieces:

If there were evidence of an intentional design flaw in the Tor network, similar to NSA’s sabotage of encryption standards through their BULLRUN program, that would be a huge deal. Pando didn’t find anything that wasn’t published on torproject.org.

This is the wrong question.

It’s a question so wrong and so enticing that it often derails the conversation we really should be having. It’s asked so often that it has the appearance of a party line, talking points that those involved issue with a remarkable persistence and uniformity. We don’t need to ask whether that party line is dictated by someone; what is more interesting is the party line itself.

[Image: a 1977 New York Times story about CIA’s propagandistic use of the press (from Yasha Levine’s most recent Pando story)]

CIA, like other intelligence agencies (for which I’ll use it as shorthand for the time being), is not a mind-control supervillain. It does not “own” assets (in the spy lingo, “agents” usually refers to actual employees, whereas “assets” are others who may in some way or another contribute to intelligence agency efforts on an ad-hoc basis) and prescribe every aspect of their behavior. Rather, it looks for assets whose interests may align with its own. At times it may nudge them in the direction it wants; only with some of those most closely tied in does it directly give orders. When CIA operates through cutouts, those cutouts typically appear to have full autonomy, and many in the cutout may well have that autonomy: that’s what gives cutouts credibility and what makes them useful. If everyone could easily have seen that Life magazine was a CIA front, people would have taken it much less seriously than they did.

CIA uses cutouts and assets for a much subtler purpose: because those apparently “regular” people and organizations, in doing what they do anyway, align with US state interests. They advance CIA’s interests just by being themselves. CIA has no need to control, direct, or even directly influence these assets: in certain ways, this would be less productive than remaining in the background.

From this perspective, the wrong question is to ask what CIA and State and so on are doing to “mess” with the Tor Project. The right question is to ask: how does the development of Tor, and in a parallel fashion the promotion of “internet freedom,” align with the interests of CIA, the State Department, USAID, and so on?

This is a question that it is very hard for cyberlibertarians even to put to themselves. They are so convinced of the righteousness of “internet freedom” and of Tor, so sure of its purpose and its politics, that many of them appear unable even to bear to ask whether these beliefs might be mistaken: that “internet freedom,” a slogan without a clear referent, might be a policy the US promotes for specific geostrategic reasons, in part because so many people hop on board without understanding that the “internet freedom” agenda is not what it sounds like; that Tor serves some very specific US interests.

Despite the conspiratorial accusations levied at Levine, his piece makes this focus very clear:

The BBG was formed in 1999 and runs on a $721 million annual budget. It reports directly to Secretary of State John Kerry and operates like a holding company for a host of Cold War-era CIA spinoffs and old school “psychological warfare” projects: Radio Free Europe, Radio Free Asia, Radio Martí, Voice of America, Radio Liberation from Bolshevism (since renamed “Radio Liberty”) and a dozen other government-funded radio stations and media outlets pumping out pro-American propaganda across the globe.

This does not mean, of course, that it’s uninteresting whether some people involved with Tor—perhaps especially those close to and/or funded by the OTF, as Levine points out—might be “assets” in some way or another, but we are likely never really to know the truth about covert shenanigans like that. It also doesn’t mean that questions about Tor being compromised are unimportant. It’s interesting to note that Micah Lee asks Levine to provide evidence of an “intentional design flaw in the Tor network”: evidence of intentionality would consist of communicative documentation that is only likely to turn up in unusual circumstances. But there is plenty of evidence of design flaws per se in the Tor network: they are found all the time, often by the Tor developers themselves. How did they get there? Who knows. But that is one reason why “is it compromised” is such a misguided question: we know Tor is compromised or has been compromised at times, and undoubtedly will be again. We don’t know who is responsible for its vulnerabilities: often they emerge from parts of the system nobody appears to have thought about, and sometimes nobody, not even those in the Tor community, knows their source.

But these are questions about which we can’t do much more than speculate. They are outweighed in importance by the central question about the ideology behind Tor. If you are asking how government funding compromises Tor and “internet freedom,” you are asking the wrong question. The right question is: how do Tor and “internet freedom” serve the interests of those who fund them so generously—and have virtually no history of funding (especially on an ongoing basis) projects that are contrary or even irrelevant to their interests? Why do major factions within the US Government so steadfastly promote an internet project whose supporters routinely insist that “the government sure does hate the Internet”?

We don’t have to look far or think that hard to develop answers to these questions. Just the other day, Shawn Powers and Michael Jablonski, authors of the new and fascinating-sounding book, The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015), announced the publication of their book by writing:

Efforts to create a universal internet built upon Western legal, political, and social preferences is driven by economic and geopolitical motivations rather than the humanitarian and democratic ideals that typically accompany related policy discourse. In fact, the freedom-to-connect movement is intertwined with broader efforts to structure global society in ways that favor American and Western cultures, economies, and governments.

The inability of many Tor and “internet freedom” and even super-encryption supporters to understand (or at least, to talk as if they understand) this point of view is part of what is so disturbing about this whole situation. “Internet freedom” and “internet privacy” and even “Tor” have become like articles of religious faith: creeds whose fundamental tenets cannot be questioned, even if they also cannot be stated with anything like the clarity with which “freedom of the press” can be stated. The critique we need to consider is not merely that major powers are “paying lip service” to the idea of internet freedom; it is that the idea itself is bankrupt: it is a propagandistic slogan in search of a meaning, a set of meaningful-sounding (but meaningless) words, like “right to work,” that exists only to serve a powerful and disturbing agenda (which is one reason the outsize “internet freedom” funding provided by the US State Department, and Google’s triumphalist support for the idea, should raise questions for everyone).

Indeed, if the putative freedom of information on which “the internet” (and Tor, and “internet freedom,” etc.) is supposedly based is going to mean anything–if it at least entails the “freedom of speech” and “freedom of the press” that in my opinion it does not eclipse in especially legible ways–it has to mean being willing always to question our fundamental assumptions, making it beyond ironic that its fiercest champions work so hard to prevent us from doing just that.

Posted in cyberlibertarianism, privacy, revolution, rhetoric of computation, what are computers for

Wikipedia and the Oligarchy of Ignorance

In a recent story on Medium called “One Man’s Quest to Rid Wikipedia of Exactly One Grammatical Mistake: Meet the Ultimate WikiGnome,” Andrew McMillen tells the story of Wikipedia editor “Giraffedata”—beyond the world of Wikipedia, a software engineer named Bryan Henderson—who has edited thousands of Wikipedia pages to correct a single grammatical error and is one of the 1000 most active editors of Wikipedia. McMillen describes Giraffedata as one of the “favorite Wikipedians” of some employees at the Wikimedia Foundation, the umbrella organization that funds and organizes Wikipedia along with other projects. The area he works on is not controversial (at least not in the sense of hot topics like GamerGate or climate change); his edits are typically not reverted in the way that substantive edits to such controversial topics frequently are. While the area he focuses on is idiosyncratic, his work is extremely productive. As such he is understood by at least some of the core Wikipedians to exemplify the power of crowds, the benefits of “organizing without organization,” the fundamental anti-hierarchical principles that apparently point toward new, better political formations.

McMillen describes a presentation at the 2012 Wikimania conference by two Wikimedia employees, Maryana Pinchuk and Steven Walling:

Walling lands on a slide entitled, ‘perfectionism.’ The bespectacled young man pauses, frowning.

“I feel sometimes that this motivation feels a little bit fuzzy, or a little bit negative in some ways… Like, one of my favorite Wikipedians of all time is this user called Giraffedata,” he says. “He has, like, 15,000 edits, and he’s done almost nothing except fix the incorrect use of ‘comprised of’ in articles.”

A couple of audience members applaud loudly.

“By hand, manually. No tools!” interjects Pinchuk, her green-painted fingernails fluttering as she gestures for emphasis.

“It’s not a bot!” adds Walling. “It’s totally contextual in every article. He’s, like, my hero!”

“If anybody knows him, get him to come to our office. We’ll give him a Barnstar in person,” says Pinchuk, referring to the coveted virtual medallion that Wikipedia editors award one another.

Walling continues: “I don’t think he wakes up in the morning and says, ‘I’m gonna serve widows in Africa with the sum of all human knowledge.’” He begins shaking his hands in mock frustration. “He wakes up and says, ‘Those fuckers — they messed it up again!’”

Neither the presenters nor McMillen follow up on Walling’s aside that Giraffedata’s work might be “a little bit negative in some ways.” But it seems arguable to me that this is the real story, and the celebration of Henderson’s efforts is not just misplaced, but symptomatic. Rather than demonstrating the salvific benefits of non-hierarchical organizations, Giraffedata’s work symbolizes their remarkable tendency to turn into formations that are the exact opposite of what the rhetoric suggests: deeply (if informally) hierarchical collectives of individuals strongly attached to their own power, and dismissive of the structuring elements built into explicit political institutions.

This is a well-known problem. It has been well-known at least since 1970 when Jo Freeman wrote “The Tyranny of Structurelessness”; it is connected to what Alexander Galloway has recently called “The Reticular Fallacy.” These critiques can be summed up fairly simply: when you deny an organization the formal power to distribute power equitably—to acknowledge the inevitable hierarchies in social groups and deal with them explicitly—you inevitably hand power over to those most willing to be ruthless and unflinching in their pursuit of it. In other words, in the effort to create a “more distributed” system, except in very rare circumstances where all participants are of good will and relatively equivalent in their ethics and politics, you end up creating exactly the authoritarian rule that your work seemed designed specifically to avoid. You end up giving even more unstructured power to exactly the persons that institutional strictures are designed to curtail.

That this is a general problem with Wikipedia has been noted by Aaron Shaw and Benjamin Mako Hill in a 2014 paper called “Laboratories of Oligarchy? How The Iron Law Extends to Peer Production.” Shaw and Mako Hill are fairly enthusiastic about Wikipedia and peer production, and yet their clear-eyed research, much of which is based on empirical as well as theoretical considerations, forces them to conclude:

Although, invoking U.S. Supreme Court Justice Louis Brandeis, online collectives have been hailed as contemporary “laboratories of democracy”, our findings suggest that they may not necessarily facilitate enhanced practices of democratic engagement and organization. Indeed, our results imply that widespread efforts to appropriate online organizational tactics from peer production may facilitate the creation of entrenched oligarchies in which the self-selecting and early-adopting few assert their authority to lead in the context of movements without clearly defined institutions or boundaries. (23)[1]

In the current case, what is so striking about Giraffedata’s work is that, from the perspective of every reasonable expert angle on the question, Giraffedata is just plain wrong. It is not a fact that “comprised of” is ungrammatical or that it means only what Giraffedata says it does. This is not at all controversial. In an excellent piece at The Guardian, “Why Wikipedia’s Grammar Vigilante Is Wrong,” David Shariatmadari demonstrates the many reasons why this is the case (though as usual, read the comments for typically brusque and/or ‘anti-elite’ elitist opinions to the contrary). Even better is “Can 50,000 Wikipedia Edits Be Wrong?” by Mark Liberman at Language Log, the leading linguistics site in the world, which has been covering this issue—that is, specifically the usage of “comprised of”—since at least 2011. Liberman wryly notes that “It doesn’t seem to have occurred to Mr. McMillen to check the issue out in the Oxford English Dictionary or in Merriam-Webster’s Dictionary of English Usage, or for that matter in literary history, where he might have appreciated the opportunity to correct Thomas Hardy… and also Charles Dickens.” Bizarrely, Wikipedia itself has a page on “comprised of” that endorses the linguists’ view, rather than Giraffedata’s.


Drawing the circle just a bit wider, Giraffedata is a linguistic prescriptivist in a world where the experts agree that prescriptivism is ideology rather than wisdom. Prescriptivism itself is an assertion of power in the name of one’s own authority that claims (erroneously) to be based on higher authorities that do not, in fact, exist. It is, in fact, one of the most persistent targets in writing by actual linguists from across the political spectrum: Liberman rightly calls it “authoritarian rationalism,” and he and Geoff Nunberg (another of the most prominent US linguists) have an interesting back-and-forth about its fit with general right-left politics.

At another level of abstraction, Henderson’s efforts exemplify a lust for power that entails a specific (if perhaps not entirely conscious) rejection of expertise over precisely the topic he cares about.[2] The development of “expertise” is exactly the kind of social, relatively ad-hoc but still structured distribution of power that the new structureless tyrants want to re-hierarchize, with themselves at the top. Does Henderson ask linguists about the rightness or wrongness of his judgment? As Liberman’s work points out, there are obvious, easily available resources in which Henderson might have checked his judgment; it does not appear even to have occurred to him to do so. As Shariatmadari points out, even in the McMillen article it becomes clear very quickly that Henderson is aware that the “error” he is “correcting” is not actually a matter even of grammar, but a judgment of taste based on several well-known linguistic fallacies (that synonyms should not exist, or that a word’s origin dictates its current meaning).

None of this is to say that it is “right” or “wrong” to adjust the style of Wikipedia with regard to Henderson’s word-choice hobbyhorse. But here again is another rejection of a perfectly reasonable and even useful form of distributed authority: editorial authority over a written product. Before Wikipedia, and still today, published encyclopedias and other publications have had rules called “house styles.” These are guidelines made up provisionally by the publishing house to enforce consistency in their work; some are extremely detailed and some are much looser. The house style for any given publication would dictate whether or not to use “comprised of” in the sense that upsets Henderson so much. It would not be a fact whether “comprised of” is right or wrong, but only a fact within the context of the publication. And this is actually a better account of how language works, or usually works: “this is how we do it here,” rather than “this is correct” and “this is incorrect.” (Wikipedia does have a very detailed Manual of Style, but it largely refrains from guidelines pertaining to usage, unlike the in-house style manuals of publications like The New Yorker or The Wall Street Journal.)

At the next level of abstraction, perhaps the most important one, the Wikimedia Foundation’s endorsement of Giraffedata’s work as among their “favorite” displays a kind of agnotology—a studied cultivation of ignorance—that feeds structureless tyrannies and authoritarian anti-hierarchies. In order to rule over those whose knowledge or expertise challenges you, the best route is to dismiss or mock that expertise wholesale, to rule it out as expertise at all, in favor of your own deeply-held convictions that you trumpet as a “new kind” of expertise that invalidates the “old,” “incumbent” kinds. This kind of agnotology is widespread in current Silicon Valley and digital culture; it is no less prominent in reactionary political culture, such as the Tea Party and rightist anti-science movements.

Thus Henderson’s work connects to the well-known disdain of many core Wikipedia editors for actual experts on specific topics, and even more so their stubborn resistance (speaking generally; of course there are exceptions) to the input of such experts, when one would expect exactly the opposite to be the case. (As a writer in Wired put it almost a decade ago, “The Wikipedia philosophy can be summed up thusly: ‘Experts are scum.’”) A world-leading expert on Topic A wants to help edit the page on that topic—is the right response to reach out to them and help guide them through what should be the minimal rules of your project? Or is it to mock and impugn them for having the temerity to think they are expert in something, in the face of the far more important project that you are expert in? One of Wikipedia’s several pages addressing this problem, “Wikipedia: Expert Retention,” notes:

If by “Wikipedia” one means its values as expressed in policy, then it can be said that Wikipedia definitely does not value expertise. Attempts to establish a policy on credential verification have failed. There are competing essays that say credentials are irrelevant and that credentials matter. An attempt to push through a policy to ignore all credentials failed, though it received considerable support.

The culture of Wikipedia has no single commonly held view, as is illustrated in the discussion pages of the above cited essays and proposals. However, the lack of consensus (and indeed doggedly opposed parties) results in a perceived lack of respect for expertise, a deference normally found elsewhere in society. Anti-expertise positions often are not acted against, so they are in effect encouraged. And as they are encouraged, they more than negate any positive regard for expertise, since the latter is only expressed, at present, in the consideration given by individual editors to those whom they recognize as experts. (emphasis added)

This is why it’s important that the Wikimedia Foundation employees pass so quickly over the possible “negatives” in Giraffedata’s work, and choose to single him out for praise. This is exactly what the most persistent members of the Wikipedia community (if, I think, unconsciously) want: the disparagement of existing (or, in Silicon Valley terminology, “incumbent”) structures for distributing power in the name of a “democratization” that is actually driven by a significant lust for power, one not patient enough to develop its own distributive structures (that is, to work on developing a house style for Wikipedia, or to, I don’t know, study linguistics). In this way, too much of peer production seems like a marketing sheen placed over a very clear and antidemocratic lust for personal power, much as the 1970s communes were, but writ large and with very central social pillars in its sights.

As Freeman’s work has always suggested, Wikipedia’s structurelessness is very easily seen not as a social miracle of cooperation but as a breeding ground for tyrants; this makes the brute rejection of her reasoning in favor of Hayekian “spontaneous orders” of knowledge (or ignorance) all the more striking.[3] Mako Hill and Shaw: “the adoption of peer production’s organizational forms may inhibit the achievement of enhanced organizational democracy” (22). That they do this in the name of democracy makes them characteristic of the contemporary, digitally-inspired agnotological oligarchy.


[1] It is worth noting that Shaw and Mako Hill rely in part on the so-called “Iron Law of Oligarchy” postulated by the proto-Fascist and Fascist sociologist Robert Michels in the early part of the 20th century. Michels actually thought it applies to all democratic organizations and cannot be prevented, but Shaw and Mako Hill rely on a great deal of post-Michels research that tends to give greater weight to formal methods of preventing oligarchy than Michels did.

[2] “Lust for power” is the usual English translation of the German Machtgelüst, which appears prior to the better-known “will to power” in Nietzsche and, unlike the latter term, is specifically meant to indicate the cathection of desire toward personal power.

[3] Wikipedia founder Jimmy Wales famously traces his philosophical inspiration for Wikipedia to Friedrich Hayek’s 1945 essay “The Use of Knowledge in Society”; Philip Mirowski, our most trenchant critic of neoliberalism, has repeatedly demonstrated the ways in which Hayek’s views specifically advocate ignorance.

Posted in "social media", cyberlibertarianism, rhetoric of computation

Tor Is Not a “Fundamental Law of the Universe”

In what I consider a very welcome act of journalistic open-mindedness, Pando Daily recently published a piece by Quinn Norton that responded both to Yasha Levine’s excellent and necessary piece on the US Government’s funding of the Tor Project and, perhaps even more, to his follow-up piece on the amazing attacks the first piece received from some of the brightest stars in the encryption, “internet freedom,” and Tor universe.

I want to focus on a small part of Norton’s piece, in which she tries to explain the vicious attacks on Levine’s piece:

The incoherent frothing-at-the-mouth support for the fundamentals of Tor don’t arise from a set of politics, or money, or a particular arrangement of social trust like a statute or constitutional law. The support comes from an appeal to the fundamental laws of the universe, which not even the most vigorous of black budget ops can break.

Yes, the Tor people somehow believe that Tor itself implements a “fundamental law of the universe,” and that their privileged technical knowledge grants them special access that the rest of us lack. That is false, breathless narcissism and arrogance at its most outrageous, and very typical of our digital age.

There are fundamental laws of the universe: that something with mass cannot move faster than the speed of light in a vacuum; that matter can neither be created nor destroyed, but only converted into energy, and vice versa. These are fundamental laws that DoD can’t change. All technologies dip into these fundamental laws, to greater and lesser degrees.
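For concreteness (the equation is standard physics, not something from Norton’s or Levine’s text), the conversion that second law describes is quantified by Einstein’s relation:

```latex
E = mc^{2},
\qquad\text{e.g. } m = 10^{-3}\,\mathrm{kg}
\;\Rightarrow\;
E = 10^{-3} \times \left(3\times 10^{8}\right)^{2} = 9\times 10^{13}\,\mathrm{J}.
```

A quantitative statement like this is what a fundamental law looks like: it holds regardless of anyone’s funding or politics.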

Tor is not a fundamental law of the universe. Math is *fairly* fundamental, but even the simplest math–say, 2 + 2 = 4–is NOT a fundamental law. Addition obtains under some circumstances, and not under others (this is part of the point of the revolutions in mathematical theory of the 19th century, including non-Euclidean geometry).
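To make that point concrete, here is a toy sketch (my illustration, not anything in Norton’s or Levine’s text) of how the “same” addition yields different results in different mathematical structures:

```python
# Ordinary integer addition: 2 + 2 = 4.
assert 2 + 2 == 4

# But "addition" is only defined relative to a structure. In modular
# arithmetic (here, the integers mod 3), the same operation gives a
# different answer:
assert (2 + 2) % 3 == 1

# Clock arithmetic is the everyday case: 11 o'clock plus 2 hours is
# 1 o'clock on a 12-hour clock, not 13.
assert (11 + 2) % 12 == 1

print("all assertions hold")
```

None of which makes arithmetic unreliable; it just means that even “simple” math is fundamental only relative to the system in which it is carried out.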

Grammatically, the phrase “Tor is/is not a fundamental law of the universe”—which, to be clear, is my phrase, not Norton’s—makes no sense. But other than the vague notion of “mathematical laws,” which she does not even directly invoke, Norton’s statement that Tor advocates “appeal” to the “fundamental laws of the universe” is conceptually no clearer. There are not that many fundamental laws. Tor “appeals” to them no more or less than, say, the NSA does when it uses satellites that rely in part on relativistic physics for geolocation. Relativity itself is a strange candidate for a “fundamental” law, for lots of interesting philosophical and scientific reasons (which does not mean it is not fundamental); my point is that the belief of Tor advocates that they are tapping into something over and above what the rest of us have access to is misbegotten and hubristic in the extreme. If what the Tor people are trying to show is that their cryptographic procedures are sound, fine. But that is not what we are talking about. We are talking about the use of Tor in the world.

[Image: “Tor for freedom”]

The math on which Tor is based appears solid, as far as I can tell not being a cryptographic mathematician myself, according to both the Tor developers and outside analysts. Yet the actions Tor exists to enable–the use of digital communications systems in our world, supposedly opaque to traffic analysis (the main purpose Tor’s development team claim for their product)–are governed by no parallel solidity. It is part of a huge social apparatus. No matter how perfect the math, as long as that apparatus is large and involves people, it cannot be governed entirely by an unbreakable fundamental law of the universe. One is not challenging the laws of motion by buying out the operator of a Tor relay or exit node. It would be more accurate to say that a fundamental law of the social world is that people–including many of the same hackers Quinn Norton defends and celebrates in other writings–will do everything they can to find their way in to systems like Tor.

Further, we have historical evidence for this. People keep breaking Tor (something I am personally happy about). Sometimes the Tor developers seem mystified. But I think people will keep breaking it (and so, apparently, do the Tor developers themselves). And I do think those who fund it are aware of its systematic vulnerabilities in a way those hypnotized by its math are not, which is why the story of its funding is actually extremely relevant. Tor is not something that cannot be controlled by the state: it can be, or at least degraded significantly; states could even shut it down altogether through a program of disallowing the wide range of onion routing services, relays, and nodes that Tor depends on to run, and continuing to disallow any new services that Tor were to set up.

Tor has edges. It has layers. It has connections with other services. It has physical systems on which it sits, binaries that must be compiled, and hundreds of other points of vulnerability even granted that onion routing is mathematically robust to breaking. Oddly enough, Google Director of Engineering and life-extension supplement magnate Ray Kurzweil, whom I consider a nut, believes Moore’s law is a “fundamental law of the universe” (or, more accurately, a corollary to the “law of accelerating returns” that he thinks is fundamental). I think he is wrong about that too, but whether it is fundamental or not, Moore’s law suggests that computers will eventually get fast enough to break current encryption routines–perhaps breaking Tor, perhaps making historical records of Tor use much more subject to network analysis than they seem today. And who knows? Maybe there is some kind of non-Euclidean (or some other alternative) approach to cryptography nobody has stumbled on yet that will square the circle in a way none of us can foresee.
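To see what that Moore’s-law reasoning amounts to, here is a purely illustrative back-of-the-envelope sketch (my own; the 18-month doubling period and the billion-fold shortfall are assumptions for illustration, not claims from the text):

```python
import math

def doublings_needed(speedup: float) -> int:
    """How many capacity doublings are needed to achieve a given speedup."""
    return math.ceil(math.log2(speedup))

# Suppose an attack on some cryptosystem is a billion times too slow
# to be practical today. At one doubling every ~18 months (the usual
# Moore's-law figure, if the trend held), it would become feasible in:
d = doublings_needed(1e9)          # 30 doublings
print(d * 1.5, "years")            # prints: 45.0 years
```

The arithmetic is trivial; the contested part is whether the exponential trend (and today’s key sizes) hold still for decades, which is exactly why “fundamental law” is the wrong description.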

It is precisely the hubris and arrogance of Tor and its developers–the hubris of computationalists who believe in their self-adjudicated superior knowledge of machines compared to everyone else, who believe that their access to fundamental computational/mathematical “laws” insulates them from society and politics–that makes this discussion so difficult. Tor’s amazing reactions to the Pando coverage are exactly the kind of petulant, arrogant, dismissive, power-hungry reaction computationalists typically have when anyone they deem unworthy dares actually to ask them to account for themselves. They think they are above us, in so many different ways. They are not, and they hate anyone who dares to suggest that they are just people too.

Posted in "hacking", cyberlibertarianism, revolution, rhetoric of computation

All Cybersecurity Technology Is Dual-Use

Dan Geer is one of the more interesting thinkers about digital security and privacy around. Geer is a sophisticated technologist with an extremely varied and rich background who has also, fairly recently, become a spook of some kind. Geer is currently the Chief Information Security Officer for In-Q-Tel, the technology investment subsidiary of the CIA, popularly and paradoxically known as a “not-for-profit venture capital firm,” but which gets much more directly involved with its investment targets with the intent of providing “‘ready-soon innovation’ (within 36 months) vital to the IC [intelligence community] mission,” and therefore shuns the phrase “venture capital.”

This might lead one to expect Geer to speak as what Glenn Greenwald likes to call a “government stenographer,” but I find his speeches and writings both unusually incisive and extremely independent-minded. He often says things that nobody else says, and he says them from a position of knowledge and experience. And what he says often does not line up either with what one imagines “government” thinks or with what many in industry want; he has recently suggested, contrary to what Google and many “digital freedom” advocates affirm, that the European “Right to Be Forgotten” actually does not go far enough in protecting privacy.

In his talk at the 2014 Black Hat USA conference, the same talk where he made remarks about the Right to Be Forgotten, called “Cybersecurity as Realpolitik” (text; video), Geer made the following deeply insightful observation:

All cyber security technology is dual use.

Here’s the full context of that statement:

Part of my feeling stems from a long-held and well-substantiated belief that all cyber security technology is dual use. Perhaps dual use is a truism for any and all tools from the scalpel to the hammer to the gas can — they can be used for good or ill — but I know that dual use is inherent in cyber security tools. If your definition of “tool” is wide enough, I suggest that the cyber security tool-set favors offense these days. Chris Inglis, recently retired NSA Deputy Director, remarked that if we were to score cyber the way we score soccer, the tally would be 462-456 twenty minutes into the game,[CI] i.e., all offense. I will take his comment as confirming at the highest level not only the dual use nature of cybersecurity but also confirming that offense is where the innovations that only States can afford is going on.

Nevertheless, this essay is an outgrowth from, an extension of, that increasing importance of cybersecurity. With the humility of which I spoke, I do not claim that I have the last word. What I do claim is that when we speak about cybersecurity policy we are no longer engaging in some sort of parlor game. I claim that policy matters are now the most important matters, that once a topic area, like cybersecurity, becomes interlaced with nearly every aspect of life for nearly everybody, the outcome differential between good policies and bad policies broadens, and the ease of finding answers falls. As H.L. Mencken so trenchantly put it, “For every complex problem there is a solution that is clear, simple, and wrong.”


Dan Geer at the Black Hat USA 2014 conference (Photo: Threatpost)

Now what Geer means by “dual-use” here is one of the term’s ordinary meanings: all cybersecurity technology (and really all digital technology) has both civilian and military uses.

But we can expand that, as Geer suggests when he mentions the scalpel, hammer, and gas can, in another way the term is sometimes used: all cybersecurity technology has both offensive and defensive uses.

This basic fact, which is obvious from any careful consideration of game theory or of military or intelligence history, seems absolutely lost on the most vocal and most active proponents of personal security: the “cypherpunks” and crypto advocates who continually bombard us with the recommendation that we “encrypt everything.” (In “Opt-Out Citizenship” I describe the anti-democratic nature of the end-to-end encryption movement.)

Not only that: I don’t think “cybersecurity” technology is a broad enough term, either: it would be better to say that a huge amount of digital technology is dual-use. That is to say that a great deal of digital technology has uses to which it can be and will be put that are neither obvious nor necessarily intended by its developers or even its users, and that often work in exactly the opposite way from what its developers or advocates say (or think).

This is part of what drives me absolutely crazy about the cypherpunks and other crypterati who have come out in droves in the wake of the Snowden revelations.

They act and write as if they control what they do; as if, unlike the rest of the people in the world, what they do will be accepted as-is, will end the story, will have only the direct effects they intend.

Thus, they write as if significantly upping the amount and efficacy of encryption on the web is something that “bad” hackers and “bad” cypherpunks will just accept.

Despite the fact that we know that’s not true. Any advance in encryption has both offensive and defensive uses. In its most basic form, that means that while encoding or encrypting information might look defensive, the ability to decrypt or decode that information is offensive.
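
In the simplest symmetric setting the point is literal: the same operation both applies and removes the protection. A toy stream cipher of my own construction (deliberately weak; nothing real systems should use) makes this concrete:

```python
# Toy illustration of dual use: with a simple XOR stream cipher, the
# "defensive" encrypt function and the "offensive" decrypt function are
# literally the same code.

def xor_stream(data: bytes, key: bytes) -> bytes:
    """XOR `data` against a repeating key; applying it twice round-trips."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"meet at noon"
key = b"sekrit"
ciphertext = xor_stream(msg, key)        # used defensively: protect the message
recovered = xor_stream(ciphertext, key)  # the same function, used to strip the protection
assert recovered == msg
```

Real ciphers separate these roles more carefully, but the underlying symmetry remains: whoever holds the key, attacker or defender, wields the identical capability.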

In another form, it means that no matter how carefully and thoroughly you develop your own encryption scheme, the very act of doing that does not merely suggest but ensures—particularly if your new technology gets adopted—that your opponents will use every means available to defeat it, including the (often, very paradoxically if viewed from the right angle, “open source”) information you’ve provided about how your technology works.

This isn’t a recipe for peace or for privacy. It’s an arms war. Cypherpunks might see it as some kind of perverse “peace war,” because they see themselves “only” developing defensive techniques—although given the penchant of those folks for obscurity and anonymity, it’s really special pleading to think that the only people involved in these efforts are engaged in defense.

But they aren’t. They are developing at best new “missile shields,” and the response of offensive technologists has to be—it is required to be, and they are paid to do it—better missiles that can get by the shields.

Further, because these crypterati almost universally adopt an anarcho-capitalist or far-right libertarian hatred for everything about government, they seem unable to grasp the fact that the actual mission of law enforcement and military intelligence—the mission they have to do, even when they are following the law and the constitution perfectly—involves doing everything in their power to crack and penetrate every encryption scheme in use. They have to. One of the ways they do that is to hire the very folks who bray so loudly about the sweet nature of absolute technical privacy—and once on the other side, who is better at finding ways around cryptography than those who pride themselves on their superior hacking skills? And the very development of these skills entails the creation of the universal surveillance systems used by the NSA as revealed by Snowden and others.

The population caught in the middle of this arms war is not made more free by it. We are increasingly imprisoned by it. We are increasingly collateral damage. Rather than (or at least in addition to) escalation, we need to talk about a different paradigm entirely: disarmament.

Posted in "hacking", "social media", cyberlibertarianism, materiality of computation, privacy, rhetoric of computation, surveillance

Social Media as Political Control: The Facebook Study, Acxiom, & NSA

Although it didn’t break the major media until last week, around June 2 researchers led by Adam Kramer of Facebook published a study in the Proceedings of the National Academy of Sciences (PNAS) entitled “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” The publication has triggered a flood of complaints and concerns: is Facebook manipulating its users routinely, as it seems to admit in its defense of its practices? Did the researchers—two of whom were at universities (Cornell and the University of California-San Francisco) during the time the actual study was conducted in 2012—get proper approval for the study from the appropriate Institutional Review Board (IRB), required of all public research institutions (and most private institutions, especially if they take Federal dollars for research projects)? Was Cornell actually involved in the relevant part of the research (as opposed to analysis of previously-collected data)? Whether or not IRB approval was required, did Facebook meet reasonable standards for “informed consent”? Do Terms of Service agreements accomplish not just the letter but the spirit of the informed consent guidelines? Could Facebook see emotion changes in its individual users? Did it properly anonymize the data? Can Facebook manipulate our moods? Was the effect it noticed even significant in the way the study claims? Is Facebook manipulating emotions to influence consumer behavior?

While these are all critical questions, most of them seem to me to miss the far more important point, one that has so far been gestured at only by Zeynep Tufekci (“Facebook and Engineering the Public”; Michael Gurstein’s excellent “Facebook Does Mind Control,” which appeared almost simultaneously with this post, makes similar points to mine) and former Obama election campaign data scientist Clay Johnson (whose concerns are more than a little ironic). To see its full extent, we need to turn briefly to Glenn Greenwald and Edward Snowden. I have a lot to say about the Greenwald/Snowden story, which I’ll avoid going into too much for the time being, but for present purposes I want to note that one of the most interesting facets of that story is the question of exactly what each of them thinks the problem is that they are working so diligently to expose: is it a military intelligence agency out of control, an executive branch out of control, a judiciary failing to do its job, Congress not doing oversight the way some constituents would like, the American people refusing to implement the Constitution as Snowden/Greenwald think we should, and so on? Even more pointedly: whomever we designate as the bad actors in this story, why are they doing it? To what end?

For Greenwald, the bad actors are usually found in the NSA and the executive branch (although, as an aside, his reporting seems often to show that all three branches of government are being read into or overseeing the programs as required by law, which definitely raises questions about who the bad guys actually are). More importantly, Greenwald has an analysis of why the bad actors are conducting warrantless, mass surveillance: he calls it “political control.” Brookings Institution Senior Fellow Benjamin Wittes has a close reading of Greenwald’s statements on this topic in No Place to Hide (see also Wittes’s insightful review of Greenwald’s book), where he finds the relevant gloss of “political control” in this quotation from Greenwald:

All of the evidence highlights the implicit bargain that is offered to citizens: pose no challenge and you have nothing to worry about. Mind your own business, and support or at least tolerate what we do, and you’ll be fine. Put differently, you must refrain from provoking the authority that wields surveillance powers if you wish to be deemed free of wrongdoing. This is a deal that invites passivity, obedience, and conformity. The safest course, the way to ensure being “left alone,” is to remain quiet, unthreatening, and compliant.

That is certainly a form of political control, and a disturbing one (though Wittes I think very wisely asks: if this is the goal of mass surveillance, why is it so ineffective with regard to Greenwald himself and the other actors in the Snowden releases? Further, how was suppression-by-intimidation supposed to work when the programs were entirely secret, and exposed only by the efforts of Greenwald and Snowden?). But it’s not the only form of political control, and I’m not at all sure it’s the most salient or most worrying of the kinds of political control enabled by ubiquitous, networked, archived communication itself: that is to say the functionality, not the technology, of social media itself.

The reason I find it ironic that Clay Johnson should worry that Mark Zuckerberg might be able to “swing an election by promoting Upworthy posts 2 weeks beforehand” is that this is precisely, at a not very extreme level of abstraction, what political data scientists do in campaigns. In fact it’s not all that abstract: “The [Obama 2012] campaign found that roughly 1 in 5 people contacted by a Facebook pal acted on the request, in large part because the message came from someone they knew,” according to a Time magazine story, for example. In other words, the campaign itself did research on Facebook and how its users could be politically manipulated—swinging elections by measuring how much potential voters like specific movie stars, for example (in the case of Obama 2012, it turned out to be George Clooney and Sarah Jessica Parker). Johnson’s own company, Blue State Digital, developed a tool the Obama campaign used to significant advantage—“Quick Donate,” deployed so that “supporters can contribute with just a single click,” which might mean that it’s easy, or might mean that supporters act on impulse, before what Daniel Kahneman calls their “slow thinking” can consider the matter carefully.

Has it ever been thus? Yes, surely. But the level of control and manipulation possible in the digital era exceeds what was possible before by an almost unfathomable extent. “Predictive analytics,” big data, and many other tools hint at a means for manipulating the public in all sorts of ways without their knowledge at all. These methods go far beyond manipulating emotions, and so focusing on the specific behavior modifications and effects achieved by this specific experiment strikes me as missing the point.


Facebook Security Agency (Image source: uncyclopedia)

Some have responded to this story along the lines of Erin Kissane— “get off Facebook”—or Dan Diamond—“If Facebook’s Secret Study Bothered You, Then It’s Time To Quit Facebook.” I don’t think this is quite the right response for several reasons. It puts the onus on individuals to fix the problem, but individuals are not the source of the problem; the social network itself is. It’s not that users should get off of Facebook; it’s that the kind of services Facebook sells should not be available. I know that’s hard for people to hear, but it’s a thought that we have not just the right but the responsibility to consider in a democratic society: that the functionality itself might be too destructive (and disruptive) to what we understand our political system to be.

More importantly, despite these protestations, it isn’t possible to get off Facebook. For “Facebook” here read “data brokers,” because that’s what Facebook is in many ways, and as such it is part of a universe of hundreds and perhaps even thousands of companies (of which the most famous non-social media company is Acxiom) who make monitoring, predicting, and controlling the behavior of people—that is, in the most literal sense, political control—their business. As Julia Angwin has demonstrated recently, we can’t get out of these services even if we want to, and to some extent the more we try, the more damaging we make the services to us as individuals. Further, these services aren’t concerned with us as individuals, as Marginal Utility blogger and New Inquiry editor Rob Horning (among his many excellent pieces on the structural as opposed to the personal impact of Facebook, see “Social Graph vs. Social Class,” “Hollow Inside,” “Social Media Are Not Self-Expression,” and some of his pieces at Generation Bubble; it’s also a frequent topic on his Twitter feed; Peter Strempel’s “Social Media as Technology of Control” is a sharp reflection on some of Horning’s writing) and I and others have been insisting for years: these effects occur at population levels, as probabilities: much as the Obama campaign did not care that much whether you or your neighbor voted for him, but did care that if they sprayed Chanel No. 5 in the air one June morning, one of the two of you was 80% likely to vote for him, and the other was 40% likely not to go to the polls. Tufekci somewhat pointedly argued for this in the aftermath of the 2012 election:

Social scientists increasingly understand that much of our decision making is irrational and emotional. For example, the Obama campaign used pictures of the president’s family at every opportunity. This was no accident. The campaign field-tested this as early as 2007 through a rigorous randomized experiment, the kind used in clinical trials for medical drugs, and settled on the winning combination of image, message and button placement.

Further, you can’t even get off Facebook itself, which is why I disagree pretty strongly with the implications of a recent academic paper of Tufekci’s, in which she writes fairly hopefully about strategies activists use to evade social media surveillance, by performing actions “unintelligible to algorithms.” I think this only provides comfort if you are looking at individuals and at specific social media platforms, where it may well be possible to obscure what Jim is doing by using alternate identities, locations, and other means of obscuring who is doing what. But most of the big data and data mining tools focus on populations, not individuals, and on probabilities, not specific events. Here, I don’t think it matters a great deal whether you are purposely obscuring activities or not, because those “purposely obscured” activities also go into the big data hopper, also offer fuel for the analytical fire, and may well reveal much more than we think about intended future actions and behavior patterns, and also leave us much more susceptible than we know to relatively imperceptible behavioral manipulation.
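
A back-of-envelope sketch, with numbers I have invented purely for illustration, shows why individual obfuscation does little at this level: the expected aggregate swing depends only on the total probability shift across the population, not on which particular individuals it reaches.

```python
# Illustration only (all numbers invented): at population scale, only the
# aggregate probability shift matters. Whether any given individual hides or
# scrambles their own data leaves the expected swing essentially unchanged.

import math

def expected_extra_votes(nudges):
    """Expected additional votes, given each person's turnout-probability shift.

    Only the total shift matters -- not which individual received which nudge.
    """
    return sum(nudges)

population = 1_000_000
uniform = [0.004] * population                              # everyone nudged 0.4 points
half_obscured = [0.008] * (population // 2) + [0.0] * (population // 2)

# Same expected swing (about 4,000 votes) either way:
assert math.isclose(expected_extra_votes(uniform),
                    expected_extra_votes(half_obscured), rel_tol=1e-6)
```

The design point is the linearity: because the expectation is just a sum, redistributing who gets nudged (or who opts out) changes nothing at the level where the manipulation operates.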

Here it’s ironic that Max Schrems is in the news again: he has just published a book in German, Kämpf um deine Daten (English: Fight for Your Data), and he is spokesman for the Europe v. Facebook group that is challenging in European courts not so much the NSA itself as the cooperation between the NSA and Facebook. A recent story about Schrems’s book in the major German newspaper Frankfurter Allgemeine Zeitung (FAZ) notes that what got Schrems concerned about the question of data privacy in the first place was this:

Schrems machte von seinem Auskunftsrecht Gebrauch und erwirkte im Jahr 2011 nach längerem Hin und Her die Herausgabe der Daten, die der Konzern über ihn gespeichert hatte. Er bekam ein pdf-Dokument mit Rohdaten, die, obwohl Schrems nur Gelegenheitsnutzer war, ausgedruckt 1222 Seiten umfassten – ein Umfang, den im letzten Jahrhundert nur Stasi-Akten von Spitzenpolitikern erreichten. Misstrauisch machte ihn, dass das Konvolut auch Daten enthielt, die er mit den gängigen Werkzeugen von Facebook längst gelöscht hatte.

Here’s a rough English translation, with help from Google Translate:

Schrems exercised his right to access his data and, after a long back and forth, obtained in 2011 the data the company had stored about him. He received a PDF document of raw data which, although Schrems was only an occasional user, ran to 1,222 printed pages–a scale that in the last century only the Stasi files of top politicians attained. What made him suspicious was that the documents also contained data he had long since deleted with the normal Facebook tools.

In fact, it’s probably even worse, both if we consider data brokers like Acxiom (who maintain detailed profiles on us whether we like it or not), or even Facebook itself, which it is reasonable to assume does just the same thing, whether we have signed up for it or not. And it is no doubt true that, as the great, skeptical data scientist Cathy O’Neil says over at her MathBabe blog, “this kind of experiment happens on a daily basis at a place like Facebook or Google.” This is the real problem; marking this specific project out as “research” and an unacceptable but unusual effort misses the point almost entirely. Google, Facebook, Twitter, the data brokers, and many more are giant research experiments, on us. “Informed consent” for this kind of experiment would have to be provided by the whole population, even those who don’t use social media at all, and the possible consequences would have to include “total derailing of your political system without your knowledge.”

(As an aside those who have gone out of their way to defend Facebook—see especially Brian Keegan and Tal Yarkoni—provide great examples of cyberlibertarianism in action, emotionally siding with corporate capital as itself a kind of social justice or political cause; Alan Jacobs provides a strong critique of this work.)

This, in the end, is part of why I find very disturbing Greenwald’s interpretation of Snowden’s materials, and his relentless attacks on the US government, and no less his concern for US companies only in so far as their business has been harmed by the Snowden information he and others have publicized. Political control, in any reasonable interpretation of that phrase, refers to the manipulation of the public to take actions and maintain beliefs that they might not arrive at via direct appeal to logical argument. Emotion, prejudice, and impulse substitute for deep thinking and careful consideration. While Greenwald has presented some—but truthfully, only some—evidence that the NSA may engage in political control of this sort, he blames it on the government rather than on the existence of tools, platforms and capabilities that do not just enable but are literally structured around such manipulation. Bizarrely, even Julian Assange himself makes this point in his book Cypherpunks, but it’s a point Greenwald continues to shove aside. Social media is by its very nature a medium of political control. The issue is much less who is using it, and why they are using it, than that it exists at all. What we should be discussing—if we take the warnings of George Orwell and Aldous Huxley at all seriously—is not whether NSA should have access to these tools. If the tools exist, and especially if we endorse some form of the nostrum that Greenwald in other modes rabidly supports, that information must be free, then we have no real way to prevent it from being used to manipulate us. How the NSA (and Facebook, and Acxiom) uses this information is of great concern, to be sure: but the question we are not asking is whether it is not the specific users and uses we should be curtailing, but the very existence of the data in the first place. 

It is my view that as long as this data exists, its use for political control will be impossible to stop, and that the lack of regulation in private companies means that we should be even more concerned about how they use it (and how it is sold, and to whom) than we are about what heavily-regulated governments do with it. Regardless, in both cases, the solution cannot be to chase after the tails of these wild monkeys—it is to get rid of the bananas they are pursuing in the first place. Instead, we need to recognize what social media and data brokerage does: it does a kind of non-physical violence to our selves, our polities, and our independence. It is time at least to ask whether social media itself, or at least some parts of it—the functionality, not the technology—is too antithetical to basic human rights and to the democratic project for it to be acceptable in a free society.

Posted in "social media", cyberlibertarianism, digital humanities, privacy, surveillance, we are building big brother

Bitcoinsanity 2: Revolutions in Rhetoric

Bitcoin is touted, publicized and promoted as an innovation in financial technology. Usually those doing the promoting have very little experience with finance in general or with financial technology in particular–a huge, booming industry mostly made up of proprietary technologies that those of us who don’t work for major banks or trading firms know very little about–but are happy to claim with tremendous certainty that this particular financial technology is utterly transformative.

(As a side note, the blockchain itself is not inherently financial technology, and it may well prove more useful and interesting in contexts other than finance, such as the “fully decentralized data management service” offered by companies like MaidSafe; these kinds of developments are preliminary enough that I don’t think it’s yet possible to judge their usefulness).
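
To see why the blockchain is not inherently financial, it helps to remember how minimal its non-financial core is: a hash-linked log. This toy sketch of my own (real systems add consensus, proof-of-work, and much else) records arbitrary data, not transactions:

```python
# A minimal hash chain: each block commits to the previous block's hash,
# so tampering with earlier entries breaks every later link. The payload
# can be any data at all -- nothing about this is specific to money.

import hashlib
import json

def make_block(data, prev_hash):
    """One link in a hash chain; the payload is any JSON-serializable data."""
    block = {"data": data, "prev": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def verify(chain):
    """A chain is intact only if each block points at its predecessor's hash."""
    return all(b["prev"] == a["hash"] for a, b in zip(chain, chain[1:]))

genesis = make_block("any payload at all", "0" * 64)
nxt = make_block({"filename": "notes.txt", "note": "a non-financial record"},
                 genesis["hash"])
assert verify([genesis, nxt])

# Replacing the first block's data changes its hash and breaks the chain:
tampered = make_block("altered payload", "0" * 64)
assert not verify([tampered, nxt])
```

The tamper-evidence is the whole trick; everything else, currency included, is an application layered on top of it.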

Like certain other rhetorical constructions (e.g. “Arab Spring,” “open”), at a certain point the rhetoric and the discourse it engenders start to seem as much of the point as are the underlying technical or political facts. The rhetoric overtakes those facts; it becomes the facts. Unlike the “Arab Spring,” Bitcoin can be even harder to see from this angle, because it really is a piece of software, and a distributed network of users of that software.

Regardless: unlike some pieces of software, and like other social practices, Bitcoin’s main function so far is rhetorical. Bitcoin enables and licenses all sorts of argumentative and rhetorical practices that would not otherwise be possible in just this fashion, and the creation and propagation of those practices has become important–perhaps even central–to whatever “Bitcoin” is. This is not peripheral, unavoidable, unexceptionable tomfoolery; it is a core part of what Bitcoin is. Until and unless Bitcoin actually starts to function as a currency (meaning that its value stops fluctuating for a significant period of time), or until its promoters admit that “value fluctuations” and “currency” are incompatible with each other, this will continue to be the case.

It’s not in any way peripheral. No matter how many of them I read, I am still astonished at the number of pieces that come out nearly every day that “explain” how Bitcoin works (although what they actually describe is the blockchain technology), then give some examples of Bitcoin being exchanged in the real world, then move from “Bitcoin is revolutionizing finance” to “Bitcoin will revolutionize everything” without in any way connecting the dots to what these concepts actually mean as they are used today. Why, just across the transom as I’m writing this comes “How Bitcoin Tech Will Revolutionise Everything from Email to Governments” out of “Virgin Entrepreneur” (run of course by the well-known decentralizer Richard Branson, who surely invests in technologies because they are likely to defuse radically the power of his enormous wealth) where anti-statist libertarian comedian (and if those aren’t qualifications to dismantle the world financial system what would be?) Dominic Frisby (@dominicfrisby) proclaims that the “wonderful Ayn Rand stuff” of which the blockchain is constructed leads us to ask:

What indeed will be the purpose of representative democracy when any issue can be quickly and efficiently decided by the people and voted on via the block chain? The revolution will not be televised. It will be cryptographically time stamped on the block chain.

Well, what is the purpose of representative democracy? One might well ask that question as one advocates loosing on the world a technology designed to render representative democracy impotent. In the US, and in most of the democratic world, we have representative governments bound by laws and constitutions specifically to avoid the well-known dangers of majoritarian rule and of letting each person pursue their “wonderful Ayn Rand” interests without any sort of check on their powers.

Bitcoin provides a whole new iteration of the far right’s ability to sell these once well-discarded ideas to a public that is unsuspecting and (ironically, in the “information age”) incredibly uninformed about the way government works and about the way democracy and laws have been carefully constructed to work over hundreds of years. Yes, they work incredibly badly. The only thing worse than the way they work would be to get rid of them entirely, without any kind of meaningful structure to replace them. After all, we know a lot about what completely unregulated democratic discussions look like today–we need look no further than reddit or 4chan or Twitter. Imagine what that kind of logic and conduct, magnified into governmental power, looks like. Actually you don’t have to imagine, because we are seeing plenty of companies today take that power for themselves, existing laws and structures and regulations be damned.

Here, I’ve collected just a small sampling of real-life statements from Bitcoinistas that demonstrate the level of rhetorical know-nothingism for which Bitcoin is particularly (although by no means exclusively) responsible right now. Most of them were reported by the great Twitter accounts Shit /r/Bitcoin says (which, as the name implies, samples quotations from the /r/Bitcoin subreddit) and bitcoin.txt. Word of warning: if any of what you read in these comments makes sense to you, I probably think you need to read more. A lot more.

Thoughts on Banking, Taxation, & Monetary Theory for Which John Maynard Keynes Bears No Conceivable Responsibility

Of course the government want to control the currency. They want to have ultimate power over everything, the people be damned. Digital currency can compete with the fiat banking system which is used to loot the value of currency on a continual basis. (Source: Robert Zraick, Jan 2013, comment on Forbes article)


Bitcoin on Reddit (Source: @RedditBTC)

Political Science You Won’t Find in John Locke or The Federalist Papers

Bitcoin is a direct threat to corrupt governments who control and manipulate currency, and use taxpayer funds to buy votes. You better believe they’re going to ban it! But mutual barter systems will prevail on the web… and it’s a great thing. It will destroy the power that government yields uncontrollably and put it back into the hands of the people where it belongs. (Source: Douglas Karr, Jan 2013, comment on Forbes article)

Economic Theory, Courtesy John Birch Society

I understand how they work… unlike ANY of the old-school economists, who also failed to predict the 2008 crash, and who just went along with what it was acceptable to say. The more “established” an economist is, the more likely they are to be wrong about bitcoins. This has been the pattern so far. You might as well ask the doddering self-entitled satin-tour-jackets wearing old twats from the RIAA about torrent protocols. (Source: Genomicon, Apr 2, 2013)

We Come to Build a Better World

Posted in "hacking", "social media", bitcoin, cyberlibertarianism, rhetoric of computation, what are computers for

‘Permissionless Innovation’: Using Technology to Dismantle the Republic

There may be no more pernicious and dishonest doctrine among Silicon Valley’s avatars than the one they call “permissionless innovation.” The phrase entails the view that entrepreneurs and “innovators” are the lifeblood of society, and must, for the good of society, be allowed to push forward without needing to ask for “permission” from government. The main advocates for the practice are found at the Koch-directed and -funded libertarian Mercatus Center and its tech-specific Technology Liberation Front (TLF), particularly Senior Research Fellow Adam Thierer; it’s also a phrase one hears occasionally from apparently politically-neutral “internet freedom” organizations, as if it were not tied directly to market fundamentalism.

Whether or not “innovators” would be better off in achieving their own goals without needing to ask for “permission,” the fact is that another name for “permission” as it is used in this instance is “democratic governance.” Whether or not it is best for business to have democratic government looking over the shoulders of business, it is absolutely, indubitably necessary for democratic governance to mean anything. That is why libertarians had to come up with a new term for what they want; “laws and regulations don’t apply to us” might tip off the plebes to what is really going on.

Associated with certain aspects of “open internet” rhetoric by, among others, “father of the internet” (and “Google’s Chief Internet Evangelist,” in case you wonder where these positions are coming from) Vint Cerf—yet another site where we should be paying much more careful attention to the deployment of “open”—“permissionless innovation” has gained most traction among far-right market fundamentalists like the TLF.

In comments they submitted to the FAA’s proposed rules for “test sites” for the integration of commercial drones into domestic airspace, the TLF folks wrote:

As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand.

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.

Note how cleverly the technical nature of the “open platform” of the internet—“open” in that case meaning that the protocols are not proprietary, which entails very little or nothing about regulatory status—merges into the inability or inadvisability of government to regulate it. This is cyberlibertarian rhetoric in its most pointed function—using language that it is hard to disagree with about the nature of technological change so as to garner support for extreme political and economic positions we may not even realize we are going along with. “Open Internet, yes!” “Keep your paternalistic ‘permission’ off our backs—for democracy!” Or not.

The market fundamentalists of TLF and Silicon Valley would love you to believe that “permissionless innovation” is somehow organic to “the internet,” but in fact it is an experiment we conducted for a long time in the US, and the experiment proved that it does not work. From the EPA to the FDA to OSHA, nearly every Federal (and State) regulatory agency exists because of significant, usually deadly failures of industry to restrain itself.

We don’t need to look very far to see how destructive unregulated industry can be: just think of the “Superfund” act, authorized in 1980 after more than a decade of environmental protest proved ineffective in getting industry not simply to stop polluting, but to stop contaminating sites so thoroughly that they directly damaged agriculture and human health (including killing people), to say nothing of more traditional environmental concerns—practices for which “permissionless” industry did not merely shirk responsibility, but which it actively hid.

Or consider OSHA, created only in 1970, after not merely decades but centuries of outrageous employment practices: it was not until 60 years after the Triangle Shirtwaist Fire that the government finally acted to limit the number of workers directly killed by their employers. When OSHA was created in 1970, 14,000 workers were killed on the job each year in the US; despite the workforce more than doubling since then, in 2009 only 4,400 were killed—which is still, by the way, awful. And industry has accepted, and still accepts, OSHA standards kicking and screaming every step of the way.

“Permissionless innovation” suggests that the correct order for dramatic technological changes should be “first harm, then fix.” This is of course the opposite of the way important regulatory bodies like the FDA—let alone doctors themselves following the Hippocratic Oath—approach their business: “first, do no harm.” The “permissionless innovation” folks would have you believe that in the rare, rare case in which one of their technologies harms somebody, they will be the first to step in and fix things up, maybe even making those they’ve harmed whole.

Yet we have approximately zero examples of market fundamentalists stepping in to say, “hey, we asked for ‘permissionless innovation,’ so since we fucked up, it’s our responsibility to fix things.” On the contrary, they are the same people who then argue that “people make their own choices” when users “choose” technology whose consequences they cannot actually fathom, and that those users are therefore owed nothing. So what they really want is no government beforehand and no government afterwards—more accurately, no government at all.

It’s tempting to argue that digital technology is different from drugs or food, but the facts belie any such distinction. Silicon Valley is trying to put its technology inside and outside of every part of the world, from the “Internet of Things” to drones to FitBit to iPhone location services and on and on. These technologies are meant to infiltrate every aspect of our lives—what is needed is more, not less, regulation, and more creative ways to regulate, since these technologies by design cut across many different existing spheres of law and regulation.


Polluted West Virginia Water. Photo credit: Crystal Good, PBS

This is no idle speculation. Even today, we have more than enough examples of what “permissionless innovation” can do. We need look back no further than January of this year, when the crafty market fundamentalists at the deliberately-named Freedom Industries caused a huge chemical spill, polluting water throughout West Virginia. Freedom Industries deliberately bypassed and found loopholes in existing regulations so as to produce and stockpile chemicals whose impact on human health is unknown. And did the good “permissionless innovation” folks at Freedom stand up and take responsibility for the harm they’d caused? Guess again.

Deliberately getting around the EPA is one thing, but technological innovations closer to the digital world follow the same pattern, with “innovators” denying the responsibility that permissionless innovation would suggest they must take. We know that most soap products today contain the chemical triclosan, an antibacterial substance that, loosed on the environment thanks to imperfect regulation, does not actually work as advertised. Instead, the FDA believes it harms humans and the environment, actually producing drug-resistant bacteria—a huge concern in an era of diminishing antibiotic effectiveness—and the chemical was banned by the EU in 2010. Despite this, US producers continue to sell the products because they appeal to consumers’ misguided (and advertising-fueled) belief that “killing bacteria” must be good.

In a similar vein, the inclusion of so-called “microbeads” in cosmetics, soaps, and toothpaste follows exactly the desired pattern of permissionless innovation. The new technology, which as near as I can determine serves only marketing purposes (it makes part-liquid substances sparkle), was not covered by existing regulation, and has thus become nearly ubiquitous in a range of products. But it turns out that the beads, because they are so small, leach throughout the environment, escape water treatment plants that aren’t prepared for them, and then concentrate in marine life, including fish that humans eat. Among the many reasons that is bad, the beads “tend to absorb pollutants, such as PCBs, pesticides and motor oil,” likely killing fish and adding to the toxic load of humans who eat fish.

Some companies—including L’Oreal, Procter & Gamble, the Body Shop, and Johnson & Johnson—agreed to phase out the microbeads when researchers presented them with evidence of the damage the beads cause. But others haven’t, and just recently the State of Illinois finally passed legislation to outlaw them altogether, since the Great Lakes, North America’s largest bodies of freshwater, have been found to be thoroughly contaminated with them.

So we go from an apparently harmless product—but one, we note, that served no important function in health or welfare—to an inadvertently and potentially seriously damaging technology that we now have to try to unwind. Scientists are concerned that the microbeads already in the Great Lakes are causing significant damage, so the voluntary cessation is welcome, but it doesn’t solve the problem that has already been caused.

Cosmetic and pharmaceutical manufacturers are already familiar with regulatory bodies, and so it is not all that surprising that some of them have voluntarily agreed to curb their practices—after the harm has been done. Silicon Valley companies have so far demonstrated very little of the same deference—on the contrary, they continue business practices even after regulators directly tell them that what they are doing violates the law.

“Permissionless innovation” is a license to harm; it is a demand that government not intrude in exactly the area government is meant for—protecting the general welfare of citizens. It is a recipe for disaster, and I have no hesitation whatsoever in saying that, in the battle between human health and social welfare on the one hand and the for-profit interests of “innovators” on the other, society is well served by erring on the side of caution. As quite a few of us, including Astra Taylor in her recent The People’s Platform, have started to say, the proliferation of digital technology into every sphere of human life suggests we need more, and more careful, regulation, not less—unless we want to learn what the digital equivalent of Love Canal might be.

Posted in cyberlibertarianism, google, materality of computation, rhetoric of computation

Bitcoin: The Cryptopolitics of Cryptocurrencies

I’m happy to have a piece up at the Harvard University Press blog, entitled “Bitcoin: The Cryptopolitics of Cryptocurrencies.” It was written as a bit of an introductory piece for readers who don’t know much about Bitcoin and may have heard the news from Mt. Gox this week, so it will probably be old news to people who have read my earlier posts on Bitcoin (Bitcoinsanity 1: The (Ir)relevance of Finance, or, It’s (Not) Different This Time and Bitcoin Will Eat Itself: More Contradictions of (Digital) Libertarianism). Here’s a short excerpt:

“Money” names the instrument in which official transactions in that nation-state are conducted: all other things being equal, US Government bonds have a value in US dollars, and taxes in the US must be paid in dollars. As another economist puts it, “In post-Keynesian monetary theory money is anything that will settle a legal contractual obligation. And by the civil law of contracts, the government determines what settles a legal monetary contractual obligation.” This is the fundamental point, critical to all monetary theory, that Bitcoin advocates seem unable or unwilling to recognize (and admittedly it is what was until now a fairly arcane point of economic theory): the State decides what money is, and no assertion otherwise by individuals or groups can change that—only the law can.

The complete post is available here.


Posted in bitcoin, cyberlibertarianism, information doesn't want to be free, materality of computation, revolution, rhetoric of computation

Glasslinks: Privacy, Glassholes, Panics, & Take-Backs

A colleague asked if I had any links to writings about Google Glass, so I dug around in my files and found quite a few things. I thought they might come in handy for others doing research on the topic. I have even more, but this is overwhelming enough as it is. The final pair makes for some humor.

Creepiness/Reactions to Actual Use


General/More Theoretical/Thoughtful


An arbitrary image of someone wearing Google Glass

Corporate BS

Posted in cyberlibertarianism, google, materality of computation, privacy, rhetoric of computation, surveillance, we are building big brother

Article: ‘Commercial Trolling: Social Media and the Corporate Deformation of Democracy’

I wrote this essay for a collection that originally said it could handle pieces of this length, but in the end decided not to. It’s a bit long for traditional journals or edited collections, and it’s about some fairly immediate stuff that’s also connected to other work I’ve been writing lately, so I decided simply to post it as-is to this site (and also to SSRN and academia.edu). Yes, pure open access with no intermediaries (though my Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License is technically “not a free culture license”), something I only feel is wise to do because I publish plenty in journals, and the piece is too long for most of the journals where I’d be likely to place it—though of course anyone interested in publishing it is welcome to contact me.

“Commercial Trolling: Social Media and the Corporate Deformation of Democracy”: Abstract
While “trolling” originally named, and is today often thought to be, the activity of recalcitrant or obstreperous individuals with too much time on their hands or axes to grind about particular issues, a great deal of trolling on today’s social media platforms is crafted not by such individuals but by persons (or even computer programs) acting on behalf of (and usually employed by) powerful interests, including corporations, institutions, governments, and lobbying groups, whose goal is not so much contributing to real exchange of political views as tilting the discursive field to make some positions appear reasonable or even popular, and to marginalize other opinions (and those who hold them). Such action is visible in the range of ongoing intrusions by corporate actors into Wikipedia, reflected in the elaborate infrastructure the site maintains to police such intrusions—an infrastructure not available to much of the rest of the internet. It is even more obvious in Anti-Global Warming (AGW) discourse conducted by agents of industry lobbying groups and energy companies in many locations across the web. Given the ease with which capital can purchase the services of agents to advocate effectively for views disfavored by a large portion—at times, as in the climate change debate, a large majority—of the population, questions are raised about the apparently inherent democratic nature of information distribution on the web, and about what means might be utilized to level the playing field between good-faith contributors to discourse on the one hand and institutionally-directed contributors on the other.

Full paper available here (and also on SSRN and academia.edu)


a commercial salmon trolling boat

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Posted in cyberlibertarianism, information doesn't want to be free, rhetoric of computation