All Cybersecurity Technology Is Dual-Use

Dan Geer is one of the more interesting thinkers about digital security and privacy around. Geer is a sophisticated technologist with an extremely varied and rich background who has also, fairly recently, become a spook of some kind. Geer is currently the Chief Information Security Officer for In-Q-Tel, the technology investment subsidiary of the CIA, popularly and paradoxically known as a “not-for-profit venture capital firm,” but which gets much more directly involved with its investment targets with the intent of providing “‘ready-soon innovation’ (within 36 months) vital to the IC [intelligence community] mission,” and therefore shuns the phrase “venture capital.”

This might lead one to think that Geer would speak as what Glenn Greenwald likes to call a “government stenographer,” but I find his speeches and writings to be both unusually incisive and extremely independent-minded. He often says things that nobody else says, and he says them from a position of knowledge and experience. And what he says often does not line up either with what one imagines “government” thinks or with what many in industry want; he has recently suggested, contrary to what Google and many “digital freedom” advocates affirm, that the European “Right to Be Forgotten” actually does not go far enough in protecting privacy.

In his talk at the 2014 Black Hat USA conference, the same talk where he made remarks about the Right to Be Forgotten, called “Cybersecurity as Realpolitik” (text; video), Geer made the following deeply insightful observation:

All cyber security technology is dual use.

Here’s the full context of that statement:

Part of my feeling stems from a long-held and well-substantiated belief that all cyber security technology is dual use. Perhaps dual use is a truism for any and all tools from the scalpel to the hammer to the gas can — they can be used for good or ill — but I know that dual use is inherent in cyber security tools. If your definition of “tool” is wide enough, I suggest that the cyber security tool-set favors offense these days. Chris Inglis, recently retired NSA Deputy Director, remarked that if we were to score cyber the way we score soccer, the tally would be 462-456 twenty minutes into the game,[CI] i.e., all offense. I will take his comment as confirming at the highest level not only the dual use nature of cybersecurity but also confirming that offense is where the innovations that only States can afford is going on.

Nevertheless, this essay is an outgrowth from, an extension of, that increasing importance of cybersecurity. With the humility of which I spoke, I do not claim that I have the last word. What I do claim is that when we speak about cybersecurity policy we are no longer engaging in some sort of parlor game. I claim that policy matters are now the most important matters, that once a topic area, like cybersecurity, becomes interlaced with nearly every aspect of life for nearly everybody, the outcome differential between good policies and bad policies broadens, and the ease of finding answers falls. As H.L. Mencken so trenchantly put it, “For every complex problem there is a solution that is clear, simple, and wrong.”

[Image: Dan Geer at the Black Hat USA 2014 conference (Photo: Threatpost)]

Now what Geer means by “dual-use” here is one of the term’s ordinary meanings: all cybersecurity technology (and really all digital technology) has both civilian and military uses.

But we can expand that, as Geer suggests when he mentions the scalpel, hammer, and gas can, to another way the term is sometimes used: all cybersecurity technology has both offensive and defensive uses.

This basic fact, which is obvious from any careful consideration of game theory or military or intelligence history, seems absolutely lost on the most vocal and most active proponents of personal security: the “cypherpunks” and crypto advocates who continually bombard us with the recommendation we “encrypt everything.” (In “Opt-Out Citizenship” I describe the anti-democratic nature of the end-to-end encryption movement.)

Not only that: I don’t think “cybersecurity” technology is a broad enough term, either: it would be better to say that a huge amount of digital technology is dual-use. That is to say that a great deal of digital technology has uses to which it can be and will be put that are neither obvious nor, necessarily, intended by their developers and even users, and that often work in exactly the opposite way that their developers or advocates say (or think) they do.

This is part of what drives me absolutely crazy about the cypherpunks and other crypterati who have come out in droves in the wake of the Snowden revelations.

They act and write as if they control the consequences of what they do; as if, unlike the rest of the people in the world, what they do will be accepted as-is, will end the story, will have only the direct effects they intend.

Thus, they write as if significantly upping the amount and efficacy of encryption on the web is something that “bad” hackers and “bad” cypherpunks will just accept.

But we know that’s not true. Any advance in encryption has both offensive and defensive uses. In its most basic form, that means that while encoding or encrypting information might look defensive, the ability to decrypt or decode that information is offensive.
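To see how literal that symmetry can be, consider a toy sketch (a deliberately weak XOR stream cipher, purely for illustration—nothing anyone should deploy): the “defensive” operation and the “offensive” one are the very same function.

```python
# Toy sketch only: a weak XOR stream cipher, to show that the same
# operation plays both the defensive and the offensive role.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Applying the function once "encrypts"; applying it again
    # with the same key "decrypts." One tool, two uses.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_cipher(b"meet at noon", b"key")
print(ciphertext)
print(xor_cipher(ciphertext, b"key"))  # b'meet at noon'
```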

In another form, it means that no matter how carefully and thoroughly you develop your own encryption scheme, the very act of doing so does not merely suggest but ensures—particularly if your new technology gets adopted—that your opponents will use every means available to defeat it, including the (often, paradoxically enough, “open source”) information you’ve provided about how your technology works.
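Continuing the toy example: once the scheme is published, the attacker needs no secrets at all, because knowing how the technology works is itself the attack surface. Here a one-byte key space stands in, absurdly, for the much larger but conceptually identical search that well-funded opponents undertake:

```python
# Toy sketch: the attacker runs the *published* algorithm over the key
# space, using a guessed plaintext fragment (a "crib") as the test.
def xor1(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

def brute_force(ciphertext: bytes, crib: bytes) -> list:
    # A one-byte key space stands in for the real, much larger search.
    return [(key, xor1(ciphertext, key))
            for key in range(256)
            if crib in xor1(ciphertext, key)]

intercepted = xor1(b"attack at dawn", 0x5A)
print(brute_force(intercepted, b"dawn"))  # [(90, b'attack at dawn')]
```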

This isn’t a recipe for peace or for privacy. It’s an arms race. Cypherpunks might see it as some kind of perverse “peace race,” because they see themselves as “only” developing defensive techniques—although given the penchant of those folks for obscurity and anonymity, it’s really special pleading to think that the only people involved in these efforts are engaged in defense.

But they aren’t. They are developing at best new “missile shields,” and the response of offensive technologists has to be—it is required to be, and they are paid to do it—better missiles that can get by the shields.

Further, because these crypterati almost universally adopt an anarcho-capitalist or far-right libertarian hatred for everything about government, they seem unable to grasp the fact that the actual mission of law enforcement and military intelligence—the mission they have to do, even when they are following the law and the constitution perfectly—involves doing everything in their power to crack and penetrate every encryption scheme in use. They have to. One of the ways they do that is to hire the very folks who bray so loudly about the sweet nature of absolute technical privacy—and once on the other side, who is better at finding ways around cryptography than those who pride themselves on their superior hacking skills? And the very development of these skills entails the creation of the universal surveillance systems used by the NSA as revealed by Snowden and others.

The population caught in the middle of this arms race is not made more free by it. We are increasingly imprisoned by it. We are increasingly collateral damage. Rather than (or at least in addition to) escalation, we need to talk about a different paradigm entirely: disarmament.

Posted in "hacking", "social media", cyberlibertarianism, materality of computation, privacy, rhetoric of computation, surveillance | Tagged , , , , , , , , , , , , , , , , | Leave a comment

Social Media as Political Control: The Facebook Study, Acxiom, & NSA

Although it didn’t break the major media until last week, around June 2 researchers led by Adam Kramer of Facebook published a study in the Proceedings of the National Academy of Sciences (PNAS) entitled “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” The publication has triggered a flood of complaints and concerns: is Facebook manipulating its users routinely, as it seems to admit in its defense of its practices? Did the researchers—two of whom were at universities (Cornell and the University of California-San Francisco) during the time the actual study was conducted in 2012—get proper approval for the study from the appropriate Institutional Review Board (IRB), as is required of all public research institutions (and most private institutions, especially if they take Federal dollars for research projects)? Was Cornell actually involved in the relevant part of the research (as opposed to analysis of previously-collected data)? Whether or not IRB approval was required, did Facebook meet reasonable standards for “informed consent”? Do Terms of Service agreements accomplish not just the letter but the spirit of the informed-consent guidelines? Could Facebook see emotion changes in its individual users? Did it properly anonymize the data? Can Facebook manipulate our moods? Was the effect it noticed even significant in the way the study claims? Is Facebook manipulating emotions to influence consumer behavior?

While these are all critical questions, most of them seem to me to miss the far more important point, one that has so far been gestured at only by Zeynep Tufekci (“Facebook and Engineering the Public”; Michael Gurstein’s excellent “Facebook Does Mind Control,” which appeared almost simultaneously with this post, makes similar points to mine) and former Obama election campaign data scientist Clay Johnson (whose concerns are more than a little ironic). To see its full extent, we need to turn briefly to Glenn Greenwald and Edward Snowden. I have a lot to say about the Greenwald/Snowden story, which I’ll avoid going into too much for the time being, but for present purposes I want to note that one of the most interesting facets of that story is the question of exactly what each of them thinks the problem is that they are working so diligently to expose: is it a military intelligence agency out of control, an executive branch out of control, a judiciary failing to do its job, Congress not doing oversight the way some constituents would like, the American people refusing to implement the Constitution as Snowden/Greenwald think we should, and so on? Even more pointedly: whoever it is we designate as the bad actors in this story, why are they doing it? To what end?

For Greenwald, the bad actors are usually found in the NSA and the executive branch (although, as an aside, his reporting seems often to show that all three branches of government are being read into or overseeing the programs as required by law, which definitely raises questions about who the bad guys actually are). More importantly, Greenwald has an analysis of why the bad actors are conducting warrantless, mass surveillance: he calls it “political control.” Brookings Institution Senior Fellow Benjamin Wittes has a close reading of Greenwald’s statements on this topic in No Place to Hide, (also see Wittes’s insightful review of Greenwald’s book) where he finds the relevant gloss of “political control” in this quotation from Greenwald:

All of the evidence highlights the implicit bargain that is offered to citizens: pose no challenge and you have nothing to worry about. Mind your own business, and support or at least tolerate what we do, and you’ll be fine. Put differently, you must refrain from provoking the authority that wields surveillance powers if you wish to be deemed free of wrongdoing. This is a deal that invites passivity, obedience, and conformity. The safest course, the way to ensure being “left alone,” is to remain quiet, unthreatening, and compliant.

That is certainly a form of political control, and a disturbing one (though Wittes, I think very wisely, asks: if this is the goal of mass surveillance, why is it so ineffective with regard to Greenwald himself and the other actors in the Snowden releases? Further, how was suppression-by-intimidation supposed to work when the programs were entirely secret, and were exposed only by the efforts of Greenwald and Snowden?). But it’s not the only form of political control, and I’m not at all sure it’s the most salient or most worrying of the kinds of political control enabled by ubiquitous, networked, archived communication itself: that is to say, by the functionality, not the technology, of social media itself.

The reason I find it ironic that Clay Johnson should worry that Mark Zuckerberg might be able to “swing an election by promoting Upworthy posts 2 weeks beforehand” is that this is precisely, at a not very extreme level of abstraction, what political data scientists do in campaigns. In fact it’s not all that abstract: “The [Obama 2012] campaign found that roughly 1 in 5 people contacted by a Facebook pal acted on the request, in large part because the message came from someone they knew,” according to a Time magazine story, for example. In other words, the campaign itself did research on Facebook and how its users could be politically manipulated—swinging elections by measuring how much potential voters like specific movie stars, for example (in the case of Obama 2012, it turned out to be George Clooney and Sarah Jessica Parker). Johnson’s own company, Blue State Digital, developed a tool the Obama campaign used to significant advantage—“Quick Donate,” deployed so that “supporters can contribute with just a single click,” which might mean that it’s easy, or might mean that supporters act on impulse, before what Daniel Kahneman calls their “slow thinking” can consider the matter carefully.

Has it ever been thus? Yes, surely. But the level of control and manipulation possible in the digital era exceeds what was possible before to an almost unfathomable extent. “Predictive analytics,” big data, and many other tools hint at a means for manipulating the public in all sorts of ways entirely without their knowledge. These methods go far beyond manipulating emotions, and so focusing on the specific behavior modifications and effects achieved by this specific experiment strikes me as missing the point.

[Image: Facebook Security Agency (Image source: uncyclopedia)]

Some have responded to this story along the lines of Erin Kissane— “get off Facebook”—or Dan Diamond—“If Facebook’s Secret Study Bothered You, Then It’s Time To Quit Facebook.” I don’t think this is quite the right response for several reasons. It puts the onus on individuals to fix the problem, but individuals are not the source of the problem; the social network itself is. It’s not that users should get off of Facebook; it’s that the kind of services Facebook sells should not be available. I know that’s hard for people to hear, but it’s a thought that we have not just the right but the responsibility to consider in a democratic society: that the functionality itself might be too destructive (and disruptive) to what we understand our political system to be.

More importantly, despite these protestations, it isn’t possible to get off Facebook. For “Facebook” here read “data brokers,” because that’s what Facebook is in many ways, and as such it is part of a universe of hundreds and perhaps even thousands of companies (of which the most famous non-social-media company is Acxiom) that make monitoring, predicting, and controlling the behavior of people—that is, in the most literal sense, political control—their business. As Julia Angwin has demonstrated recently, we can’t get out of these services even if we want to, and to some extent the more we try, the more damaging we make the services to us as individuals. Further, these services aren’t concerned with us as individuals, as Marginal Utility blogger and New Inquiry editor Rob Horning (among his many excellent pieces on the structural as opposed to the personal impact of Facebook, see “Social Graph vs. Social Class,” “Hollow Inside,” “Social Media Are Not Self-Expression,” and some of his pieces at Generation Bubble; it’s also a frequent topic on his Twitter feed; Peter Strempel’s “Social Media as Technology of Control” is a sharp reflection on some of Horning’s writing) and I, among others, have been insisting for years: these effects occur at population levels, as probabilities. Much as the Obama campaign did not care that much whether you or your neighbor voted for him, it did care that if it sprayed Chanel No. 5 in the air one June morning, one of the two of you was 80% likely to vote for him, and the other was 40% likely not to go to the polls. Tufekci somewhat pointedly argued for this in the aftermath of the 2012 election:

Social scientists increasingly understand that much of our decision making is irrational and emotional. For example, the Obama campaign used pictures of the president’s family at every opportunity. This was no accident. The campaign field-tested this as early as 2007 through a rigorous randomized experiment, the kind used in clinical trials for medical drugs, and settled on the winning combination of image, message and button placement.

Further, you can’t even get off Facebook itself, which is why I disagree pretty strongly with the implications of a recent academic paper of Tufekci’s, in which she writes fairly hopefully about strategies activists use to evade social media surveillance, by performing actions “unintelligible to algorithms.” I think this only provides comfort if you are looking at individuals and at specific social media platforms, where it may well be possible to obscure what Jim is doing by using alternate identities, locations, and other means of obscuring who is doing what. But most of the big data and data mining tools focus on populations, not individuals, and on probabilities, not specific events. Here, I don’t think it matters a great deal whether you are purposely obscuring activities or not, because those “purposely obscured” activities also go into the big data hopper, also offer fuel for the analytical fire, and may well reveal much more than we think about intended future actions and behavior patterns, and also leave us much more susceptible than we know to relatively imperceptible behavioral manipulation.
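A toy simulation (every number here is invented for illustration) makes the point: even if a fifth of the population deliberately emits algorithmic noise, a population-level estimate of a behavioral effect is merely attenuated, not erased.

```python
# Toy sketch: individual obfuscation barely dents a population estimate.
# All parameters below are invented for illustration.
import random
random.seed(0)

N = 100_000          # population per arm
BASE = 0.50          # baseline probability of the target behavior
LIFT = 0.02          # assumed effect of the manipulation: +2 points
OBFUSCATING = 0.20   # fraction of users deliberately emitting noise

def observed(nudged: bool) -> bool:
    # Obfuscators look like coin flips to the analyst...
    if random.random() < OBFUSCATING:
        return random.random() < 0.5
    # ...everyone else responds probabilistically to the nudge.
    return random.random() < BASE + (LIFT if nudged else 0.0)

control = sum(observed(False) for _ in range(N)) / N
treated = sum(observed(True) for _ in range(N)) / N
print(f"estimated lift: {treated - control:+.4f}")
# ~ +0.016: diluted by the noise, but nowhere near hidden
```

The obfuscators dilute the measured effect roughly in proportion to their share of the population; with samples this large, the signal remains unmistakable.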

It’s ironic that Max Schrems is in the news again: he has just published a book in German called Kämpf um deine Daten (English: Fight for Your Data), and he is spokesman for the Europe v. Facebook group that is challenging in European courts not so much the NSA itself as the cooperation between the NSA and Facebook. A recent story about Schrems’s book in the major German newspaper Frankfurter Allgemeine Zeitung (FAZ) notes that what got Schrems concerned about the question of data privacy in the first place was this:

Schrems machte von seinem Auskunftsrecht Gebrauch und erwirkte im Jahr 2011 nach längerem Hin und Her die Herausgabe der Daten, die der Konzern über ihn gespeichert hatte. Er bekam ein pdf-Dokument mit Rohdaten, die, obwohl Schrems nur Gelegenheitsnutzer war, ausgedruckt 1222 Seiten umfassten – ein Umfang, den im letzten Jahrhundert nur Stasi-Akten von Spitzenpolitikern erreichten. Misstrauisch machte ihn, dass das Konvolut auch Daten enthielt, die er mit den gängigen Werkzeugen von Facebook längst gelöscht hatte.

Here’s a rough English translation, with help from Google Translate:

Schrems exercised his right of access to his data and, after a long back and forth, obtained in 2011 the data the company had stored about him. He got a PDF document of raw data which, although Schrems was only an occasional user, ran to 1,222 printed pages—a scale that in the last century only the Stasi files of top politicians reached. What made him suspicious was that the documents also contained data that he had long since deleted with Facebook’s normal tools.

In fact, it’s probably even worse, whether we consider data brokers like Acxiom (who maintain detailed profiles on us whether we like it or not) or Facebook itself, which it is reasonable to assume does just the same thing, whether we have signed up for it or not. And it is no doubt true that, as the great, skeptical data scientist Cathy O’Neil says over at her MathBabe blog, “this kind of experiment happens on a daily basis at a place like Facebook or Google.” This is the real problem; singling this specific project out as “research,” an unusual and therefore unacceptable effort, misses the point almost entirely. Google, Facebook, Twitter, the data brokers, and many more are giant research experiments, on us. “Informed consent” for this kind of experiment would have to be provided by the whole population, even those who don’t use social media at all, and the possible consequences would have to include “total derailing of your political system without your knowledge.”

(As an aside those who have gone out of their way to defend Facebook—see especially Brian Keegan and Tal Yarkoni—provide great examples of cyberlibertarianism in action, emotionally siding with corporate capital as itself a kind of social justice or political cause; Alan Jacobs provides a strong critique of this work.)

This, in the end, is part of why I find very disturbing Greenwald’s interpretation of Snowden’s materials, his relentless attacks on the US government, and no less his concern for US companies only insofar as their business has been harmed by the Snowden information he and others have publicized. Political control, in any reasonable interpretation of that phrase, refers to the manipulation of the public to take actions and maintain beliefs that they might not arrive at via direct appeal to logical argument. Emotion, prejudice, and impulse substitute for deep thinking and careful consideration. While Greenwald has presented some—but truthfully, only some—evidence that the NSA may engage in political control of this sort, he blames it on the government rather than on the existence of tools, platforms, and capabilities that do not just enable but are literally structured around such manipulation. Bizarrely, even Julian Assange himself makes this point in his book Cypherpunks, but it’s a point Greenwald continues to shove aside.

Social media is by its very nature a medium of political control. The issue is much less who is using it, and why they are using it, than that it exists at all. What we should be discussing—if we take the warnings of George Orwell and Aldous Huxley at all seriously—is not whether the NSA should have access to these tools. If the tools exist, and especially if we endorse some form of the nostrum—which Greenwald in other modes rabidly supports—that information must be free, then we have no real way to prevent it from being used to manipulate us. How the NSA (and Facebook, and Acxiom) uses this information is of great concern, to be sure: but the question we are not asking is whether it is not the specific users and uses we should be curtailing, but the very existence of the data in the first place. It is my view that as long as this data exists, its use for political control will be impossible to stop, and that the lack of regulation of private companies means that we should be even more concerned about how they use it (and how it is sold, and to whom) than we are about what heavily-regulated governments do with it.

Regardless, in both cases, the solution cannot be to chase after the tails of these wild monkeys—it is to get rid of the bananas they are pursuing in the first place. Instead, we need to recognize what social media and data brokerage do: they do a kind of non-physical violence to our selves, our polities, and our independence. It is time at least to ask whether social media itself, or at least some parts of it—the functionality, not the technology—is too antithetical to basic human rights and to the democratic project for it to be acceptable in a free society.

Posted in "social media", cyberlibertarianism, digital humanities, privacy, surveillance, we are building big brother | Tagged , , , , , , , , , , , , , , , , , , , , , , , | Leave a comment

Bitcoinsanity 2: Revolutions in Rhetoric

Bitcoin is touted, publicized and promoted as an innovation in financial technology. Usually those doing the promoting have very little experience with finance in general or with financial technology in particular–a huge, booming industry mostly made up of proprietary technologies that those of us who don’t work for major banks or trading firms know very little about–but are happy to claim with tremendous certainty that this particular financial technology is utterly transformative.

(As a side note, the blockchain itself is not inherently financial technology, and it may well prove more useful and interesting in contexts other than finance, such as the “fully decentralized data management service” offered by companies like MaidSafe; these kinds of developments are preliminary enough that I don’t think it’s yet possible to judge their usefulness).
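For readers unfamiliar with the distinction, here is a minimal sketch—illustrative only, and omitting consensus, proof-of-work, and everything else that makes real systems hard—of the blockchain as what it is at bottom: a generic tamper-evident, append-only log, with nothing inherently financial about the payload.

```python
# Minimal sketch of a hash chain: any payload at all -- file metadata,
# votes, ledger entries -- can be committed; nothing here is "financial."
import hashlib, json, time

def make_block(payload: dict, prev_hash: str) -> dict:
    block = {"payload": payload, "prev": prev_hash, "ts": time.time()}
    # The block's hash commits to its contents *and* to its parent's hash.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block({"note": "anything at all"}, "0" * 64)
second = make_block({"file": "doc.txt", "owner": "alice"}, genesis["hash"])
# Tampering with any earlier block breaks every hash after it:
assert second["prev"] == genesis["hash"]
print(second["hash"])
```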

Like certain other rhetorical constructions (e.g. “Arab Spring,” “open”), at a certain point the rhetoric and the discourse it engenders start to seem as much of the point as are the underlying technical or political facts. The rhetoric overtakes those facts; it becomes the facts. Unlike the “Arab Spring,” Bitcoin can be even harder to see from this angle, because it really is a piece of software, and a distributed network of users of that software.

Regardless: unlike some pieces of software, and like other social practices, Bitcoin’s main function so far is rhetorical. Bitcoin enables and licenses all sorts of argumentative and rhetorical practices that would not otherwise be possible in just this fashion, and the creation and propagation of those practices has become important–perhaps even central–to whatever “Bitcoin” is. This is not peripheral, unavoidable, unexceptionable tomfoolery; it is a core part of what Bitcoin is. Until and unless Bitcoin actually starts to function as a currency (meaning that its value stops fluctuating for a significant period of time), or its advocates admit that “value fluctuations” and “currency” are incompatible with each other, this will continue to be the case.

It’s not in any way peripheral. No matter how many of them I read, I am still astonished at the number of pieces that come out nearly every day that “explain” how Bitcoin works (although what they actually describe is the blockchain technology), then give some examples of Bitcoin being exchanged in the real world, then move from “Bitcoin is revolutionizing finance” to “Bitcoin will revolutionize everything” without in any way connecting the dots to what these concepts actually mean as they are used today. Why, just across the transom as I’m writing this comes “How Bitcoin Tech Will Revolutionise Everything from Email to Governments” out of “Virgin Entrepreneur” (run of course by the well-known decentralizer Richard Branson, who surely invests in technologies because they are likely to defuse radically the power of his enormous wealth) where anti-statist libertarian comedian (and if those aren’t qualifications to dismantle the world financial system, what would be?) Dominic Frisby (@dominicfrisby) proclaims that the “wonderful Ayn Rand stuff” of which the blockchain is constructed leads us to ask:

What indeed will be the purpose of representative democracy when any issue can be quickly and efficiently decided by the people and voted on via the block chain? The revolution will not be televised. It will be cryptographically time stamped on the block chain.

Well, what is the purpose of representative democracy? One might well ask that question as one advocates loosing on the world a technology designed to render it impotent. In the US, and in most of the democratic world, we have representative governments bound by laws and constitutions specifically to avoid the well-known dangers of majoritarian rule and of letting each person pursue their “wonderful Ayn Rand” interests without any sort of check on their powers.

Bitcoin provides a whole new iteration of the far right’s ability to sell these once well-discarded ideas to a public that is unsuspecting and (ironically, in the “information age”) incredibly uninformed about the way government works and the way democracy and laws have been carefully constructed, over hundreds of years, to work. Yes, they work incredibly badly. The only thing worse than the way they work would be to get rid of them entirely, without any kind of meaningful structure to replace them. After all, we know a lot about what completely unregulated democratic discussions look like today–we need look no further than reddit or 4chan or Twitter. Imagine what that kind of logic and conduct, magnified into governmental power, looks like. Actually you don’t have to imagine, because we are seeing plenty of companies today take that power for themselves, existing laws and structures and regulations be damned.

Here, I’ve collected just a small sampling of real-life statements from Bitcoinistas that demonstrate the level of rhetorical know-nothingism for which Bitcoin is particularly (although by no means exclusively) responsible right now. Most of them were reported by the great Twitter accounts Shit /r/Bitcoin says (which, as the name implies, samples quotations from the /r/Bitcoin subreddit) and bitcoin.txt. Word of warning: if any of what you read in these comments makes sense to you, you probably need to read more. A lot more.

Thoughts on Banking, Taxation, & Monetary Theory for Which John Maynard Keynes Bears No Conceivable Responsibility

Of course the government want to control the currency. They want to have ultimate power over everything, the people be damned. Digital currency can compete with the fiat banking system which is used to loot the value of currency on a continual basis. (Source: Robert Zraick, Jan 2013, comment on Forbes article)

[Image: Bitcoin on Reddit (Source: @RedditBTC)]

Political Science You Won’t Find in John Locke or The Federalist Papers

Bitcoin is a direct threat to corrupt governments who control and manipulate currency, and use taxpayer funds to buy votes. You better believe they’re going to ban it! But mutual barter systems will prevail on the web… and it’s a great thing. It will destroy the power that government yields uncontrollably and put it back into the hands of the people where it belongs. (Source: Douglas Karr, Jan 2013, comment on Forbes article)

Economic Theory, Courtesy John Birch Society

I understand how they work… unlike ANY of the old-school economists, who also failed to predict the 2008 crash, and who just went along with what it was acceptable to say. The more “established” an economist is, the more likely they are to be wrong about bitcoins. This has been the pattern so far. You might as well ask the doddering self-entitled satin-tour-jackets wearing old twats from the RIAA about torrent protocols. (Source: Genomicon, Apr 2, 2013)

We Come to Build a Better World

Posted in "hacking", "social media", bitcoin, cyberlibertarianism, rhetoric of computation, what are computers for | Tagged , , , , , , , , , , , , , , | Leave a comment

‘Permissionless Innovation’: Using Technology to Dismantle the Republic

There may be no more pernicious and dishonest doctrine among Silicon Valley’s avatars than the one they call “permissionless innovation.” The phrase entails the view that entrepreneurs and “innovators” are the lifeblood of society and must be allowed, for the good of society, to push forward without needing to ask for “permission” from government. The main advocates for the practice are found at the Koch-directed and -funded libertarian Mercatus Center and its tech-specific Technology Liberation Front (TLF), particularly Senior Research Fellow Adam Thierer; it’s also a phrase one hears occasionally from apparently politically-neutral “internet freedom” organizations, as if it were not tied directly to market fundamentalism.

Whether or not “innovators” would be better off in achieving their own goals without needing to ask for “permission,” the fact is that another name for “permission” as it is used in this instance is “democratic governance.” Whether or not it is best for business to have democratic government looking over the shoulders of business, it is absolutely, indubitably necessary for democratic governance to mean anything. That is why libertarians had to come up with a new term for what they want; “laws and regulations don’t apply to us” might tip off the plebes to what is really going on.

Associated with certain aspects of “open internet” rhetoric by, among others, “father of the internet” (and “Google’s Chief Internet Evangelist,” in case you wonder where these positions are coming from) Vint Cerf—yet another site where we should be paying much more careful attention to the deployment of “open”—“permissionless innovation” has gained most traction among far-right market fundamentalists like the TLF.

In comments they submitted to the FAA’s proposed rules for “test sites” for the integration of commercial drones into domestic airspace, the TLF folks wrote:

As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand.

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.

Note how cleverly the technical nature of the “open platform” of the internet—“open” in that case meaning that the protocols are not proprietary, which entails very little or nothing about regulatory status—merges into the inability or inadvisability of government to regulate it. This is cyberlibertarian rhetoric in its most pointed function—using language that it is hard to disagree with about the nature of technological change so as to garner support for extreme political and economic positions we may not even realize we are going along with. “Open Internet, yes!” “Keep your paternalistic ‘permission’ off our backs—for democracy!” Or not.

The market fundamentalists of TLF and Silicon Valley would love you to believe that “permissionless innovation” is somehow organic to “the internet,” but in fact it is an experiment we conducted for a long time in the US, and the experiment proved that it does not work. From the EPA to the FDA to OSHA, nearly every Federal (and State) regulatory agency exists because of significant, usually deadly failures of industry to restrain itself. We don’t need to look very far to see how destructive unregulated industry can be: just think of the 1980 authorization of the “Superfund” act, enacted after more than a decade of environmental protest proved ineffective in getting industry not simply to stop polluting, but to stop contaminating sites so thoroughly that they directly damaged agriculture and human health (including killing people), to say nothing of more traditional environmental concerns—practices for which “permissionless” industry did not merely shirk responsibility, but which they actively hid. Consider OSHA, created only in 1970, after not merely decades but centuries of employment practices so outrageous that it was not until 60 years after the Triangle Shirtwaist Fire that the government finally acted to limit the number of workers who are directly killed by their employers. When OSHA was created in 1970, 14,000 workers were killed on the job each year in the US; despite the workforce more than doubling since then, in 2009 only 4,400 were killed—which is still, by the way, awful. And industry accepted and accepts OSHA standards kicking and screaming every step of the way.

“Permissionless innovation” suggests that the correct order for dramatic technological changes should be “first harm, then fix.” This is of course the opposite of the way important regulatory bodies like the FDA—let alone doctors themselves following the Hippocratic Oath—approach their business: “first, do no harm.” The “permissionless innovation” folks would have you believe that in the rare, rare case in which one of their technologies harms somebody, they will be the first to step in and fix things up, maybe even making those they’ve harmed whole.

Yet we have approximately zero examples of market fundamentalists stepping in to say that “hey, we asked for ‘permissionless innovation,’ so since we fucked up, it’s our responsibility to fix things up.” On the contrary, they are the same people who then argue that “people make their own choices” when they “choose” to use technology whose consequences they can’t actually fathom at all, but that therefore they are owed nothing. So what they really want is no government beforehand, and no government afterwards—more accurately, no government at all.

It’s tempting to argue that digital technology is different from drugs or food, but that would belie all sorts of facts. Silicon Valley is trying to put its technology inside and outside of every part of the world, from the “Internet of Things” to drones to FitBit to iPhone location services and on and on. These technologies are meant to infiltrate every aspect of our lives—what is needed is more, not less, regulation, and more creative ways to regulate them, since they by design run across many different existing spheres of law and regulation.

[Image: Polluted West Virginia water. Photo credit: Crystal Good, PBS]

This is no idle speculation. Even today, we have more than enough examples of what “permissionless innovation” can do. We need remember back no further than January of this year, when the crafty market fundamentalists at the deliberately-named Freedom Industries caused a huge chemical spill, polluting water throughout West Virginia. Freedom Industries deliberately bypassed and found loopholes in existing regulations so as to produce and stockpile chemicals whose impact on human health is unknown. And were the good “permissionless innovation” folks at Freedom standing up, taking responsibility for the harm they’d caused? Guess again.

Deliberately getting around the EPA is one thing, but technological innovations closer to the digital world also follow the same pattern of denying the responsibility that “permissionless innovation” would suggest “innovators” must take. We know that most soap products today contain the chemical triclosan, an antibacterial substance that, when loosed on the environment thanks to imperfect regulation, does not actually work as advertised. Instead, the FDA believes it harms humans and the environment, actually producing drug-resistant bacteria—a huge concern in an era of diminishing antibiotic effectiveness—and the chemical was banned by the EU in 2010. Despite this, US producers continue to sell the products because they appeal to consumers’ misguided (and advertising-fueled) belief that “killing bacteria” must be good.

In a similar vein, the inclusion of so-called “microbeads” in cosmetics, soaps, and toothpaste follows exactly the desired pattern of permissionless innovation. The new technology, which serves, as near as I can determine, only marketing purposes (it makes part-liquid substances sparkle), was not covered by existing regulation, and thus has become nearly ubiquitous in a range of products. But it turns out that the beads, because they are so small, leach throughout the environment, escape the effects of water treatment plants that aren’t prepared for them, and then concentrate in marine life, including fish that humans eat. Among the many reasons that is bad, the beads “tend to absorb pollutants, such as PCBs, pesticides and motor oil,” likely killing fish and adding to the toxic load of humans who eat fish.

Some companies—including L’Oreal, Procter & Gamble, the Body Shop and Johnson & Johnson—agreed to phase out the microbeads when researchers presented them with evidence of the damage they cause. But others haven’t, and just recently the State of Illinois finally passed legislation to outlaw them altogether, since the Great Lakes, North America’s largest bodies of freshwater, have been found to be thoroughly contaminated with them.

So we go from an apparently harmless product—but one, we note, that served no important function in health or welfare—to an inadvertent and potentially seriously damaging technology that now we have to try to unwind. Scientists are concerned that the microbeads already in the Great Lakes are causing significant damage, so the voluntary cessation is great, but doesn’t solve the problem that’s already been caused.

Cosmetic and pharmaceutical manufacturers are already familiar with regulatory bodies, and so it is not all that surprising that some of them have voluntarily agreed to curb their practices—after the harm has been done. Silicon Valley companies have so far demonstrated very little of the same deference—on the contrary, they continue business practices even after regulators directly tell them that what they are doing violates the law.

“Permissionless innovation” is a license to harm; it is a demand that government not intrude in exactly the area that government is meant for—protecting the general welfare of citizens. It is a recipe for disaster, and I have no hesitation whatsoever about saying that, in the battle between human health and social welfare vs. the for-profit interests of “innovators,” society is well-served by erring on the side of caution. As quite a few of us, including Astra Taylor in her recent The People’s Platform, have started to say, the proliferation of digital technology into every sphere of human life suggests we need more and more careful regulation, not less–unless we want to learn what the digital equivalent of Love Canal might be.


Bitcoin: The Cryptopolitics of Cryptocurrencies

I’m happy to have a piece up at the Harvard University Press blog, entitled “Bitcoin: The Cryptopolitics of Cryptocurrencies.” It was written as a bit of an introductory piece for readers who don’t know much about Bitcoin and may have heard the news from Mt. Gox this week, so it will probably be old news to people who have read my earlier posts on Bitcoin (Bitcoinsanity 1: The (Ir)relevance of Finance, or, It’s (Not) Different This Time and Bitcoin Will Eat Itself: More Contradictions of (Digital) Libertarianism). Here’s a short excerpt:

“Money” names the instrument in which official transactions in that nation-state are conducted: all other things being equal, US Government bonds have a value in US dollars, and taxes in the US must be paid in dollars. As another economist puts it, “In post-Keynesian monetary theory money is anything that will settle a legal contractual obligation. And by the civil law of contracts, the government determines what settles a legal monetary contractual obligation.” This is the fundamental point, critical to all monetary theory, that Bitcoin advocates seem unable or unwilling to recognize (and admittedly it is what was until now a fairly arcane point of economic theory): the State decides what money is, and no assertion otherwise by individuals or groups can change that—only the law can.

The complete post is available here.



Glasslinks: Privacy, Glassholes, Panics, & Take-Backs

A colleague asked if I had any links to writings about Google Glass, so I dug around in my files and found quite a few things. I thought they might come in handy for others doing research on the topic. I have even more, but this is overwhelming enough as it is. The final pair makes for some humor.

Creepiness/Reactions to Actual Use

Privacy

General/More Theoretical/Thoughtful

[Image: An arbitrary image of someone wearing Google Glass]

Corporate BS


Paper: ‘Commercial Trolling: Social Media and the Corporate Deformation of Democracy’

I wrote this essay for a collection that originally said it could handle pieces of this length, but in the end decided not to. It’s a bit long for traditional journals or edited collections, and it’s about some fairly immediate stuff that’s also connected to other work I’ve been writing lately, so I decided simply to post it as-is to this site (and also to SSRN and academia.edu). Yes, pure open access with no intermediaries (though my Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License is technically “not a free culture license”), something I only feel is wise to do because I publish plenty in journals, and the piece is too long for most of the journals where I’d be likely to place it–though of course anyone interested in publishing it is welcome to contact me.

“Commercial Trolling: Social Media and the Corporate Deformation of Democracy”: Abstract
While “trolling” originally named and is today often thought to be the activity of recalcitrant or obstreperous individuals with too much time on their hands or axes to grind about particular issues, a great deal of trolling on today’s social media platforms is crafted not by such individuals but instead by persons (or even computer programs) acting on behalf of (and usually employed by) powerful interests, including corporations, institutions, governments, and lobbying groups, and whose goal is not so much contributing to real exchange of political views as tilting the discursive field to make some positions appear reasonable or even popular, and to marginalize other opinions (and those who hold them). Such action is visible in the range of ongoing intrusions by corporate actors into Wikipedia, which is reflected in the elaborate infrastructure the site maintains to police such intrusions, an infrastructure not available to much of the rest of the internet. It is even more obvious in Anti-Global Warming (AGW) discourse, conducted by agents of industry lobbying groups and energy companies in many locations across the web. Given the ease with which capital can purchase the services of agents to advocate effectively for views that are disfavored by a large portion—at times, as in the climate change debate, a large majority—of the population, questions are raised about the apparently inherent democratic nature of information distribution on the web, and about what means might be utilized to level the playing field between good-faith contributors to discourse on the one hand, and institutionally-directed contributors on the other.

Full paper available here (and also on SSRN and academia.edu)

[Image: A commercial salmon trolling boat]

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Crawling from the Wreckage

I am slowly in the process of resurrecting my site manually from the database that suffered SQL injection and a number of other unspecified attacks over the past few months.

It is imperfect, but this temporary archive page provides links to the site’s existing content through archive.org, as I work on recreating the materials that were here before.

I would say the lesson is to back up your site, but I was in the habit of backing up, and the SQL injection and/or whatever else got in worked quietly enough at first that the damage appears to have been done before I knew anything about it, so even the earliest backups are corrupted.
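For those curious about the vulnerability class (this is not my site’s actual code, which I don’t have in any trustworthy form; a minimal sketch only), the underlying pattern is string-built SQL, and the standard fix is parameterized queries:

```python
# Minimal sketch of the SQL injection class, using sqlite3 for a
# self-contained demo; real blog platforms use other databases.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, body TEXT)")
conn.execute("INSERT INTO posts VALUES (1, 'hello')")

hostile = "1; DROP TABLE posts"  # attacker-supplied "id"

# Safe pattern: the driver binds the value as inert data, never as SQL.
print(conn.execute("SELECT body FROM posts WHERE id = ?", (hostile,)).fetchall())
# [] -- the hostile string simply fails to match any id

# Vulnerable pattern: splicing input into the SQL text. (sqlite3's execute()
# refuses multi-statement strings, but executescript() -- like the default
# APIs of many other drivers -- happily runs the injected DROP.)
conn.executescript(f"SELECT body FROM posts WHERE id = {hostile}")
print(conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall())
# [] -- the posts table is gone
```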

But, you know, code is speech, and therefore can and should be subject to no more oversight or regulation than any other form of speech. Like any other form of speech, code can completely obliterate the words of somebody else’s speech, which is surely what Madison, Locke, Jefferson, and the others had in mind.

Posted in "hacking" | Tagged , | Leave a comment

Interview: The ‘Sharing’ Hype

From “The ‘Sharing’ Hype,” an interview published today at In These Times conducted by Rebecca Burns with me, Neal Gorenflo, co-founder and publisher of Shareable Magazine, and the SolidarityNYC collective, which supports the growth of cooperatives in New York City.

“Sharing” can be seen as a form of resistance to the capitalist economy. But the “sharing economy” becomes a way of capitalizing on that resistance. This strikes me as a strong instance of cyberlibertarianism, which is the yoking of far-right ideas about “freedom” and government to an apparently apolitical digital utopianism. The political mushiness of the rhetoric surrounding such projects masks what the leaders of the projects want, which is the extraction of profit from sectors so far insulated from such monetization. The only “freedom” such efforts ultimately serve is the economic freedom of concentrated capital.

Read the complete interview at In These Times.


Interview: On Hacking, Decentralization, Power, Digital Democracy

A few excerpts from an interview at Dichtung Digital: Journal für Kunst und Kultur digitaler Medien, with questions asked by Roberto Simanowski.

My least favorite digital neologism is “hacker.” The word has so many meanings, and yet it is routinely used as if its meaning were unambiguous. Wikipedia has dozens of pages devoted to the word, and yet many authors, including scholars of the topic, write as if these ambiguities are epiphenomenal or unimportant. Thus the two most common meanings of the word—“someone who breaks into computer systems,” on the one hand, by far the most widely understood across society, and “skilled, possibly self-taught computer user,” on the other, favored to some extent within digital circles—are in certain ways in conflict with each other and in certain ways overlapping. They do not need to be seen as “the same word.” Yet so much writing about “hackers” somehow assumes that these meanings (and others) must be examined together because they have been lumped by someone or other under a single label.

Today, “hackers” are bizarrely celebrated as both libertarian and leftist political agitators, as “outsiders” who “get the system” better than the rest of us do, and as consummate insiders. My view is that this terminological blurring has served to destabilize Left politics, by assimilating a great deal of what would otherwise be resistant political energy to the supposedly “political” cause of hackers, whose politics are at the same time beyond specification and “beyond” Left-Right politics.

….

The Internet was never a bastion of communism, not without a kind of thoroughgoing establishment of foundations which it never had, and certainly not once the restrictions on commercial use were lifted. At some level I think some kind of public accountability for central mechanisms like search is absolutely imperative, though what forms this can take is not at all clear to me, since exposing parts of the search algorithm almost necessarily makes gaming search engines that much easier, and gaming seems to me a significant problem already. Computerization is always going to promote centralization even as it promotes decentralization – often in one and the same motion. Advocates of decentralization are often almost completely blind to this, directly suggesting that single central platforms such as Facebook, Wikipedia, Twitter and Google “decentralize” as if this somehow disables the centralization they so obviously entail.

….

Derrida encourages us to use the term “reason” in place of this more expansive notion of “rationality,” pointing out how frequently in contemporary discourse and across many languages we use the word “reasonable” to mean something different from “rational.” I argue in my book that the regime of computation today encourages the narrow view of rationality – that human reason is all calculation – and that it discourages the broader view, that reason includes other principles and practices in addition to calculation and logic. I believe some versions of “modernity” tilt toward one, and some tilt toward the other. Projects to quantify the social – including Klout scores, the quantified self, and many other aspects of social and predictive media – advertise the notion that calculation is everything. I think we have very serious reasons, even from Enlightenment and modernist thinkers, to believe this is wrong, and that historically, regimes that have bought into this view have typically not been favorable to a politics of egalitarianism or to concerns with broad issues of social equality. The pendulum is now swinging very far toward the calculation pole, but my hope is that eventually it will swing back toward the broader view of rationality, recognizing that there are dangers and fallacies inherent in any attempt to thoroughly quantify the social.

The complete interview is available here.

