Right Reaction and the Digital Humanities

A while back, I had an encounter that struck me at the time, and continues to strike me, as perfectly emblematic of the Digital Humanities as an ideological formation. The encounter included a kind of brutal incivility that I associate with much of the politics persisting very near the “nice” surface of DH (of which one needs no more shocking example than the recent deeply personal and brutally mocking responses, by two people I had thought were her close friends and colleagues, to a perfectly reasonable and non-personal piece by Adeline Koh). I try to avoid such directly uncivil tactics if I can, and so I have deliberately let a significant amount of time pass so as to remove, as much as possible, the appearance of personal attack in writing this up. I have also omitted a significant amount of information so as to (hopefully) obscure the identities, institutional affiliations, and even professional specializations of the persons involved, including avoiding all pronouns that would reveal the gender of any of the speakers, because I am much less interested in criticizing one individual than in showing how this person’s conduct represents a significant part of the psychology and politics that drives parts of DH.

The bare bones of the story are as follows. I am the co-leader of a “Digital Humanities and Digital Studies Working Group” (DHDS) at Virginia Commonwealth University (VCU), where I teach in the English Department. The group usually proceeds by reading and discussing texts, although sometimes we look at projects, consider projects by group participants, and have visits from outside speakers. Recently, a DHDS meeting was scheduled with a group of speakers who had been invited to campus for other reasons. These speakers included fairly well-known members of the DH community. One of them, to whom I’ll refer as A, occupies a position of some significant seniority and power in that community. (The other speakers, who don’t play much of a role in what follows, I’ll refer to as B and C.) Nevertheless, I had never met or talked to A, or read anything A had written, prior to this meeting, in part because A has published very little about DH.

The meeting was attended by the group of speakers, a few faculty members from VCU, and a half-dozen PhD students from the MATX program in which I am a core faculty member, all of whom I had worked with, or was currently working with, in some form or another.

The meeting began with the convener who had organized this event asking me to speak about a symposium we held at VCU a few years ago called “Critical Approaches to Digital Humanities,” about my own experience in DH, and about the overall course of discussions we had had to that point in the DHDS working group. I spoke for just over five minutes. I gave a brief overview describing how I came to the views I hold and how the symposium came into being. My main focus was my own experience: I mentioned that I was one of the first two people (along with Rita Raley of UCSB) hired as a “digital humanist” in the country, and that despite being employed as a “Digital Humanities professor” since 2003, and despite a large number of projects and publications, my name does not occur in any of the typical journals, conferences, lists, organizations, etc. I described my view, familiar to those who know my work or me, that DH is at least as profitably seen as a politics as it is as a method, and that as a politics its function has been to unseat other sites of authority in English departments and to establish sites of power alternate to existing ones, in no small part to keep what I broadly call “cultural studies of the digital” out of English departments, and generally to work against cultural studies and theoretical approaches while not labeling itself as such. I discussed how frequently I am published in forums devoted to debating the purpose of DH, but noted that as far as DH “proper” goes, the unacceptability of my work to that community has been a signal and defining part of my career—despite my continuing to be employed as a professor of DH. Needless to say, it was clear that none of A, B, or C had ever heard of me or read anything I’ve written, which is fine: for just the reasons that make me so skeptical of DH as an enterprise, the main part of my work is not the sort of stuff that interests DHers, although it does seem to be of significant interest to those who see studying the digital per se as important, which I am of course glad about.

B and C responded first to what I’d said, speaking for a while and saying positive things about the concerns I’d raised.

Then A started talking, in a notably hostile tone, which I found remarkable in itself given that A was in part my guest and that I’d said nothing whatsoever directed at or about A (it’s also worth noting that A is not in English). “I have to take issue with what David has said,” A said. “DH is not a monolith.” I hadn’t described it as a monolith (I had said it is profitably viewed as a politics as well as a method), and as usual the point of this familiar claim wasn’t clear (“not a monolith” suggests that my critique is valid for some parts of DH but not for others; yet A went on immediately, as do almost all of those who use the “not a monolith” response, to dispute every allegation I’d made across the board), except that I was very wrong. Yet the disrespect and hostility emanating from A were palpable. So was A’s complete dismissal of my own reports of my experience, and, perhaps even more stunningly, of my own work as a scholar, with which A was clearly entirely unfamiliar but whose quality A had already assessed based on my brief story. I saw my students looking at me with jaws agape—they had heard my skepticism and critique of DH many times, of course, and a couple of them had seen something of what was on display here, but as several of them said to me later, they had never seen it in action as a political force, where the excess of emotion and the brute point of that emotion (in some sense, to shut me down or disable my line of critique without engaging it directly) were so readily visible.

Some of the other points A made that I took notes on at the time: “I don’t accept analyses based on power,” apparently meaning that any analysis of any situation that looks at political power is inherently invalid, a claim I found not only remarkable coming from a humanist of any discipline, but also one that we typically hear only from the political right, which is to say, the party that benefits from its alignment with power, an alignment it often tries to downplay even as it profits from it.

“The grants awarded by ODH [the Office of Digital Humanities at the National Endowment for the Humanities] are not uniform” (I had pointed out that ODH exclusively or near-exclusively funds tools-and-archives projects, a point that I am not alone in making and that I wrote up in a blog post with a detailed analysis of one year’s awards). Interestingly, either B or C chimed in at this point to say that they actually agreed that the awards were remarkably uniform in their focus on tools and archives, which was precisely the point I was making.


To this I responded, “yes they are, and I’ve done a statistical analysis that shows it. There have never been any grants awarded for critical study.”

A replied: “they aren’t uniform, and it is their prerogative to decide what to fund. And as we just saw [referring to a single recently-published article on big data], statistics aren’t reliable.” (I really struggled not to laugh at this point: a DHer committed to quantitative analysis so angry at me as to argue that statistics as a whole are not valid? But it happened; there are even witnesses.) I tried to point out that we were not dealing with sampling (the context in which the reliability of statistics is usually questioned) but with an analysis of the complete list of all ODH grants for a single year, along with a briefer examination of all the grants for other years, in which virtually no grants are devoted to “theory” or cultural studies as such, or to critical analysis of the digital. A waved this off with a wave of the hand and a pronounced sneer. (Interestingly, this was one point where either B or C interjected a bit in my favor, opining that there is a uniformity to the grants along the lines I suggested and that they are unprecedented, but A was unmoved.)
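
(An aside for the methodologically inclined: the shape of the analysis at issue is easy to sketch in a few lines of Python. The award titles below are invented stand-ins, not actual ODH grants, and the hand-categorization is hypothetical; the point is only that categorizing every award in a single year’s complete list and counting is a census, not a sample, so questions about the reliability of sampling-based statistics simply do not arise.)

    from collections import Counter

    # Invented stand-ins for one year's complete award list; the real
    # analysis categorized each actual ODH award by hand from its title
    # and abstract. The critical-study count here, as in the real list,
    # is zero.
    awards = [
        ("TEI archive of early modern letters", "tools-and-archives"),
        ("Visualization toolkit for historical maps", "tools-and-archives"),
        ("Crowdsourced transcription platform", "tools-and-archives"),
        ("Text-mining workbench for large corpora", "tools-and-archives"),
        ("Linked-data gazetteer of ancient places", "tools-and-archives"),
    ]

    # Because the input is the complete list, these percentages are a
    # census, not an estimate from a sample.
    counts = Counter(category for _title, category in awards)
    total = sum(counts.values())
    for category, n in counts.most_common():
        print(f"{category}: {n}/{total} ({100 * n / total:.0f}%)")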

I asked: “is it the prerogative of funding agencies to provide unprecedented amounts of funding [A shakes head vehemently no, to disagree that they are unprecedented] for projects not requested by the units themselves?” A replied: “they aren’t unprecedented.” I insisted that they are and asked for the precedent to which A was referring, and A rejected my question as inappropriate without giving any actual response.

I said that despite the general truth that DH is not a “monolith,” there is a “core agenda” or view of “narrow DH” that is widely understood in the field, often referred to as “tools and archives.” I referred to the Burdick et al. Digital_Humanities volume as a recent example that explicitly sets out this narrowly focused agenda for DH. A interrupted again, angrily, dismissing my comments, insisting that “that volume has been widely discredited” and that the “narrow” view of DH was incorrect.

Toward the end of the conversation one of the more telling exchanges occurred. I had noted that “a main point of DH has been to reject the role of theory in English Departments, and it has been successful.” A replied quite a bit later, as if the comment had struck some kind of nerve: “the one thing I agree with David about is that DH is opposed to theory,” making it clear that A considered this a very good thing.

One dynamic is worth pausing over: B and C are both relatively well-known members of the DH community. Not only were they visibly shocked by A’s conduct, but they several times made comments in which they tried to “heal the breach” by granting that certain parts of my critique were probably right, and several times explicitly endorsed some of my specific comments. Yet anyone sitting in that room, no doubt including B and C themselves, walked away seeing the conflict between A and me as the thing that was happening, as the main political event. To me, B and C stand for all those perfectly well-meaning DHers who are not themselves directly invested in its poisonous political agenda. I do not resent the fact that B and C could not repair the event more fully. But I think they are emblematic of the role played by all those in the DH community who do not understand or endorse or take seriously what I have tried for years to explicate as its politics. They are, broadly speaking, ineffective, and as such they end up lending gravitas to the power of those with an agenda. Their level of conviction and commitment, especially politically speaking, is far shallower than that of those who really do care. My impression, which may be self-serving, was that B and C were actually more inclined to take my statements seriously because of the wide-ranging and inexact vitriol of A’s performance; at some level I hope that the attacks endured by those of us who dare to try to locate the politics of DH might inspire others of reasonable mind to do the same.

Then in the evening we had a series of talks by the guests. It will surprise no one to learn that A’s paper (composed, I am 99% sure, prior to the events of the day) explicitly and at length endorsed exclusively the tools-and-archives, “narrow” definition of DH that A had strenuously attacked me, six hours earlier in the day, for suggesting was the core of DH. A seemed not to recognize at all that this contradicted what A had so vehemently stated hours before. It even sounded as if DH were a monolith after all, which I found a bit shocking.

I let this post sit for quite a while, though I took notes at the time for the purposes of writing it up. What I found remarkable about the encounter was the way that, as I have seen many times, any critique of DH in general receives what I take to be a typical form of rightist reaction. First, hostility toward and belittling of the target; then, absolute rejection of anything the target says, typically without even having heard what that was; then, an assertion of positive principles that, more often than not, actually endorses exactly what the critique alleged, but with the added affirmation that what is being done is correct. This is the same pattern I encounter when I criticize Tor, or bitcoin, or cyberlibertarianism: I am an idiot; I am wrong for saying these things tend to the right; I don’t understand what the right is; actually, the right is correct, and these things should tend to the right—and despite this being my original thesis, I am completely wrong. I see that as part of the rightward tilt that is endemic to digital technology, absent careful and vigilant attention to one’s political means and ends. “The digital” is strongly aligned with power. Power and capital in our society are inextricably linked, and in many ways identical. Strongly identifying with “the digital” almost always entails a strong identification with power. That identification works particularly well, as do all reactionary ideological formations, by burying that identification under a facade of neutrality. “I reject political analyses,” this position says, while enjoying and strongly occupying the position of power it currently inhabits. As with Wikipedia editors or GamerGate trolls, this simultaneous embrace and disavowal of power is key to the maintenance of rightist political formations.


Crowdforcing: When What I “Share” Is Yours

Among the many default, background, often unexamined assumptions of the digital revolution is that sharing is good. A major part of the digital revolution’s rhetoric is the repurposing of existing language in ways that advantage the promoters of one scheme or another. It is no surprise, then, that while to earlier generations sharing may well have been, in more or less uncomplicated ways, good, this rhetorical revolution works to dissuade us from considering whether the ethics associated with the earlier terminology still apply, telling us instead that if we call it sharing, it must be good.

This is fecund ground for critics of the digital, and rightly so. Despite being called “the sharing economy”—a phrase almost as literally oxymoronic as “giant shrimp,” “living dead,” or “civil war”—the companies associated with that practice have very little to do with what we have until now called “sharing.” As a rule, they operate much more like digital sharecropping outfits such as Facebook than their promotional materials tell us, charging rent on the labor and property of individuals while the centralized providers make enormous profits on volume (and often enough by offloading inherent employer costs onto workers, while making it virtually impossible for them to act as organized labor). Of course there is a continuum; there are “sharing economy” phenomena that are not particularly associated with the extraction of profit from a previously unexploited resource, but there are many others that are specifically designed to do just that.

One phenomenon that has so far flown under the radar in discussions of peer-to-peer production and the sharing economy, but that demands recognition on its own, is one for which I think an apt name would be crowdforcing. Crowdforcing, in the sense I am using it, refers to practices in which one or more persons decides for one or more others that their resources will be shared, without those others’ consent or even, perhaps more worryingly, their knowledge. While this process has analogs and even occurred prior to the digital revolution and the widespread use of computational tools, it has positively exploded thanks to them, and thus in the digital age may well constitute a difference in kind as well as in degree.

Once we conceptualize it this way, crowdforcing can be found with remarkable frequency in current digital practice. Consider the following recent events:

  • In a recent triumph of genetic science, a company called DeCode Genetics has created a database with the complete DNA sequences of all 320,000 citizens of Iceland. Slightly less noted in the news coverage of the story is that DeCode collected actual tissue samples from only about 10,000 people, and then used statistical techniques to extrapolate the remaining 310,000 sequences. This is not population-level data: it is the full genetic sequence of each individual. As the MIT Technology Review reported, the study raises “complex medical and ethical issues.” For example, “DeCode’s data can now identify about 2,000 people with the gene mutation across Iceland’s population, and [DeCode founder and CEO Kári] Stefánsson said that the company has been in negotiations with health authorities about whether to alert them.” Gísli Pálsson, an anthropologist at the University of Iceland, is reported as saying that “This is beyond informed consent. People are not even in the studies, they haven’t submitted any consent or even a sample.” While there are unique aspects of Iceland’s population that make it particularly useful for a study like this, scientists have no doubt that “This technique can be applied to any population,” according to Myles Axton, chief editor of Nature Genetics, which published some of DeCode’s findings. And while news coverage of the story has dwelt on the complex medical-ethics issues relating to informing people who may not want to know of their risk for certain genetic diseases, this reasoning can and must be applied much more widely: in general, thanks to big data analytics, when I give data to a company with my overt consent, I am often sharing a great deal of data about others to whom I am connected, without even their knowledge, let alone any kind of consent (a toy sketch of this kind of inference appears just after this list). We can see this in the US on popular sites like 23andMe and Ancestry.com, whose explicit goals often include inferring specific data about people who are not using the product. The US itself is in the process of constructing a genetic database that may mimic the inferential capacities of the Icelandic one. Genetic information is one of the better-regulated parts of the data sciences (though regulation even in this domain remains inadequate), and yet even it seems to have an impoverished vision of what is possible with this data; where do we look for constraints placed on this sort of data analysis in general?
  • Sharing pictures of your minor children on Facebook is already an interesting enough issue. Obviously, you have the parental right to decide whether or not to post photos of your minor children, but parents likely do not understand all the ramifications of such sharing for themselves, let alone for their children, not least since none of us knows what Facebook and the data it harvests will be like in 10 or 20 years. Yet an even more pressing issue arises when people share pictures on Facebook and elsewhere of other people’s minor children, without the consent or even knowledge of those parents. Facebook makes it easy to tag photos with the names of people who do not belong to it. The refrain we hear ad nauseam—“if you’re concerned about Facebook, don’t use it”—is false in many ways, among which the most critical may be that those most concerned about Facebook, who have therefore chosen not to use it, may thereby have virtually no control over not just the “shadow profile” Facebook reportedly maintains for everyone in the countries where it operates, but even the apparently ordinary sharing data that can be used by all the data brokers and other social-analytics providers. Thus while you may make a positive, overt decision not to share data about yourself, and even less about the minor children of whom you have legal guardianship, others can and routinely do decide you are going to anyway.
  • So-called “Sharing Economy” companies like Uber, Lyft, and particularly in this case AirBnB insist on drawing focus to the populations that look most sympathetic from the companies’ point of view: first, the individual service providers (individuals who earn extra money by occasionally driving a car for Uber, or who rent out their apartments when out of the city for a few weeks), and second, the individual users (those buying Uber car rides or renting AirBnB properties). They work hard to draw attention away from themselves as companies (except when they are hoping to attract investor attention), and even more strongly away from their impact on the parts of the social world that are affected by their services—insofar as these are mentioned at all, it is typically with derisive words like “incumbent” and in contexts where we are told that the world would beyond question be a better place if these “incumbents” would just go away. One does not need to look hard on peers.org, an astroturf industry lobbying group disguised as a grassroots quasi-socialist advocacy organization, to find paean after paean to the benefits brought to individuals by these giant corporate middlemen. (More objectively, much of the “sharing economy” looks like a particularly broad case of the time-honored rightist practice of privatizing profits while “socializing” losses.) One has to look much harder—in fact, one will look in vain—to find accounts of the neighbors of AirBnB hosts whose zoned-residential properties have turned into unregulated temporary lodging facilities, with all of the attendant problems these bring. One has to look even harder to find thoughtful economic analyses that discuss the longer-term effects of residential properties routinely taking in additional short-term income: it does not take much more than common sense to realize that the additional income flowing in will eventually get added to the value of the properties themselves, eventually pricing the current occupants out of the properties in which they live. The impact of these changes may be most pronounced on those who have played no role whatsoever in the decision to rent out units on AirBnB—in fact, in the case of condominiums, the community may have explicitly ruled out such uses for exactly this reason, and yet, absent intensive policing by the condo board, may find its explicit rules and contracts being violated in any number of ways. And condo boards are among the entities with the most power to resist these involuntary “innovations” on established guidelines; others have no idea they are happening. As Tom Slee’s persistent research has shown, AirBnB has a disturbing relationship with what appear to be a variety of secretive corporate bodies that have essentially turned zoned-residential properties into full-time commercial units, which not only violates laws that were enacted for good reason, but also promises to radically alter property values for residents using the properties as the law currently permits.
  • Related to the sharing of genetic information is the (currently) much broader category of information we now call the quantified self. In the largest sense we could include in this category any kind of GTD or to-do list, calorie and diet trackers, health monitors, exercise and fitness trackers, monitoring devices such as FitBit and even glucose monitors, and many more to come. On the one hand, there are certainly mild crowdforcing effects of collecting this data on oneself, just as there are crowdforcing effects of SAT prep programs (if they work as advertised, which is debatable) and steroid usage in sports. But when these data are collected and aggregated—whether by companies like FitBit or even by user communities—they start to impact all of us in ways that only seem minor today. When these data get accessed by brokers for insurers, or by insurers themselves, they provide ample opportunities for those of us who choose not to use these methods at all to be directly impacted by other people’s use of them, from health insurers starting to charge us “premiums” (i.e., denying us “discounts”) if we refuse to give them access to our own data, to inferences made about us when the data they do have is run through big-data correlations against the richer data provided by QS participants, and so on.
  • The concerns raised by practitioners and scholars like Frank Pasquale, Cathy O’Neil, Kate Crawford, Danielle Citron, Julia Angwin, and others about the uses of so-called “big data analytics” resonate at least to some extent with the issue of crowdforcing. Big data providers create statistical aggregates of human behavior based on collecting what are thought to be representative samples of various populations. They cannot create these aggregates without the submission of data from the representative group, and those willing to submit their data may not themselves understand the purposes to which it is being put. A simple example is found in the so-called “wellness programs” increasingly attached to health insurance. In the most common formulation, insurers offer a “discount” to those customers willing to submit to a variety of tests and data-collection routines, and to participate in certain forms of proactive health activities (such as using a gym). Especially in the first two cases, it looks to the user as if the insurer is incentivizing them to take tests that may find markers of diseases that can be easily treated in their early stages and much less easily treated if caught later. Regardless of whether these techniques work, which is debatable, the insurers have at least one other motive in pushing these programs: to collect data on their customers and create statistical aggregates that affect not just those who submit to the testing, but their entire base of insured customers, and even those they do not currently insure. The differential pricing model that insurers call “discounts” or sometimes “bonuses” (but rarely “penalties,” which, financially speaking, they also are: it is another characteristic of the kinds of marketing rhetoric swamping everything in the digital world that literally the same practice can appear noxious if called a “penalty” but welcome if called a “discount”) seems entirely at odds with the central premise of insurance, which is that risk is distributed across large groups regardless of their particular risk profiles. Even in the age of Obamacare, which discourages insurers from discriminating based on pre-existing conditions, and in which large employers are required by law to provide insurance at the same cost to all their employees, these “discounts” allow insurers to do just the opposite, and suggest a wide range of follow-on practices that will discriminate in an even more finely-grained fashion. If customers of these companies understood that the “bonuses” have been created to craft discriminatory data profiles at least as much as to promote customer wellness, perhaps they would resist these “incentives” more strenuously. As it is, not only do those being crowdforced by these policies have very little access to information that makes clear the purpose of this data collection, but those contributing to it have very little idea of what they are doing to other people or even to themselves. And this, of course, is one of the deep problems with social media in general, most notably exemplified by the data brokerage industry.
  • As a final example, consider the proposed and so-far unsuccessful launch of Google Glass. One of the most maddening parts of the debate over Google Glass was that proponents would focus almost exclusively on the benefits Glass might afford them, while dismissing what critics kept talking about: the impact Glass has on others—on those who choose not to use Glass. Critics said: Glass puts all public (and much private) space under constant recorded surveillance, both by the individuals wearing Glass and, perhaps even more worryingly, by Google itself when it stores, even temporarily, that data on its servers. What was so frustrating about this debate was the refusal of Google and its supporters to see that they were arguing, in effect, that the rights of people who choose not to use Glass were up for the taking by those who did want to use it; Google’s explicit insistence that I have to allow my children to be recorded on video, and Google to store that video, simply by dint of their being alive wherever any person who happens to use Glass might be, was nothing short of remarkable. It was not hard to find Glassholes overtly insisting that their rights (and therefore Google’s rights) trump those of everyone else. This controversy was a particularly strong demonstration that the “if you don’t like it, don’t use it” mantra is false. I think we have to look at Google’s failure (so far) to create a market for Glass as a real success of mass public engagement in rejecting the ability of a huge for-profit corporation to dictate terms to everyone. It is even a success of the free market, in the sense that Google’s market research clearly showed that the product was not going to be met with significant consumer demand. But it is partly the visibility of Glass that allowed this to happen; too much of what I discuss here is not immediately visible in the way Glass was.
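
To make the genetic-inference worry in the first example concrete, here is a toy sketch in Python. It is emphatically not DeCode’s actual method (which reportedly combines Iceland’s genealogical records with statistical extrapolation from sequenced relatives); it shows only the simplest Mendelian version of the point: data volunteered by relatives speaks, probabilistically, about people who never gave a sample at all.

    from collections import Counter
    from itertools import product

    def child_genotype_probs(parent1, parent2):
        # Each parent passes on one of two alleles; enumerate the four
        # equally likely combinations and tally the child's genotypes.
        outcomes = Counter(
            "".join(sorted(a1 + a2)) for a1, a2 in product(parent1, parent2)
        )
        total = sum(outcomes.values())
        return {genotype: n / total for genotype, n in outcomes.items()}

    # Two consenting parents each carry one copy of a risk allele "a".
    # Their child, who submitted nothing, gets a genotype distribution
    # anyway: {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}, i.e. a 75% chance of
    # carrying at least one copy of the risk allele.
    print(child_genotype_probs("Aa", "Aa"))

Real imputation works across much more distant relationships and across whole genomes, which is exactly why the overt consent of the 10,000 who gave samples cannot stand in for the consent of the 310,000 who did not.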

To some extent, crowdforcing is a variation on the relatively wide class of phenomena economists refer to as externalities: costs or benefits that fall on a party who played no direct role in the transaction that produced them. These are usually put into two main classes. “Negative” externalities occur when costs fall on uninvolved parties; the classic examples usually have to do with environmental pollution by for-profit companies, for which cleanup and health costs are “paid” by individuals who may have no connection whatsoever with the company. “Positive” externalities occur when someone else’s actions, in which I am uninvolved, benefit me regardless: the simplest textbook example is something like neighbors improving their houses, which may raise property values even for homeowners who have done no work on their own properties at all. Externalities clearly occur with particular frequency in times of technological and market change; there were no doubt quite a few people who would have preferred to keep using horses for transportation even after so many people were driving cars that horses could no longer be allowed on the roadways. While these kinds of externalities may be in some way homologous with the crowdforcing examples that are economic in focus (such as the impact of AirBnB and Uber on the economic conditions of the properties they “share”), they do not capture so well the data-centric aspects of much current sharing. Collecting blood samples from individuals who agreed to be tested certainly allowed researchers in the past to determine normal and abnormal levels of the various components of human blood, but it did not make it possible to infer much (if anything) about my blood without my being tested. In fact, in the US, constitutional protections extend to the collection of such personal data because it is considered unique to each human body: that is, it is currently impermissible to collect DNA samples from individuals unless a proper warrant has been issued. How do we square this right with the ability to infer everyone’s DNA without most of us ever submitting to collection?


Crowdforcing effects also overlap with phenomena researchers refer to by names like “neighborhood effects” and “social contagion.” In each of these, what some people do ends up affecting what many other people do, in a way that goes well beyond the ordinary majoritarian aspects of democratic culture. That is, we know that only one candidate will win an election, and that therefore those who did not vote for that candidate will be (temporarily) forced to acknowledge the political rule of people with whom they don’t agree. But this happens in the open, with the knowledge and even the formal consent of all those involved, even if that consent is not always completely understood.

Externalities produced by economic transactions often look something like crowdforcing. For example, when people with means routinely hire tutors and coaches to prepare their children for standardized tests, they end up skewing the results even further in their favor, thus impacting those without means in ways the latter frequently do not understand and may not even be aware of. This can happen in all sorts of markets, even in cultural markets (fashion, beauty, privilege, skills, experience). But it is only the advent of society-wide digital data collection and analysis techniques that makes it so easy to sell your neighbors out without their knowledge and consent, and to have what is sold be so central to their lifeworld.

Dealing with this problem requires, first of all, conceptualizing it as a problem. That’s all I’ve tried to do here: suggest the shape of a problem that, while not entirely new, comes into stark relief and becomes widespread due to the availability of exactly the tools that are routinely promoted as “crowdsourcing” and “collective intelligence” and “networks.” As always, this is by no means to deny the many positive effects these tools and methods can have; it is to suggest that we are currently overly committed to finding those positive effects and not to exploring or dwelling on the negative effects, as profound as they may be. As the examples I’ve presented here show, the potential for crowdforcing effects on the whole population is massive, disturbing, and only increasing in scope.

In a time when so much cultural energy is devoted to the self (maximizing, promoting, decorating, and sharing it), it has become hard to think, with anything like the scrutiny required, about how our actions impact others. From an ethical perspective, this is typically the most important question we can ask: arguably it is the foundation of ethics itself. Despite the rhetoric of sharing, we are doing our best to turn away from examining how our actions impact others. Our world could do with a lot more, rather than less, of that kind of thinking.

Posted in "social media", cyberlibertarianism, materality of computation, rhetoric of computation, what are computers for | Tagged , , , , , , , , , , , , , , , , , , , , , , , , | 6 Responses

Tor, Technocracy, Democracy

As important as the technical issues regarding Tor are, at least as important—probably more important—is the political worldview that Tor promotes (as do other projects like it). While it is useful and relevant to talk about formations that capture large parts of the Tor community, like “geek culture” and “cypherpunks” and libertarianism and anarchism, one of the most salient political frames in which to see Tor is also one that is almost universally applicable across these communities: Tor is technocratic. Technocracy is a term used by political scientists and technology scholars to describe the view that political problems have technological solutions, and that those technological solutions constitute a kind of politics that transcends what are wrongly characterized as “traditional” left-right politics.

In a terrific recent article describing technocracy and its prevalence in contemporary digital culture, the philosophers of technology Evan Selinger and Jathan Sadowski write:

Unlike force wielding, iron-fisted dictators, technocrats derive their authority from a seemingly softer form of power: scientific and engineering prestige. No matter where technocrats are found, they attempt to legitimize their hold over others by offering innovative proposals untainted by troubling subjective biases and interests. Through rhetorical appeals to optimization and objectivity, technocrats depict their favored approaches to social control as pragmatic alternatives to grossly inefficient political mechanisms. Indeed, technocrats regularly conceive of their interventions in duty-bound terms, as a responsibility to help citizens and society overcome vast political frictions.

Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans, Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands.

While these suggestions typically frame themselves in terms of the words we use to describe core political values—most often, values associated with democracy—they actually offer very little discussion adequate to the rich traditions of political thought that articulated those values to begin with. That is, technocratic power understands technology as an area of precise expertise, in which one must demonstrate a significant level of knowledge and skill as a prerequisite even to contributing to the project at all. Yet technocrats typically tolerate no such characterization of law or politics: these are trivial matters not even up for debate, and insofar as they are up for debate, they are matters for which the same technical skills qualify participants. This is why it is no surprise that among the 30 or 40 individuals listed by the project as “Core Tor People,” the vast majority are developers or technology researchers, and those few for whom politics is even part of their ambit approach it almost exclusively as technologists. The actual legal specialists, no more than a handful, tend to be dedicated advocates for the particular view of society Tor propagates. In other words, there is very little room in Tor for discussion of its politics, for whether the project actually does embody widely shared political values: this is taken as given.

[Image: “Freedom Is Slavery,” a flag of the United Technocracies of Man, a totalitarian oligarchic state in the Ad Astra Per Aspera alternate history by RvBOMally]

This would be fine if Tor really were “purely” technological—although just what a “purely” technological project might be is by no means clear in our world—but Tor is, by anyone’s account, deeply political, so much so that the developers themselves must turn to political principles to explain why the project exists at all. Consider, for example, the Tor Project blog post written by lead developer Roger Dingledine that describes the “possible upcoming attempts to disable the Tor network” discussed by Yasha Levine and Paul Carr on Pando. Dingledine writes:

The Tor network provides a safe haven from surveillance, censorship, and computer network exploitation for millions of people who live in repressive regimes, including human rights activists in countries such as Iran, Syria, and Russia.

And further:

Attempts to disable the Tor network would interfere with all of these users, not just ones disliked by the attacker.

Why would that be bad? Because “every person has the right to privacy. This right is a foundation of a democratic society.”

This appears to be an extremely clear statement. It is not a technological argument: it is a political argument. It was generated by Dingledine of his own volition; it is meant to be a—possibly the—basic argument that justifies Tor. Tor is connected to a fundamental human right, the “right to privacy,” which is a “foundation” of a “democratic society.” Dingledine is certainly right that we should not do things that threaten such democratic foundations. At the same time, Dingledine seems not to recognize that terms like “repressive regime” are inherently and deeply political, and that “surveillance” and “censorship” and “exploitation” name political activities whose definitions vary according to legal regime and even political point of view. Clearly, many users of Tor consider any observation by any government, for any reason, to be “exploitation” by a “repressive regime,” a view that is consistent for the many members of the community who profess a variety of anarchism or anarcho-capitalism, but not for those with other political views, such as those who think there are circumstances under which laws need to be enforced.

Especially concerning about this argument is the way it mischaracterizes the nature of the legal guarantees of human rights. In a democracy, it is not actually up to individuals on their own to decide how and where human rights should be enforced or protected, and then to create autonomous zones wherein those rights are protected in the terms they see fit. Instead, in a democracy, citizens work together to have laws and regulations enacted that realize their interpretation of rights. Agitating for a “right to privacy” amendment to the Constitution would be appropriate political action for privacy in a democracy. Even certain forms of (limited) civil disobedience are an important part of democracy. But creating a tool that you claim protects privacy according to your own definition of the term, overtly resisting any attempt to discuss what it means to say that it “protects privacy,” and then insisting that everyone use it and that nobody, especially those lacking the coding skills to be insiders, complain about it because of its connection to fundamental rights, is profoundly antidemocratic. Like all technocratic claims, it challenges what actually is a fundamental precept of democracy, one that few across the political spectrum would challenge: that open discussion of every issue affecting us is required in order for political power to be properly administered.

It doesn’t take much to show that Dingledine’s statement about the political foundations of Tor can’t bear the weight he places on it. I commented on the Tor Project blog, pointing out that he is using “right to privacy” in a different way from what that term means outside of the context of Tor: “the ‘right to privacy’ does not mean what you assert it means here, at all, even in those jurisdictions that (unlike the US) have that right enshrined in law or constitution.” Dingledine responded:

Live in the world you want to live in. (Think of it as a corollary to ‘be the change you want to see in the world’.)

We’re not talking about any particular legal regime here. We’re talking about basic human rights that humans worldwide have, regardless of particular laws or interpretations of laws.

I guess other people can say that it isn’t true — that privacy isn’t a universal human right — but we’re going to keep saying that it is.

This is technocratic two-stepping of a very typical and deeply worrying sort. First, Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are actually protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration of Human Rights of 1948 is very clear that states, and bodies like the UN to which states belong, are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign that role to themselves, and that members of democratic polities have no choice but to accept them in that role.

We don’t have to look very hard to see the problems with that. Many in the US would assert that the right to bear arms means that individuals can own guns (or even more powerful weapons). More than a few construe this as a human or even a natural right. Many would say that “the citizen’s right to bear arms is a foundation of a democratic society.” Yet many would not. Another democracy, the UK, does not allow citizens to bear arms. Tor, notably, is the home of many hidden services that sell weapons. Is it for the Tor developers to decide what is and what is not a fundamental human right, and how states should recognize such rights, and to distribute weapons in the UK despite its explicit, democratically-enacted legal prohibition of them? (At this point, it is only the existence of legal services beyond Tor’s control that makes this difficult, but that has little to do with Tor’s operation: if it were up to Tor, the UK’s legal prohibition on weapons would be overwritten by technocratic fiat.)

We should note as well that once we venture into the terrain of natural rights and natural law, we are deep in the thick of politics. It simply is not the case that all political thinkers, let alone all citizens, are going to agree about the origin of rights, and even fewer would agree that natural rights lead to a natural law that transcends the power of popular sovereignty to protect. Dingledine’s appeal to natural law is not politically neutral: it takes a side in a central, ages-old debate about the origin of rights and the nature of the bodies that guarantee them.

That’s fine, except when we remember that we are asked to endorse Tor precisely because it instantiates a politics so fundamental that everyone, or virtually everyone, would agree with it. Otherwise, Tor is a political animal, and the public should accede to its development no more than it does to any other proposed innovation or law: it must be subject to exactly the same tests everything else is. Yet this is exactly what Tor claims to be above, in many different ways.

Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t.

Tor, like all other technocratic solutions (or solutionist technologies), is profoundly political. Rather than claiming to be above politics, it should invite vigorous political discussion of its functions and purpose (as at least the Tor Project’s outgoing Executive Director, Andrew Lewman, has recently suggested, though there have yet to be many signs that the Tor community, let alone the core group of “Tor People,” agrees with him). Rather than having a staff composed entirely of technologists, any project with the potential to intervene so directly in so many vital areas of human conduct should be staffed by at least as many people with political and legal expertise as technologists. It should be able to articulate its benefits and drawbacks fully in the operational political language of the countries in which it operates. It should be able to acknowledge that an actual foundation of democratic polities is the need to make accommodations and compromises between people whose political convictions differ. It needs to make clear that it is a political project, and that like all political projects, it exists subject to the will of the citizenry, to whom it reports, and who can decide whether or not the project should continue. Otherwise, it disparages the very democratic ground on which many of its promoters claim to operate.

This, in the end, is one reason that Pando’s coverage of Tor is so important, and a reason it strikes me as seriously unfortunate that so many have tried to dismiss that coverage out of hand. I think many in Tor know much less about politics than they think they do. If they did, they might wonder, as I do, why organizations like Radio Free Asia and the Broadcasting Board of Governors have been such persistent supporters of the project. These organizations are not in the business of supporting technology for technology’s sake, or science for the sake of “pure science.” Rather, they promote a particular view of “media freedom” that is designed to advance the values of the US and some of its allies. These organizations have strong ties to the intelligence community. Anyone with a solid knowledge of political history will know that RFA and BBG only fund projects that advance their own interests, and that those interests are those of the US at its most hegemonic, at its most willing to push its way inside other sovereign states. Many view them as distributors of propaganda, pure and simple.

You don’t have to look hard to find this information: Wikipedia itself notes that Catharin Dalpino of the centrist Brookings Institution think tank (i.e., no wild-eyed radical) says of Radio Free Asia: “It doesn’t sound like reporting about what’s going on in a country. Often, it reads like a textbook on democracy, which is fine, but even to an American it’s rather propagandistic.” It is no stretch to see the “media freedom” agenda of these organizations and the “internet freedom” agenda surrounding Tor as more alike than different. Further, Tor is arguably a much more powerful tool than media broadcasts, powerful as those themselves are. This is not to say that it is absolutely wrong for the US to promote its values this way, or that everything about Radio Free Europe and Radio Free Asia was and is bad. It is to say that these are profoundly political projects, and democracy demands that the citizenry and its elected representatives, not technocrats, decide whether to pursue them.

We are often told that Tor is just trying to do good, trying to inspire respect for human decency and human rights, and that its community is attacked merely because it is “an easy target.” Yet the contrary story is much more rarely told: that Tor encourages a technocratic dismissal of democratic values, and promotes serious and seriously uninformed anti-government hostility. Further, despite the claims of its advocates that Tor is meant to protect “activists” against human rights abuses (as the Tor community construes these), the fact remains that to many observers, Tor is just as plausibly seen as a tool that promotes and encourages human rights abuses of the very worst kind: child pornography, child exploitation, all the crimes and suffering that go along with the worldwide distribution of illegal drugs, assassination for hire, and much more. The Tor community dismisses these worries as “FUD” (or, more poetically, the “Four Horsemen of the Info-Apocalypse”), but evidence that they are real is very hard for the objective observer to overlook (even lists on the open web of the most widely-used hidden services reveal very few that are not involved in circumventing laws that many may consider not only reasonable but important). The “use case” for encrypted messaging such as OTR (Off-The-Record messaging) is far easier to understand in a political sense than is the one for hidden services that sell drugs and weapons, promote rape porn, and so on. It is beyond ironic that a tool whose most salient uses may be the most serious affronts to human rights should be promoted as if its contributions to human rights were so obvious as to be beyond question. Does Tor do “good”? No doubt. But it also enables some very bad things, at least as I personally evaluate “good” and “bad.” You can’t say that the good it enables accrues to Tor’s benefit while the bad it enables is just an unavoidable cost of doing business. With very limited exceptions (e.g., speech itself, and even there the balance is contested), we don’t treat cultural phenomena that way. The only name for the process of striking the right balance between those poles is politics, and it is entirely possible that the political balance Tor strikes is one that, were it better understood, few people would assent to. Making decisions about matters like this, not the expanded and putative “right to privacy,” is the foundation of democracy. Unless Tor learns not just to accommodate but to encourage such discussions, it will remain a project based on technocracy, not democracy, and therefore one that those of us concerned about the fate of democracy must view with significant concern.

Posted in "hacking", cyberlibertarianism, privacy, rhetoric of computation, what are computers for | Tagged , , , , , , , , , , , , , , , , , , , , | Leave a comment

‘Is It Compromised?’ Is the Wrong Question about US Government Funding of Tor

In many ways, the most surprising thing about Yasha Levine’s powerful reporting on US government funding of Tor at Pando Daily has been the response to it. From the trolling attacks and ad hominem insults by apparently respectable, senior digital privacy activists and journalists, to the repeated, climate-denialist-style, “I’m rubber, you’re glue” (or, as I like to call them, “You’re a towel”) evidence-free insinuations about Levine’s and Pando’s possibly covert funding sources and intelligence-world connections, almost none of the response has had the measured and reasonable tone, let alone the connection to established facts, that Levine’s own reporting has.

Much of this response tries to turn Levine’s reporting into a conspiracy theory, which it then pretends to defuse by positing even wilder conspiracy theories. The supposed conspiracy theory is that funding from the State Department, BBG, and the various CIA and other intelligence-agency cutouts means that the Tor developers are covert agents or assets, secretly doing something very different from what they say they are doing, and that Tor is deliberately compromised in ways its leaders are not revealing.

This turns into the question: “If the CIA is funding Tor, where are the vulnerabilities they are secretly planting in it? Why haven’t they been found via the classic principle that ‘all bugs are shallow when many eyes are looking for them’?”

For example, here are two comments to Levine’s “Internet Privacy, Funded by Spooks: A Brief History of the BBG”:

User “SpryteEast” writes: “this article could be great if it had proof. Most of modern-day cryptography technology was funded by US government at some point. Does it mean that they can break into everything?”

User “grumpykocka” writes: “Simple question: do you have proof that these systems have been compromised in any way? Technically, wouldn’t these open systems be incredibly hard to compromise without us knowing it? Perhaps they could be cracked, but you are implying much more than that, intentional back doors built into the code. But again, how would that go undetected in these open source solutions?”

This is from Tor & First Look staffer Micah Lee’s mean-spirited and defensive “Fact-Checking Pando’s Smears Against Tor,” responding to Levine’s earlier pieces:

If there were evidence of an intentional design flaw in the Tor network, similar to NSA’s sabotage of encryption standards through their BULLRUN program, that would be a huge deal. Pando didn’t find anything that wasn’t published on torproject.org.

This is the wrong question.

It’s a question so wrong and so enticing that it often derails the conversation we really should be having. It’s asked so often that it has the appearance of a party line, talking points that those involved issue with a remarkable persistence and uniformity. We don’t need to ask whether that party line is dictated by someone; what is more interesting is the party line itself.

[Image: a 1977 New York Times story about the CIA’s propagandistic use of the press (from Yasha Levine’s most recent Pando story)]

CIA (which I will use for the time being as a shorthand for intelligence agencies in general) is not a mind-control supervillain. It does not “own” assets (in spy lingo, “agents” usually refers to actual employees, whereas “assets” are others who may in some way or another contribute to intelligence-agency efforts on an ad-hoc basis) and prescribe every aspect of their behavior. Rather, it looks for assets whose interests may align with its own. At times it may nudge them in the direction it wants; only to some of those most closely tied in does it directly give orders. When CIA operates through cutouts, those cutouts typically appear to have full autonomy, and many in the cutout may well have that autonomy: that is what gives cutouts credibility and what makes them useful. If everyone could have easily seen that Life magazine was a CIA front, people would have taken it much less seriously than they did.

CIA uses cutouts and assets for a much subtler reason: those apparently “regular” people and organizations, in doing what they do anyway, align with US state interests. They advance CIA’s interests just by being themselves. CIA has no need to control, direct, or even directly influence these assets: in certain ways, doing so would be less productive than remaining in the background.

From this perspective, the wrong question is to ask what CIA and State and so on are doing to “mess” with the Tor Project. The right question is to ask: how does the development of Tor, and in a parallel fashion the promotion of “internet freedom,” align with the interests of CIA, the State Department, USAID, and so on?

This is a question that it is very hard for cyberlibertarians even to put to themselves. They are so convinced of the righteousness of “internet freedom” and of Tor, so sure of their purpose and their politics, that many of them appear unable even to bear asking whether these beliefs might be fallacious: whether “internet freedom,” a slogan without a clear referent, might be a policy the US promotes for specific geostrategic reasons, in part because so many people hop on board without understanding that the “internet freedom” agenda is not what it sounds like; whether Tor serves some very specific US interests.

Despite the conspiratorial accusations levied at Levine, his piece makes this focus very clear:

The BBG was formed in 1999 and runs on a $721 million annual budget. It reports directly to Secretary of State John Kerry and operates like a holding company for a host of Cold War-era CIA spinoffs and old school “psychological warfare” projects: Radio Free Europe, Radio Free Asia, Radio Martí, Voice of America, Radio Liberation from Bolshevism (since renamed “Radio Liberty”) and a dozen other government-funded radio stations and media outlets pumping out pro-American propaganda across the globe.

This does not mean, of course, that it’s uninteresting whether some people involved with Tor–perhaps especially those close to and/or funded by the OTF, as Levine points out–might be “assets” in some way or another, but we are likely never really to know the truth about covert shenanigans like that. It also doesn’t mean that questions about Tor being compromised are unimportant. It’s interesting to note that Micah Lee asks Levine to provide evidence of an “intentional design flaw in the Tor network”: evidence of intentionality would consist of communicative documentation that is likely to turn up only in unusual circumstances. But there is plenty of evidence of design flaws per se in the Tor network: they are found all the time, often by the Tor developers themselves. How did they get there? Who knows. But that is one reason why “is it compromised” is such a misguided question: we know Tor is compromised or has been compromised at times, and undoubtedly will be again. We don’t always know who is responsible for its vulnerabilities: often they emerge from parts of the system nobody appears to have thought about, and sometimes their source is unknown even to those in the Tor community.

But these are questions about which we can’t do much more than speculate. They are outweighed in importance by the central question about the ideology behind Tor. If you are asking how government funding compromises Tor and “internet freedom,” you are asking the wrong question. The right question is: how do Tor and “internet freedom” serve the interests of those who fund them so generously—and have virtually no history of funding (especially on an ongoing basis) projects that are contrary or even irrelevant to their interests? Why do major factions within the US Government so steadfastly promote an internet project whose supporters routinely insist that “the government sure does hate the Internet”?

We don’t have to look far or think that hard to develop answers to these questions. Just the other day, Shawn Powers and Michael Jablonski, authors of the new and fascinating-sounding book The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015), announced its publication by writing:

Efforts to create a universal internet built upon Western legal, political, and social preferences is driven by economic and geopolitical motivations rather than the humanitarian and democratic ideals that typically accompany related policy discourse. In fact, the freedom-to-connect movement is intertwined with broader efforts to structure global society in ways that favor American and Western cultures, economies, and governments.

The inability of many Tor, “internet freedom,” and even super-encryption supporters to understand (or at least to talk as if they understand) this point of view is part of what is so disturbing about this whole situation. “Internet freedom” and “internet privacy” and even “Tor” have become like articles of religious faith: creeds whose fundamental tenets cannot be questioned, even though they also cannot be stated with anything like the clarity with which “freedom of the press” can be stated. The critique we need to consider is not merely that major powers are “paying lip service” to the idea of internet freedom; it is that the idea itself is bankrupt: a propagandistic slogan in search of a meaning, a set of meaningful-sounding (but meaningless) words, like “right to work,” that exists only to serve a powerful and disturbing agenda (which is one reason the outsize “internet freedom” funding provided by the US State Department, and Google’s triumphalist support for the idea, should raise questions for everyone). Indeed, if the putative freedom of information on which “the internet” (and Tor, and “internet freedom,” etc.) is supposedly based is going to mean anything–if it entails at least the “freedom of speech” and “freedom of the press” that, in my opinion, it does not eclipse in especially legible ways–it has to mean being willing always to question our fundamental assumptions, which makes it beyond ironic that its fiercest champions work so hard to prevent us from doing just that.


Wikipedia and the Oligarchy of Ignorance

In a recent story on Medium called “One Man’s Quest to Rid Wikipedia of Exactly One Grammatical Mistake: Meet the Ultimate WikiGnome,” Andrew McMillen tells the story of Wikipedia editor “Giraffedata”—beyond the world of Wikipedia, a software engineer named Bryan Henderson—who has edited thousands of Wikipedia pages to correct a single grammatical error and is one of the 1000 most active editors of Wikipedia. McMillen describes Giraffedata as one of the “favorite Wikipedians” of some employees at the Wikimedia Foundation, the umbrella organization that funds and organizes Wikipedia along with other projects. The area he works on is not controversial (at least not in the sense of hot topics like GamerGate or climate change); his edits are typically not reverted in the way that substantive edits to such controversial topics frequently are. While the area he focuses on is idiosyncratic, his work is extremely productive. As such he is understood by at least some of the core Wikipedians to exemplify the power of crowds, the benefits of “organizing without organization,” the fundamental anti-hierarchical principles that apparently point toward new, better political formations.

McMillen describes a presentation at the 2012 Wikimania conference by two Wikimedia employees, Maryana Pinchuk and Steven Walling:

Walling lands on a slide entitled, ‘perfectionism.’ The bespectacled young man pauses, frowning.

“I feel sometimes that this motivation feels a little bit fuzzy, or a little bit negative in some ways… Like, one of my favorite Wikipedians of all time is this user called Giraffedata,” he says. “He has, like, 15,000 edits, and he’s done almost nothing except fix the incorrect use of ‘comprised of’ in articles.”

A couple of audience members applaud loudly.

“By hand, manually. No tools!” interjects Pinchuk, her green-painted fingernails fluttering as she gestures for emphasis.

“It’s not a bot!” adds Walling. “It’s totally contextual in every article. He’s, like, my hero!”

“If anybody knows him, get him to come to our office. We’ll give him a Barnstar in person,” says Pinchuk, referring to the coveted virtual medallion that Wikipedia editors award one another.

Walling continues: “I don’t think he wakes up in the morning and says, ‘I’m gonna serve widows in Africa with the sum of all human knowledge.’” He begins shaking his hands in mock frustration. “He wakes up and says, ‘Those fuckers — they messed it up again!’”

Neither the presenters nor McMillen follow up on Walling’s aside that Giraffedata’s work might be “a little bit negative in some ways.” But it seems arguable to me that this is the real story, and the celebration of Henderson’s efforts is not just misplaced, but symptomatic. Rather than demonstrating the salvific benefits of non-hierarchical organizations, Giraffedata’s work symbolizes their remarkable tendency to turn into formations that are the exact opposite of what the rhetoric suggests: deeply (if informally) hierarchical collectives of individuals strongly attached to their own power, and dismissive of the structuring elements built into explicit political institutions.

This is a well-known problem. It has been well-known at least since 1970 when Jo Freeman wrote “The Tyranny of Structurelessness”; it is connected to what Alexander Galloway has recently called “The Reticular Fallacy.” These critiques can be summed up fairly simply: when you deny an organization the formal power to distribute power equitably—to acknowledge the inevitable hierarchies in social groups and deal with them explicitly—you inevitably hand power over to those most willing to be ruthless and unflinching in their pursuit of it. In other words, in the effort to create a “more distributed” system, except in very rare circumstances where all participants are of good will and relatively equivalent in their ethics and politics, you end up creating exactly the authoritarian rule that your work seemed designed specifically to avoid. You end up giving even more unstructured power to exactly the persons that institutional strictures are designed to curtail.

That this is a general problem with Wikipedia has been noted by Aaron Shaw and Benjamin Mako Hill in a 2014 paper called “Laboratories of Oligarchy? How The Iron Law Extends to Peer Production.” Shaw and Mako Hill are fairly enthusiastic about Wikipedia and peer production, and yet their clear-eyed research, much of which is based on empirical as well as theoretical considerations, forces them to conclude:

Although, invoking U.S. Supreme Court Justice Louis Brandeis, online collectives have been hailed as contemporary “laboratories of democracy”, our findings suggest that they may not necessarily facilitate enhanced practices of democratic engagement and organization. Indeed, our results imply that widespread efforts to appropriate online organizational tactics from peer production may facilitate the creation of entrenched oligarchies in which the self-selecting and early-adopting few assert their authority to lead in the context of movements without clearly defined institutions or boundaries. (23)[1]

In the current case, what is so striking about Giraffedata’s work is that, from the perspective of every reasonable expert angle on the question, Giraffedata is just plain wrong. It is not a fact that “comprised of” is ungrammatical or that it means only what Giraffedata says it does. This is not at all controversial. In an excellent piece at The Guardian, “Why Wikipedia’s Grammar Vigilante Is Wrong,” David Shariatmadari demonstrates the many reasons why this is the case (though as usual, read the comments for typically brusque and/or ‘anti-elite’ elitist opinions to the contrary). Even better is “Can 50,000 Wikipedia Edits Be Wrong?” by Mark Liberman at Language Log, the leading linguistics site in the world, which has been covering this issue–that is, specifically the usage of “comprised of”–since at least 2011. Liberman wryly notes that “It doesn’t seem to have occurred to Mr. McMillen to check the issue out in the Oxford English Dictionary or in Merriam-Webster’s Dictionary of English Usage, or for that matter in literary history, where he might have appreciated the opportunity to correct Thomas Hardy… and also Charles Dickens.” Bizarrely, Wikipedia itself has a page on “comprised of” that endorses the linguists’ view, rather than Giraffedata’s.


Drawing the circle just a bit wider, Giraffedata is a linguistic prescriptivist in a world where the experts agree that prescriptivism is ideology rather than wisdom. Prescriptivism itself is an assertion of power in the name of one’s own authority that claims (erroneously) to be based on higher authorities that do not, in fact, exist. It is, in fact, one of the most persistent targets in writing by actual linguists from across the political spectrum: Liberman rightly calls it “authoritarian rationalism,” and he and Geoff Nunberg (another of the most prominent US linguists) have an interesting back-and-forth about its fit with general right-left politics.

At another level of abstraction, Henderson’s efforts exemplify a lust for power that entails a specific (if perhaps not entirely conscious) rejection of expertise over precisely the topic he cares about.[2] The development of “expertise” is exactly the kind of social, relatively ad-hoc but still structured distribution of power that the new structureless tyrants want to re-hierarchize, with themselves at the top. Does Henderson ask linguists about the rightness or wrongness of his judgment? As Liberman’s work points out, there are obvious, easily available resources in which Henderson might have checked his judgment; it does not appear even to have occurred to him to do so. As Shariatmadari points out, it becomes clear very quickly, even in the McMillen article, that Henderson is aware that the “error” he is “correcting” is not actually a matter even of grammar, but a judgment of taste based on several well-known linguistic fallacies (that synonyms should not exist, or that a word’s origin dictates its current meaning).

None of this is to say that it is “right” or “wrong” to adjust the style of Wikipedia with regard to Henderson’s word-choice hobby horse. But here again is another rejection of a perfectly reasonable and even useful form of distributed authority: editorial authority over a written product. Before Wikipedia, and still today, published encyclopedias and other publications have had rules called “house styles.” These are guidelines made up provisionally by the publishing house to enforce consistency in its work; some are extremely detailed and some are much looser. The house style for any given publication would dictate whether or not to use “comprised of” in the sense that upsets Henderson so much. It would not be a fact that “comprised of” is right or wrong, but only a fact within the context of the publication. And this is actually a better account of how language works, or usually works: “this is how we do it here,” rather than “this is correct” and “this is incorrect.” (Wikipedia does have a very detailed Manual of Style, but it largely refrains from guidelines pertaining to usage, unlike the in-house style manuals of publications like The New Yorker or The Wall Street Journal.)
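The difference between “a local rule” and “a universal fact” is easy to make concrete. Here is a toy sketch of my own (nothing like this appears in the McMillen piece or in Wikipedia’s actual tooling) of what a house-style check amounts to: a provisional, named preference, not a claim about grammar.

```python
import re

# A provisional house preference -- "this is how we do it here" --
# not an assertion that the flagged phrase is ungrammatical.
HOUSE_STYLE = {"comprised of": "composed of"}

def check_house_style(text):
    """Return notes for phrases this publication's style guide prefers to avoid."""
    notes = []
    for phrase, suggestion in HOUSE_STYLE.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", text, re.IGNORECASE):
            notes.append("house style: prefer '%s' to '%s'" % (suggestion, phrase))
    return notes

print(check_house_style("The committee is comprised of nine members."))
# -> ["house style: prefer 'composed of' to 'comprised of'"]
```

Change the dictionary and the “rule” changes with it; that is the whole point of a house style, and precisely what a claim of universal correctness denies.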

At the next level of abstraction, perhaps the most important one, the Wikimedia Foundation employees’ endorsement of Giraffedata as among their “favorite Wikipedians” displays a kind of agnotology–a studied cultivation of ignorance–that feeds structureless tyrannies and authoritarian anti-hierarchies. In order to rule over those whose knowledge or expertise challenges you, the best route is to dismiss or mock that expertise wholesale, to rule it out as expertise at all, in favor of your own deeply-held convictions that you trumpet as a “new kind” of expertise that invalidates the “old,” “incumbent” kinds. This kind of agnotology is widespread in current Silicon Valley and digital culture; it is no less prominent in reactionary political culture, such as the Tea Party and rightist anti-science movements.

Thus Henderson’s work connects to the well-known disdain of many core Wikipedia editors for actual experts on specific topics, and even more to their stubborn resistance (speaking generally; of course there are exceptions) to the input of such experts, when one would expect exactly the opposite to be the case. (As a writer in Wired put it almost a decade ago, “The Wikipedia philosophy can be summed up thusly: ‘Experts are scum.’”) A world-leading expert on Topic A wants to help edit the page on that topic–is the right response to reach out to them and help guide them through the (what should be) minimal rules of your project? Or is it to mock and impugn them for having the temerity to think they are expert in something, in the face of the far more important project that you are expert in? One of Wikipedia’s several pages addressing this problem, “Wikipedia: Expert Retention,” notes:

If by “Wikipedia” one means its values as expressed in policy, then it can be said that Wikipedia definitely does not value expertise. Attempts to establish a policy on credential verification have failed. There are competing essays that say credentials are irrelevant and that credentials matter. An attempt to push through a policy to ignore all credentials failed, though it received considerable support.

The culture of Wikipedia has no single commonly held view, as is illustrated in the discussion pages of the above cited essays and proposals. However, the lack of consensus (and indeed doggedly opposed parties) results in a perceived lack of respect for expertise, a deference normally found elsewhere in society. Anti-expertise positions often are not acted against, so they are in effect encouraged. And as they are encouraged, they more than negate any positive regard for expertise, since the latter is only expressed, at present, in the consideration given by individual editors to those whom they recognize as experts. (emphasis added)

This is why it’s important that the Wikimedia Foundation employees pass so quickly over the possible “negatives” in Giraffedata’s work, and choose to single him out for praise. This is exactly what the most persistent members of the Wikipedia community (if, I think, unconsciously) want: the disparagement of existing (or, in Silicon Valley terminology, “incumbent”) structures for distributing power in the name of a “democratization” that is actually about people with a significant lust for power who are not patient enough to develop their own distributive structures (that is, to work on developing a house style for Wikipedia, or to, I don’t know, study linguistics). In this way, too much of peer production seems like a marketing sheen placed over a very clear and antidemocratic lust for personal power, much as the 1970s communes were, but writ large and with very central social pillars in its sights.

As Freeman’s work has always suggested–and this makes the brute rejection of its reasoning in favor of Hayekian “spontaneous orders” of knowledge (or ignorance) all the more troubling–Wikipedia’s structurelessness is very easily seen not as a social miracle of cooperation but as a breeding ground for tyrants.[3] Mako Hill and Shaw: “the adoption of peer production’s organizational forms may inhibit the achievement of enhanced organizational democracy” (22). That these tyrants operate in the name of democracy makes them characteristic of the contemporary, digitally-inspired agnotological oligarchy.

NOTES

[1] It is worth noting that Shaw and Mako Hill rely in part on the so-called “Iron Law of Oligarchy” postulated by the proto-Fascist and Fascist sociologist Robert Michels in the early part of the 20th century. Michels actually thought it applies to all democratic organizations and cannot be prevented, but Shaw and Mako Hill rely on a great deal of post-Michels research that tends to give greater weight to formal methods of preventing oligarchy than Michels did.

[2] “Lust for power” is the usual English translation of the German word Machtgelüst, which appears prior to the better-known “will to power” in Nietzsche and, unlike the latter term, is specifically meant to indicate the cathexis of desire onto personal power.

[3] Wikipedia founder Jimmy Wales famously traces his philosophical inspiration for Wikipedia to Friedrich Hayek’s 1945 essay “The Use of Knowledge in Society”; Philip Mirowski, our most trenchant critic of neoliberalism, has repeatedly demonstrated the ways in which Hayek’s views specifically advocate ignorance.


Tor Is Not a “Fundamental Law of the Universe”

In what I consider a very welcome act of journalistic open-mindedness, Pando Daily recently published a piece by Quinn Norton that responded both to Yasha Levine’s excellent and necessary piece on the US Government’s funding of the Tor Project and, perhaps even more so, to his even more necessary piece on the amazing attacks the first one received from some of the brightest stars in the encryption, “internet freedom,” and Tor universe.

I want to focus on a small part of Norton’s piece, in which she tries to explain the vicious attacks on Levine’s piece:

The incoherent frothing-at-the-mouth support for the fundamentals of Tor don’t arise from a set of politics, or money, or a particular arrangement of social trust like a statute or constitutional law. The support comes from an appeal to the fundamental laws of the universe, which not even the most vigorous of black budget ops can break.

Yes, the Tor people somehow believe that Tor itself implements a “fundamental law of the universe,” and that their privileged technical knowledge grants them special access that the rest of us lack. That is false; it is breathless narcissism and arrogance at its most outrageous, and very typical of our digital age.

There are fundamental laws of the universe: that something with mass cannot move faster than the speed of light in a vacuum; that mass-energy can neither be created nor destroyed, though matter can be converted into energy and vice versa. These are fundamental laws that DoD can’t change. All technologies dip into these fundamental laws, to greater and lesser degrees.

Tor is not a fundamental law of the universe. Math is *fairly* fundamental, but even the simplest math–say, 2 + 2 = 4–is NOT a fundamental law. Addition obtains under some circumstances, and not under others (this is part of the point of the revolutions in mathematical theory of the 19th century, including non-Euclidean geometry).
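To make that concrete with one small example of my own (it is not Norton’s, and the analogy to non-Euclidean geometry is mine): even the truth of “2 + 2 = 4” depends on which arithmetic we have agreed to work in.

```latex
% A minimal illustration (my example): the "same" sum in two different
% systems of arithmetic. In the integers the familiar result holds; in
% modular ("clock") arithmetic it does not.
\[
  2 + 2 = 4 \quad \text{in } \mathbb{Z},
  \qquad\text{but}\qquad
  2 + 2 \equiv 1 \pmod{3} \quad \text{in } \mathbb{Z}/3\mathbb{Z}.
\]
```

Neither statement is wrong; each obtains within its own system, which is the sense in which even arithmetic falls short of being a “fundamental law of the universe.”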

Grammatically, the phrase “Tor is/is not a fundamental law of the universe”–which, to be clear, is my phrase, not Norton’s–makes no sense. But other than the vague notion of “mathematical laws,” which she does not even directly invoke, Norton’s statement that Tor advocates “appeal” to the “fundamental laws of the universe” is conceptually no clearer. There are not that many fundamental laws. Tor “appeals” to them no more and no less than, say, the NSA does when it uses satellites that rely in part on relativistic physics for geolocation. Relativity itself is a strange candidate for a “fundamental” law, for lots of interesting philosophical and scientific reasons (which does not mean it is not fundamental). My point is that the belief of Tor advocates that they are tapping into something over and above what the rest of us have access to is misbegotten and hubristic in the extreme. If what the Tor people are trying to show is that their cryptographic procedures are sound, fine. But that is not what we are talking about. We are talking about the use of Tor in the world.


The math on which Tor is based appears solid–as far as I can tell, not being a cryptographic mathematician myself–according to both the Tor developers and outside analysts. Yet the actions Tor exists to enable–the use of digital communications systems in our world, supposedly opaque to traffic analysis (the main purpose Tor’s development team claims for their product)–are governed by no parallel solidity. Tor is part of a huge social apparatus. No matter how perfect the math, as long as that apparatus is large and involves people, it cannot be governed entirely by an unbreakable fundamental law of the universe. One is not challenging the laws of motion by buying out the operator of a Tor relay or exit node. It would be more accurate to say that a fundamental law of the social world is that people–including many of the same hackers Quinn Norton defends and celebrates in other writings–will do everything they can to find their way into systems like Tor.

Further, we have historical evidence for this. People keep breaking Tor (something I am personally happy about). Sometimes the Tor developers seem mystified. But I think people will keep breaking it (and so, apparently, do the Tor developers themselves). And I do think those who fund it are aware of its systematic vulnerabilities in a way that those hypnotized by its math are not, which is why the story of its funding is actually extremely relevant. Tor is not something that can’t be controlled by the state: it can be controlled, or at least degraded significantly, in part because the state could shut it down altogether by disallowing the wide range of onion-routing services, relays, and nodes that Tor depends on to run, and by continuing to disallow any new services that Tor were to set up.

Tor has edges. It has layers. It has connections with other services. It has physical systems on which it sits, binaries that must be compiled, and hundreds of other points of vulnerability, even granting that onion routing is mathematically robust to breaking. Oddly enough, Google Director of Engineering and life-extension supplement magnate Ray Kurzweil, whom I consider a nut, believes Moore’s law is a “fundamental law of the universe” (or, more accurately, a corollary to the “law of accelerating returns” that he thinks is fundamental). I think he is wrong about that too, but whether it is fundamental or not, Moore’s law suggests that computers will eventually get fast enough to break current encryption routines–perhaps breaking Tor, perhaps making historical records of Tor use much more subject to network analysis than they seem today. And who knows? Maybe there is some kind of non-Euclidean (or other alternative) approach to cryptography nobody has stumbled on yet that will square the circle in a way none of us can foresee.
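The arithmetic behind that suggestion is easy to sketch. Here is a back-of-the-envelope calculation of my own (the attacker’s speed and the strict 18-month doubling are illustrative assumptions, not claims about any real agency’s capacity): if computing power doubles on schedule, a brute-force search that is hopeless today becomes feasible on a predictable timetable.

```python
import math

def years_until_feasible(key_bits, ops_per_sec_today,
                         budget_seconds=3.15e7,   # roughly one year of runtime
                         doubling_months=18.0):   # the Moore's-law assumption
    """Years until 2**key_bits brute-force trials fit in the runtime budget,
    assuming compute doubles every doubling_months."""
    ops_needed = 2.0 ** key_bits
    ops_affordable_today = ops_per_sec_today * budget_seconds
    if ops_affordable_today >= ops_needed:
        return 0.0
    doublings = math.log2(ops_needed / ops_affordable_today)
    return doublings * doubling_months / 12.0

# A hypothetical attacker doing 10**15 trials per second today:
print(years_until_feasible(128, 1e15))   # -> roughly 80 years for a 128-bit key
```

The point is not the particular numbers, which ignore algorithmic advances entirely; it is that “unbreakable” is always indexed to a date.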

It is precisely the hubris and arrogance of Tor and its developers–the hubris of computationalists who believe in their self-adjudicated superior knowledge of machines compared to everyone else, and who believe that their access to fundamental computational/mathematical “laws” insulates them from society and politics–that makes this discussion so difficult. Tor’s amazing reactions to the Pando coverage are exactly the kind of petulant, arrogant, dismissive, power-hungry reaction computationalists typically have when anyone they deem unworthy dares actually to ask them to account for themselves. They think they are above us, in so many different ways. They are not, and they hate anyone who dares to suggest that they are just people too.


All Cybersecurity Technology Is Dual-Use

Dan Geer is one of the more interesting thinkers about digital security and privacy around. Geer is a sophisticated technologist with an extremely varied and rich background who has also, fairly recently, become a spook of some kind. He is currently the Chief Information Security Officer for In-Q-Tel, the technology investment subsidiary of the CIA, popularly and paradoxically known as a “not-for-profit venture capital firm”; In-Q-Tel in fact gets much more directly involved with its investment targets, with the intent of providing “‘ready-soon innovation’ (within 36 months) vital to the IC [intelligence community] mission,” and therefore shuns the phrase “venture capital.”

This might lead one to think that Geer would speak as what Glenn Greenwald likes to call a “government stenographer,” but I find his speeches and writings to be both unusually incisive and extremely independent-minded. He often says things that nobody else says, and he says them from a position of knowledge and experience. And what he says often does not line up with either what one imagines “government” thinks, or with what many in industry want; he has recently suggested, contrary to what Google and many “digital freedom” advocates affirm, that the European “Right to Be Forgotten” actually does not go far enough in protecting privacy.

In his talk at the 2014 Black Hat USA conference, “Cybersecurity as Realpolitik” (text; video)–the same talk in which he made those remarks about the Right to Be Forgotten–Geer made the following deeply insightful observation:

All cyber security technology is dual use.

Here’s the full context of that statement:

Part of my feeling stems from a long-held and well-substantiated belief that all cyber security technology is dual use. Perhaps dual use is a truism for any and all tools from the scalpel to the hammer to the gas can — they can be used for good or ill — but I know that dual use is inherent in cyber security tools. If your definition of “tool” is wide enough, I suggest that the cyber security tool-set favors offense these days. Chris Inglis, recently retired NSA Deputy Director, remarked that if we were to score cyber the way we score soccer, the tally would be 462-456 twenty minutes into the game,[CI] i.e., all offense. I will take his comment as confirming at the highest level not only the dual use nature of cybersecurity but also confirming that offense is where the innovations that only States can afford is going on.

Nevertheless, this essay is an outgrowth from, an extension of, that increasing importance of cybersecurity. With the humility of which I spoke, I do not claim that I have the last word. What I do claim is that when we speak about cybersecurity policy we are no longer engaging in some sort of parlor game. I claim that policy matters are now the most important matters, that once a topic area, like cybersecurity, becomes interlaced with nearly every aspect of life for nearly everybody, the outcome differential between good policies and bad policies broadens, and the ease of finding answers falls. As H.L. Mencken so trenchantly put it, “For every complex problem there is a solution that is clear, simple, and wrong.”


Dan Geer at the Black Hat USA 2014 conference (Photo: Threatpost)

Now what Geer means by “dual-use” here is one of the term’s ordinary meanings: all cybersecurity technology (and really all digital technology) has both civilian and military uses.

But we can expand that, as Geer suggests when he mentions the scalpel, hammer, and gas can, in another way the term is sometimes used: all cybersecurity technology has both offensive and defensive uses.

This basic fact, which is obvious from any careful consideration of game theory or of military or intelligence history, seems absolutely lost on the most vocal and most active proponents of personal security: the “cypherpunks” and crypto advocates who continually bombard us with the recommendation that we “encrypt everything.” (In “Opt-Out Citizenship” I describe the anti-democratic nature of the end-to-end encryption movement.)

Not only that: I don’t think “cybersecurity” technology is a broad enough term, either; it would be better to say that a huge amount of digital technology is dual-use. That is to say that a great deal of digital technology has uses to which it can be and will be put that are neither obvious nor, necessarily, intended by its developers and even its users, and that often work in exactly the opposite way from what its developers or advocates say (or think).

This is part of what drives me absolutely crazy about the cypherpunks and other crypterati who have come out in droves in the wake of the Snowden revelations.

They act and write as if they control the effects of what they do; as if, unlike everyone else in the world, what they do will be accepted as-is, will end the story, will have only the direct effects they intend.

Thus, they write as if significantly upping the amount and efficacy of encryption on the web is something that “bad” hackers and “bad” cypherpunks will just accept.

But we know that’s not true. Any advance in encryption has both offensive and defensive uses. In its most basic form, that means that while encoding or encrypting information might look defensive, the ability to decrypt or decode that information is offensive.
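A toy sketch of my own (this is an illustration, not anything from Geer’s talk, and a repeating-key XOR “cipher” is far too weak for real use) makes the symmetry vivid: the very same operation is both the “defensive” encryption and the “offensive” decryption. The capability itself is dual-use; only the intent differs.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key.
    Running it once encrypts; running it again decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"meet at midnight"
key = b"not-a-real-key"

ciphertext = xor_cipher(secret, key)       # "defense": protecting a message
recovered = xor_cipher(ciphertext, key)    # "offense": the identical function reads it
assert recovered == secret
```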

In another form, it means that no matter how carefully and thoroughly you develop your own encryption scheme, the very act of doing that does not merely suggest but ensures—particularly if your new technology gets adopted—that your opponents will use every means available to defeat it, including the (often, very paradoxically if viewed from the right angle, “open source”) information you’ve provided about how your technology works.

This isn’t a recipe for peace or for privacy. It’s an arms race. Cypherpunks might see it as some kind of perverse “peace war,” because they see themselves as “only” developing defensive techniques–although, given the penchant of those folks for obscurity and anonymity, it is really special pleading to think that the only people involved in these efforts are engaged in defense.

But they aren’t. They are developing at best new “missile shields,” and the response of offensive technologists has to be—it is required to be, and they are paid to do it—better missiles that can get by the shields.

Further, because these crypterati almost universally adopt an anarcho-capitalist or far-right libertarian hatred for everything about government, they seem unable to grasp that the actual mission of law enforcement and military intelligence–the mission they must carry out, even when they are following the law and the Constitution perfectly–involves doing everything in their power to crack and penetrate every encryption scheme in use. They have to. One of the ways they do that is to hire the very folks who bray so loudly about the sweet nature of absolute technical privacy–and once on the other side, who is better at finding ways around cryptography than those who pride themselves on their superior hacking skills? And the very development of these skills entails the creation of the universal surveillance systems used by the NSA as revealed by Snowden and others.

The population caught in the middle of this arms race is not made more free by it. We are increasingly imprisoned by it; we are increasingly collateral damage. Rather than escalation (or at least in addition to it), we need to talk about a different paradigm entirely: disarmament.


Social Media as Political Control: The Facebook Study, Acxiom, & NSA

Although it didn’t break in the major media until last week, around June 2 researchers led by Adam Kramer of Facebook published a study in the Proceedings of the National Academy of Sciences (PNAS) entitled “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” The publication has triggered a flood of complaints and concerns: is Facebook manipulating its users routinely, as it seems to admit in its defense of its practices? Did the researchers–two of whom were at universities (Cornell and the University of California-San Francisco) during the time the actual study was conducted in 2012–get proper approval for the study from the appropriate Institutional Review Board (IRB), required of all public research institutions (and most private institutions, especially if they take Federal dollars for research projects)? Was Cornell actually involved in the relevant part of the research (as opposed to analysis of previously-collected data)? Whether or not IRB approval was required, did Facebook meet reasonable standards for “informed consent”? Do Terms of Service agreements accomplish not just the letter but the spirit of the informed-consent guidelines? Could Facebook see emotion changes in its individual users? Did it properly anonymize the data? Can Facebook manipulate our moods? Was the effect it noticed even significant in the way the study claims? Is Facebook manipulating emotions to influence consumer behavior?

While these are all critical questions, most of them seem to me to miss the far more important point, one that has so far been gestured at only by Zeynep Tufekci (“Facebook and Engineering the Public”; Michael Gurstein’s excellent “Facebook Does Mind Control,” which appeared almost simultaneously with this post, makes similar points to mine) and former Obama election campaign data scientist Clay Johnson (whose concerns are more than a little ironic). To see its full extent, we need to turn briefly to Glenn Greenwald and Edward Snowden. I have a lot to say about the Greenwald/Snowden story, most of which I’ll avoid going into for the time being, but for present purposes I want to note that one of the most interesting facets of that story is the question of exactly what each of them thinks the problem is that they are working so diligently to expose: is it a military intelligence agency out of control, an executive branch out of control, a judiciary failing to do its job, Congress not doing oversight the way some constituents would like, the American people refusing to implement the Constitution as Snowden and Greenwald think we should, and so on? Even more pointed: whomever we designate as the bad actors in this story, why are they doing it? To what end?

For Greenwald, the bad actors are usually found in the NSA and the executive branch (although, as an aside, his reporting often seems to show that all three branches of government are being read into or are overseeing the programs as required by law, which definitely raises questions about who the bad guys actually are). More importantly, Greenwald has an analysis of why the bad actors are conducting warrantless mass surveillance: he calls it “political control.” Brookings Institution Senior Fellow Benjamin Wittes has a close reading of Greenwald’s statements on this topic in No Place to Hide (see also Wittes’s insightful review of Greenwald’s book), where he finds the relevant gloss of “political control” in this quotation from Greenwald:

All of the evidence highlights the implicit bargain that is offered to citizens: pose no challenge and you have nothing to worry about. Mind your own business, and support or at least tolerate what we do, and you’ll be fine. Put differently, you must refrain from provoking the authority that wields surveillance powers if you wish to be deemed free of wrongdoing. This is a deal that invites passivity, obedience, and conformity. The safest course, the way to ensure being “left alone,” is to remain quiet, unthreatening, and compliant.

That is certainly a form of political control, and a disturbing one (though Wittes, I think very wisely, asks: if this is the goal of mass surveillance, why is it so ineffective with regard to Greenwald himself and the other actors in the Snowden releases? Further, how was suppression-by-intimidation supposed to work when the programs were entirely secret, and were exposed only by the efforts of Greenwald and Snowden?). But it’s not the only form of political control, and I’m not at all sure it’s the most salient or most worrying of the kinds of political control enabled by ubiquitous, networked, archived communication itself: that is to say, by the functionality, not the technology, of social media itself.

The reason I find it ironic that Clay Johnson should worry that Mark Zuckerberg might be able to “swing an election by promoting Upworthy posts 2 weeks beforehand” is that this is precisely, at a not very extreme level of abstraction, what political data scientists do in campaigns. In fact it’s not all that abstract: “The [Obama 2012] campaign found that roughly 1 in 5 people contacted by a Facebook pal acted on the request, in large part because the message came from someone they knew,” according to a Time magazine story, for example. In other words, the campaign itself did research on Facebook and how its users could be politically manipulated–swinging elections by measuring how much potential voters like specific movie stars, for example (in the case of Obama 2012, the stars turned out to be George Clooney and Sarah Jessica Parker). Johnson’s own company, Blue State Digital, developed a tool the Obama campaign used to significant advantage–“Quick Donate,” deployed so that “supporters can contribute with just a single click,” which might mean that it’s easy, or might mean that supporters act on impulse, before what Daniel Kahneman calls their “slow thinking” can consider the matter carefully.

Has it ever been thus? Yes, surely. But the level of control and manipulation possible in the digital era exceeds what was possible before by an almost unfathomable extent. “Predictive analytics” and big data and many other tools hint at a means for manipulating the public in all sorts of ways entirely without its knowledge. These methods go far beyond manipulating emotions, and so focusing on the specific behavior modifications and effects achieved by this specific experiment strikes me as missing the point.


Facebook Security Agency (Image source: uncyclopedia)

Some have responded to this story along the lines of Erin Kissane (“get off Facebook”) or Dan Diamond (“If Facebook’s Secret Study Bothered You, Then It’s Time To Quit Facebook”). I don’t think this is quite the right response, for several reasons. It puts the onus on individuals to fix the problem, but individuals are not the source of the problem; the social network itself is. It’s not that users should get off of Facebook; it’s that the kind of services Facebook sells should not be available. I know that’s hard for people to hear, but it’s a thought that we have not just the right but the responsibility to consider in a democratic society: that the functionality itself might be too destructive (and disruptive) to what we understand our political system to be.

More importantly, despite these protestations, it isn’t possible to get off Facebook. For “Facebook” here read “data brokers,” because that’s what Facebook is in many ways, and as such it is part of a universe of hundreds and perhaps even thousands of companies (of which the most famous non-social-media company is Acxiom) that make monitoring, predicting, and controlling the behavior of people–that is, in the most literal sense, political control–their business. As Julia Angwin has demonstrated recently, we can’t get out of these services even if we want to, and to some extent the more we try, the more damaging we make the services to us as individuals. Further, these services aren’t concerned with us as individuals, as Marginal Utility blogger and New Inquiry editor Rob Horning and I, among others, have been insisting for years (among Horning’s many excellent pieces on the structural as opposed to the personal impact of Facebook, see “Social Graph vs. Social Class,” “Hollow Inside,” “Social Media Are Not Self-Expression,” and some of his pieces at Generation Bubble; it’s also a frequent topic on his Twitter feed; Peter Strempel’s “Social Media as Technology of Control” is a sharp reflection on some of Horning’s writing): these effects occur at population levels, as probabilities (a toy version of this arithmetic appears after the quotation below). The Obama campaign did not care that much whether you or your neighbor voted for him, but it did care that if it sprayed Chanel No. 5 in the air one June morning, one of the two of you was 80% likely to vote for him, and the other was 40% likely not to go to the polls. Tufekci somewhat pointedly argued for this in the aftermath of the 2012 election:

Social scientists increasingly understand that much of our decision making is irrational and emotional. For example, the Obama campaign used pictures of the president’s family at every opportunity. This was no accident. The campaign field-tested this as early as 2007 through a rigorous randomized experiment, the kind used in clinical trials for medical drugs, and settled on the winning combination of image, message and button placement.
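The population-level logic that both the Chanel No. 5 hypothetical and Tufekci’s example rely on is simple expected-value arithmetic. A toy sketch of my own (the numbers are invented for illustration): nobody needs to know, or care, what any individual will do.

```python
def expected_extra_votes(people_reached, probability_shift):
    """Expected additional votes when each reached person's probability
    of voting for the candidate rises by probability_shift."""
    return people_reached * probability_shift

# A nudge that shifts each person's probability by just 0.4 percentage
# points, delivered to two million users:
print(expected_extra_votes(2_000_000, 0.004))   # -> 8000.0 expected votes
```

A shift of 0.4 points is invisible at the level of any one person; multiplied across a platform-sized population, it is the margin of many real elections.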

Further, you can’t even get off Facebook itself, which is why I disagree pretty strongly with the implications of a recent academic paper of Tufekci’s, in which she writes fairly hopefully about strategies activists use to evade social media surveillance by performing actions “unintelligible to algorithms.” I think this provides comfort only if you are looking at individuals and at specific social media platforms, where it may well be possible to obscure what Jim is doing by using alternate identities, locations, and other means of hiding who is doing what. But most of the big data and data mining tools focus on populations, not individuals, and on probabilities, not specific events. Here, I don’t think it matters a great deal whether you are purposely obscuring activities or not, because those “purposely obscured” activities also go into the big data hopper and offer fuel for the analytical fire; they may well reveal much more than we think about intended future actions and behavior patterns, and leave us much more susceptible than we know to relatively imperceptible behavioral manipulation.

Here it’s ironic that Max Schrems is in the news again, having just published a book in German called Kämpf um deine Daten (in English, Fight for Your Data); Schrems is the spokesman for the Europe v. Facebook group that is challenging in European courts not so much the NSA itself as the cooperation between NSA and Facebook. A recent story about Schrems’s book in the major German newspaper Frankfurter Allgemeine Zeitung (FAZ) notes that what got Schrems concerned about the question of data privacy in the first place was this:

Schrems machte von seinem Auskunftsrecht Gebrauch und erwirkte im Jahr 2011 nach längerem Hin und Her die Herausgabe der Daten, die der Konzern über ihn gespeichert hatte. Er bekam ein pdf-Dokument mit Rohdaten, die, obwohl Schrems nur Gelegenheitsnutzer war, ausgedruckt 1222 Seiten umfassten – ein Umfang, den im letzten Jahrhundert nur Stasi-Akten von Spitzenpolitikern erreichten. Misstrauisch machte ihn, dass das Konvolut auch Daten enthielt, die er mit den gängigen Werkzeugen von Facebook längst gelöscht hatte.

Here’s a rough English translation, with help from Google Translate:

Schrems exercised his right of access and, after a long back and forth, obtained in 2011 the data the company had stored about him. He received a PDF document of raw data that, although Schrems was only an occasional user, ran to 1,222 printed pages–a scale that in the last century only the Stasi files of top politicians reached. What made him suspicious was that the documents also contained data he had long since deleted with Facebook’s normal tools.

In fact, it’s probably even worse, whether we consider data brokers like Acxiom (which maintain detailed profiles on us whether we like it or not) or Facebook itself, which it is reasonable to assume does just the same thing, whether we have signed up for it or not. And it is no doubt true that, as the great, skeptical data scientist Cathy O’Neil says over at her MathBabe blog, “this kind of experiment happens on a daily basis at a place like Facebook or Google.” This is the real problem; marking this specific project out as “research” and as an unacceptable but unusual effort misses the point almost entirely. Google, Facebook, Twitter, the data brokers, and many more are giant research experiments, on us. “Informed consent” for this kind of experiment would have to be provided by the whole population, even those who don’t use social media at all, and the possible consequences would have to include “total derailing of your political system without your knowledge.”
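Part of why such experiments run daily is that platform scale makes even minuscule effects statistically detectable. A minimal sketch of my own (the counts are invented; this is not the PNAS study’s actual analysis) using a standard two-proportion z-test:

```python
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two observed proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# A 0.1-percentage-point behavioral difference (10.1% vs 10.0%), invisible
# to any individual, with three million users per experimental arm:
print(two_proportion_z(303_000, 3_000_000, 300_000, 3_000_000))  # -> about 4.1
```

A z of about 4 is overwhelming “statistical significance” for an effect no user could ever notice in their own life, which is exactly why arguing over the size of the effect misses the point.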

(As an aside those who have gone out of their way to defend Facebook—see especially Brian Keegan and Tal Yarkoni—provide great examples of cyberlibertarianism in action, emotionally siding with corporate capital as itself a kind of social justice or political cause; Alan Jacobs provides a strong critique of this work.)

This, in the end, is part of why I find Greenwald’s interpretation of Snowden’s materials very disturbing, along with his relentless attacks on the US government, and no less his concern for US companies only insofar as their business has been harmed by the Snowden information he and others have publicized. Political control, in any reasonable interpretation of that phrase, refers to the manipulation of the public to take actions and maintain beliefs that it might not arrive at via direct appeal to logical argument. Emotion, prejudice, and impulse substitute for deep thinking and careful consideration. While Greenwald has presented some–but truthfully, only some–evidence that the NSA may engage in political control of this sort, he blames it on the government rather than on the existence of tools, platforms, and capabilities that do not just enable but are literally structured around such manipulation. Bizarrely, even Julian Assange himself makes this point in his book Cypherpunks, but it’s a point Greenwald continues to shove aside.

Social media is by its very nature a medium of political control. The issue is much less who is using it, and why they are using it, than that it exists at all. What we should be discussing–if we take the warnings of George Orwell and Aldous Huxley at all seriously–is not whether NSA should have access to these tools. If the tools exist, and especially if we endorse some form of the nostrum that Greenwald in other modes rabidly supports–that information must be free–then we have no real way to prevent them from being used to manipulate us. How the NSA (and Facebook, and Acxiom) uses this information is of great concern, to be sure; but the question we are not asking is whether it is not the specific users and uses we should be curtailing, but the very existence of the data in the first place. It is my view that as long as this data exists, its use for political control will be impossible to stop, and that the lack of regulation of private companies means that we should be even more concerned about how they use it (and how it is sold, and to whom) than we are about what heavily-regulated governments do with it.

Regardless, in both cases, the solution cannot be to chase after the tails of these wild monkeys–it is to get rid of the bananas they are pursuing in the first place. Instead, we need to recognize what social media and data brokerage do: a kind of non-physical violence to our selves, our polities, and our independence. It is time at least to ask whether social media itself, or at least some parts of it–the functionality, not the technology–is too antithetical to basic human rights and to the democratic project to be acceptable in a free society.


Bitcoinsanity 2: Revolutions in Rhetoric

Bitcoin is touted, publicized and promoted as an innovation in financial technology. Usually those doing the promoting have very little experience with finance in general or with financial technology in particular–a huge, booming industry mostly made up of proprietary technologies that those of us who don’t work for major banks or trading firms know very little about–but are happy to claim with tremendous certainty that this particular financial technology is utterly transformative.

(As a side note, the blockchain itself is not inherently financial technology, and it may well prove more useful and interesting in contexts other than finance, such as the “fully decentralized data management service” offered by companies like MaidSafe; these kinds of developments are preliminary enough that I don’t think it’s yet possible to judge their usefulness).

Like certain other rhetorical constructions (e.g. “Arab Spring,” “open”), at a certain point the rhetoric and the discourse it engenders start to seem as much of the point as the underlying technical or political facts are. The rhetoric overtakes those facts; it becomes the facts. Bitcoin can be even harder than the “Arab Spring” to see from this angle, because it really is a piece of software, and a distributed network of users of that software.

Regardless: unlike some pieces of software, and like other social practices, Bitcoin’s main function so far is rhetorical. Bitcoin enables and licenses all sorts of argumentative and rhetorical practices that would not otherwise be possible in just this fashion, and the creation and propagation of those practices has become important–perhaps even central–to whatever “Bitcoin” is. This is not peripheral, unavoidable, unexceptionable tomfoolery; it is a core part of what Bitcoin is. Until and unless Bitcoin actually starts to function as a currency (meaning that its value stops fluctuating for a significant period of time), or its advocates admit that “value fluctuations” and “currency” are incompatible with each other, this will continue to be the case.

It’s not in any way peripheral. No matter how many of them I read, I am still astonished at the number of pieces that come out nearly every day that “explain” how Bitcoin works (although what they actually describe is the blockchain technology), then give some examples of Bitcoin being exchanged in the real world, then move from “Bitcoin is revolutionizing finance” to “Bitcoin will revolutionize everything” without in any way connecting the dots to what these concepts actually mean as they are used today. Why, just across the transom as I’m writing this comes “How Bitcoin Tech Will Revolutionise Everything from Email to Governments” out of “Virgin Entrepreneur” (run of course by the well-known decentralizer Richard Branson, who surely invests in technologies because they are likely to defuse radically the power of his enormous wealth), where anti-statist libertarian comedian (and if those aren’t qualifications to dismantle the world financial system, what would be?) Dominic Frisby (@dominicfrisby) proclaims that the “wonderful Ayn Rand stuff” of which the blockchain is constructed leads us to ask:

What indeed will be the purpose of representative democracy when any issue can be quickly and efficiently decided by the people and voted on via the block chain? The revolution will not be televised. It will be cryptographically time stamped on the block chain.

Well, what is the purpose of representative democracy? One might well ask that question as one advocates loosing on the world a technology designed to render it impotent. In the US, and in most of the democratic world, we have representative governments bound by laws and constitutions specifically to avoid the well-known dangers of majoritarian rule and of letting each person pursue their “wonderful Ayn Rand” interests without any sort of check on their powers.

Bitcoin provides a whole new iteration of the far right’s ability to sell these once well-discarded ideas to an unsuspecting public that is (ironically, in the “information age”) incredibly uninformed about the way government works and the way democracy and laws have been carefully constructed to work over hundreds of years. Yes, they work incredibly badly. The only thing worse than the way they work is to get rid of them entirely, without any kind of meaningful structure to replace them. After all, we know a lot about what completely unregulated democratic discussions look like today–we need look no further than reddit or 4chan or Twitter. Imagine what that kind of logic and conduct, magnified into governmental power, looks like. Actually, you don’t have to imagine, because we are seeing plenty of companies today take that power for themselves, existing laws and structures and regulations be damned.

Here, I’ve collected just a small sampling of real-life statements from Bitcoinistas that demonstrate the level of rhetorical know-nothingism for which Bitcoin is particularly (although by no means exclusively) responsible right now. Most of them were reported by the great Twitter accounts Shit /r/Bitcoin says (which, as the name implies, samples quotations from the /r/Bitcoin subreddit) and bitcoin.txt. Word of warning: if any of what you read in these comments makes sense to you, I think you probably need to read more. A lot more.

Thoughts on Banking, Taxation, & Monetary Theory for Which John Maynard Keynes Bears No Conceivable Responsibility

Of course the government want to control the currency. They want to have ultimate power over everything, the people be damned. Digital currency can compete with the fiat banking system which is used to loot the value of currency on a continual basis. (Source: Robert Zraick, Jan 2013, comment on Forbes article)

[Image: Bitcoin on Reddit. Source: @RedditBTC]

Political Science You Won’t Find in John Locke or The Federalist Papers

Bitcoin is a direct threat to corrupt governments who control and manipulate currency, and use taxpayer funds to buy votes. You better believe they’re going to ban it! But mutual barter systems will prevail on the web… and it’s a great thing. It will destroy the power that government yields uncontrollably and put it back into the hands of the people where it belongs. (Source: Douglas Karr, Jan 2013, comment on Forbes article)

Economic Theory, Courtesy John Birch Society

I understand how they work… unlike ANY of the old-school economists, who also failed to predict the 2008 crash, and who just went along with what it was acceptable to say. The more “established” an economist is, the more likely they are to be wrong about bitcoins. This has been the pattern so far. You might as well ask the doddering self-entitled satin-tour-jackets wearing old twats from the RIAA about torrent protocols. (Source: Genomicon, Apr 2, 2013)

We Come to Build a Better World


‘Permissionless Innovation’: Using Technology to Dismantle the Republic

There may be no more pernicious and dishonest doctrine among Silicon Valley’s avatars than the one they call “permissionless innovation.” The phrase entails the view that entrepreneurs and “innovators” are the lifeblood of society and must, for the good of society, be allowed to push forward without needing to ask for “permission” from government. The main advocates for the practice are found at the Koch-directed and -funded libertarian Mercatus Center and its tech-specific Technology Liberation Front (TLF), particularly Senior Research Fellow Adam Thierer; it’s also a phrase one hears occasionally from apparently politically neutral “internet freedom” organizations, as if it were not tied directly to market fundamentalism.

Whether or not “innovators” would be better off in achieving their own goals without needing to ask for “permission,” the fact is that another name for “permission” as it is used in this instance is “democratic governance.” Whether or not it is best for business to have democratic government looking over its shoulders, such oversight is absolutely, indubitably necessary if democratic governance is to mean anything. That is why libertarians had to come up with a new term for what they want; “laws and regulations don’t apply to us” might tip off the plebes to what is really going on.

Associated with certain aspects of “open internet” rhetoric by, among others, “father of the internet” (and “Google’s Chief Internet Evangelist,” in case you wonder where these positions are coming from) Vint Cerf—yet another site where we should be paying much more careful attention to the deployment of “open”—“permissionless innovation” has gained most traction among far-right market fundamentalists like the TLF.

In comments they submitted to the FAA’s proposed rules for “test sites” for the integration of commercial drones into domestic airspace, the TLF folks wrote:

As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand.

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.

Note how cleverly the technical nature of the “open platform” of the internet—“open” in that case meaning that the protocols are not proprietary, which entails very little or nothing about regulatory status—merges into the inability or inadvisability of government to regulate it. This is cyberlibertarian rhetoric in its most pointed function: using language about the nature of technological change that is hard to disagree with, so as to garner support for extreme political and economic positions we may not even realize we are going along with. “Open Internet, yes!” “Keep your paternalistic ‘permission’ off our backs—for democracy!” Or not.
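It is worth being concrete about what “open” legitimately means in the technical sense. Here is a minimal sketch in Python, assuming only a public web server to talk to (example.com is the conventional placeholder host): anyone can speak HTTP to it using nothing but the published specification, with no license or proprietary toolkit required.

```python
import socket

# "Open" in the technical sense: the HTTP protocol is publicly
# specified (RFC 7230), so any program can implement it directly.
# This entails nothing about whether the *uses* of the network
# can or should be regulated.
sock = socket.create_connection(("example.com", 80))
sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
response = sock.recv(4096).decode("latin-1")
sock.close()
print(response.splitlines()[0])  # e.g., "HTTP/1.1 200 OK"
```

That is the entirety of the platform’s “openness”: a published wire format anyone may implement. Nothing in it entails that the uses to which the network is put cannot or should not be regulated.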

The market fundamentalists of TLF and Silicon Valley would love you to believe that “permissionless innovation” is somehow organic to “the internet,” but in fact it is an experiment we conducted for a long time in the US, and the experiment proved that it does not work. From the EPA to the FDA to OSHA, nearly every Federal (and State) regulatory agency exists because of significant, usually deadly failures of industry to restrain itself. We don’t need to look very far to see how destructive unregulated industry can be: just think of the “Superfund” act, authorized in 1980 after more than a decade of environmental protest had proved ineffective in getting industry not simply to stop polluting, but to stop contaminating sites so thoroughly that they directly damaged agriculture and human health (including killing people), to say nothing of more traditional environmental concerns—practices for which “permissionless” industry did not merely shirk responsibility, but which it actively hid.

Consider OSHA, created only in 1970, after not merely decades but centuries of employment practices so outrageous that it was not until 60 years after the Triangle Shirtwaist Fire that the government finally acted to limit the number of workers directly killed by their employers. When OSHA was created in 1970, 14,000 workers were killed on the job each year in the US; despite the workforce nearly doubling since then, in 2009 only 4,400 were killed—which is still, by the way, awful. And industry accepted and accepts OSHA standards kicking and screaming every step of the way.
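The arithmetic behind those numbers is worth making explicit. The workforce figures below are my own rough approximations (broadly in line with BLS civilian employment data), used only to illustrate the scale of the change:

```python
# Workforce figures are approximations for illustration (roughly
# consistent with BLS civilian employment data); the death counts
# are the figures cited above.
deaths_1970, workforce_1970 = 14_000, 78_000_000
deaths_2009, workforce_2009 = 4_400, 140_000_000

rate_1970 = deaths_1970 / workforce_1970 * 100_000  # ~17.9 per 100,000 workers
rate_2009 = deaths_2009 / workforce_2009 * 100_000  # ~3.1 per 100,000 workers

print(f"1970: {rate_1970:.1f} per 100k; 2009: {rate_2009:.1f} per 100k")
print(f"decline: {1 - rate_2009 / rate_1970:.0%}")  # ~82%
```

On those assumptions, the on-the-job fatality rate fell from roughly 18 to roughly 3 per 100,000 workers, a decline of over 80 percent, and nearly all of it came after, not before, regulators were given the authority to act.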

“Permissionless innovation” suggests that the correct order for dramatic technological changes should be “first harm, then fix.” This is of course the opposite of the way important regulatory bodies like the FDA—let alone doctors themselves following the Hippocratic Oath—approach their business: “first, do no harm.” The “permissionless innovation” folks would have you believe that in the rare, rare case in which one of their technologies harms somebody, they will be the first to step in and fix things up, maybe even making those they’ve harmed whole.

Yet we have approximately zero examples of market fundamentalists stepping in to say that “hey, we asked for ‘permissionless innovation,’ so since we fucked up, it’s our responsibility to fix things up.” On the contrary, they are the same people who then argue that “people make their own choices” when they “choose” to use technology whose consequences they can’t actually fathom at all, and that those people are therefore owed nothing. So what they really want is no government beforehand, and no government afterwards—more accurately, no government at all.

It’s tempting to argue that digital technology is different from drugs or food, but that argument is belied by all sorts of facts. Silicon Valley is trying to put its technology inside and outside of every part of the world, from the “Internet of Things” to drones to FitBit to iPhone location services and on and on. These technologies are meant to infiltrate every aspect of our lives—what is needed is more, not less, regulation, and more creative ways to regulate, since these technologies by design run across many different existing spheres of law and regulation.

[Image: Polluted West Virginia water. Photo credit: Crystal Good, PBS]

This is no idle speculation. Even today, we have more than enough examples of what “permissionless innovation” can do. We need look back no further than January of this year, when the crafty market fundamentalists at the deliberately named Freedom Industries caused a huge chemical spill that polluted drinking water throughout West Virginia. Freedom Industries deliberately bypassed and found loopholes in existing regulations so as to produce and stockpile chemicals whose impact on human health is unknown. And were the good “permissionless innovation” folks at Freedom standing up, taking responsibility for the harm they’d caused? Guess again.

Deliberately getting around the EPA is one thing, but technological innovations closer to the digital world follow the same pattern, with “innovators” denying the responsibility that permissionless innovation would suggest they must take. We know that most soap products today contain the chemical triclosan, an antibacterial substance that, when loosed on the environment thanks to imperfect regulation, does not actually work as advertised. Instead, the FDA believes it harms humans and the environment, actually producing drug-resistant bacteria—a huge concern in an era of diminishing antibiotic effectiveness—and the chemical was banned by the EU in 2010. Despite this, US producers continue to sell the products because they appeal to consumers’ misguided (and advertising-fueled) belief that “killing bacteria” must be good.

In a similar vein, the inclusion of so-called “microbeads” in cosmetics, soaps, and toothpaste follows exactly the pattern permissionless innovation desires. The new technology, which as near as I can determine serves only marketing purposes (it makes part-liquid substances sparkle), was not covered by existing regulation, and thus has become nearly ubiquitous in a range of products. But it turns out that the beads, because they are so small, leach throughout the environment, escape the effects of water treatment plants that aren’t prepared for them, and then concentrate in marine life, including fish that humans eat. Among the many reasons that is bad, the beads “tend to absorb pollutants, such as PCBs, pesticides and motor oil,” likely killing fish and adding to the toxic load of humans who eat fish.

Some companies—including L’Oreal, Procter & Gamble, the Body Shop and Johnson & Johnson—agreed to phase out the microbeads when presented by researchers with evidence of the damage they cause. But others haven’t, and the State of Illinois has just recently passed legislation to outlaw them altogether, since the Great Lakes, North America’s largest bodies of freshwater, have been found to be thoroughly contaminated with them.

So we go from an apparently harmless product—but one, we note, that served no important function in health or welfare—to an inadvertently and potentially seriously damaging technology that we now have to try to unwind. Scientists are concerned that the microbeads already in the Great Lakes are causing significant damage, so the voluntary cessation is welcome, but it doesn’t solve the problem that’s already been caused.

Cosmetic and pharmaceutical manufacturers are already familiar with regulatory bodies, and so it is not all that surprising that some of them have voluntarily agreed to curb their practices—after the harm has been done. Silicon Valley companies have so far demonstrated very little of the same deference—on the contrary, they continue business practices even after regulators directly tell them that what they are doing violates the law.

“Permissionless innovation” is a license to harm; it is a demand that government not intrude in exactly the area that government is meant for—to protect the general welfare of citizens. It is a recipe for disaster, and I have no hesitation whatsoever about saying that, in the contest between human health and social welfare, on the one hand, and the for-profit interests of “innovators,” on the other, society is well served by erring on the side of caution. As quite a few of us, including Astra Taylor in her recent The People’s Platform, have started to say, the proliferation of digital technology into every sphere of human life suggests we need more and more careful regulation, not less–unless we want to learn what the digital equivalent of Love Canal might be.
