Among the many default, background, often unexamined assumptions of the digital revolution is that sharing is good. A major part of the digital revolution in rhetoric is to repurpose existing language in ways that advantage the promoters of one scheme or another. It is no surprise, then, that while to earlier generations sharing may well have been good in more or less uncomplicated ways, the rhetorical revolution works to dissuade us from asking whether the ethics associated with the earlier terminology still apply, telling us instead that if we call it sharing, it must be good.
This is fecund ground for critics of the digital, and rightly so. Despite being called “the sharing economy”—a phrase almost as literally oxymoronic as “giant shrimp,” “living dead” or “civil war”—the companies associated with that practice have very little to do with what we have until now called “sharing.” As a rule, they operate much more like digital-sharecropping platforms such as Facebook than their promotional materials suggest, charging rent on the labor and property of individuals while the centralized providers make enormous profits on volume (often enough by offloading inherent employer costs onto workers, while making it virtually impossible for them to act as organized labor). Of course there is a continuum: some “sharing economy” phenomena are not particularly associated with the extraction of profit from a previously unexploited resource, but many others are specifically designed to do just that.
One phenomenon that has so far flown under the radar in discussions of peer-to-peer production and the sharing economy, but that demands recognition in its own right, is one for which I think an apt name would be crowdforcing. Crowdforcing, in the sense I am using it, refers to practices in which one or more persons decide for one or more others whether those others will share their resources, without the others’ consent or even, perhaps more worryingly, their knowledge. While this process has analogs and even precedents prior to the digital revolution and the widespread use of computational tools, it has positively exploded thanks to them, and in the digital age may well constitute a difference in kind as well as in degree.
Once we conceptualize it this way, crowdforcing can be found with remarkable frequency in current digital practice. Consider the following recent events:
- In a recent triumph of genetic science, a company called DeCode Genetics has created a database with the complete DNA sequences for all 320,000 citizens of Iceland. Slightly less noted in the news coverage of the story is that DeCode collected actual tissue samples from only about 10,000 people, and then used statistical techniques to extrapolate the remaining 310,000 sequences. This is not population-level data: it is the full genetic sequence of each individual. As the MIT Technology Review reported, the study raises “complex medical and ethical issues.” For example, “DeCode’s data can now identify about 2,000 people with the gene mutation across Iceland’s population, and [DeCode founder and CEO Kári] Stefánsson said that the company has been in negotiations with health authorities about whether to alert them.” Gísli Pálsson, an anthropologist at the University of Iceland, is quoted as saying that “This is beyond informed consent. People are not even in the studies, they haven’t submitted any consent or even a sample.” While there are unique aspects of Iceland’s population that make it particularly useful for a study like this, scientists have no doubt that “This technique can be applied to any population,” according to Myles Axton, chief editor of Nature Genetics, which published some of DeCode’s findings. And while news coverage of the story has dwelt on the complex medical-ethics issues of informing people who may not want to know of their risk for certain genetic diseases, this reasoning can and must be applied much more widely: in general, thanks to big-data analytics, when I give data to a company with my overt consent, I am often sharing a great deal of data about others to whom I am connected, without their knowledge, let alone any kind of consent. We can see this in the US on popular sites like 23andMe and Ancestry.com, whose explicit goals often include drawing specific inferences about people who are not using the product.
The US itself is in the process of building a genetic database that may mimic the inferential capacities of the Icelandic one. Genetic information is one of the better-regulated parts of the data sciences (though regulation even in this domain remains inadequate), and yet even that regulatory regime seems to have an impoverished vision of what is possible with such data; where do we look for constraints on this sort of data analysis in general?
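The inferential step at issue can be illustrated with the most elementary case, ordinary Mendelian inheritance. What follows is a deliberately simplified sketch of my own; DeCode’s actual method imputes across thousands of linked markers using Iceland’s genealogical records, and the single-locus genotypes here are purely illustrative:

```python
from fractions import Fraction

def offspring_genotype_probs(parent1, parent2):
    """Distribution over a child's genotype at one locus, inferred from
    the parents' genotypes alone: the child draws one allele from each
    parent, uniformly at random (a basic Mendelian cross)."""
    probs = {}
    for allele1 in parent1:
        for allele2 in parent2:
            genotype = "".join(sorted(allele1 + allele2))
            probs[genotype] = probs.get(genotype, Fraction(0)) + Fraction(1, 4)
    return probs

# A person who never gave a sample: if both parents are known carriers
# ("Aa"), their child carries two copies ("aa") with probability 1/4 --
# knowledge produced entirely without the child's participation.
print(offspring_genotype_probs("Aa", "Aa"))
```

The point of the sketch is only that data volunteered by some individuals mathematically constrains what can be known about their non-participating relatives; real imputation pipelines sharpen these probabilities dramatically by combining many markers at once.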
- Sharing pictures of your minor children on Facebook is already an interesting enough issue. Obviously, you have the parental right to decide whether or not to post photos of your minor children, but parents likely do not understand all the ramifications of such sharing for themselves, let alone for their children, not least since none of us know what Facebook and the data it harvests will be like in 10 or 20 years. Yet an even more pressing issue occurs when people share pictures on Facebook and elsewhere of other people’s minor children, without the consent or even knowledge of those parents. Facebook makes it easy to tag photos with the names of people who don’t belong to the service. The refrain we hear ad nauseam—“if you’re concerned about Facebook, don’t use it”—is false in many ways, among which the most critical may be that those most concerned about Facebook, who have therefore chosen not to use it, may thereby have virtually no control over not just the “shadow profile” Facebook reportedly maintains for everyone in the countries where it operates, but even the ordinary sharing data that can be used by data brokers and other social-analytics providers. Thus while you may make a positive, overt decision not to share data about yourself, let alone about the minor children of whom you have legal guardianship, others can and routinely do decide that you are going to anyway.
- So-called “sharing economy” companies like Uber, Lyft, and, particularly in this case, AirBnB insist on drawing focus to the populations that look most sympathetic from the companies’ point of view: first, the individual service providers (individuals who earn extra money by occasionally driving a car for Uber, or who rent out their apartments when out of the city for a few weeks), and second, the individual users (those buying Uber car rides or renting AirBnB properties). They work hard to draw attention away from themselves as companies (except when they are hoping to attract investor attention), and even more strongly away from their impact on the parts of the social world affected by their services—insofar as these are mentioned at all, it is typically with derisive words like “incumbent,” in contexts where we are told that the world would beyond question be a better place if these “incumbents” would just go away. One does not need to look hard on peers.org, an astroturf industry lobbying group disguised as a grassroots quasi-socialist advocacy organization, to find paean after paean to the benefits brought to individuals by these giant corporate middlemen. (More objectively, much of the “sharing economy” looks like a particularly broad instance of the time-honored rightist practice of privatizing profits while “socializing” losses.) One has to look much harder—in fact, one will look in vain—to find accounts of the neighbors of AirBnB hosts whose zoned-residential properties have turned into unregulated temporary lodging facilities, with all of the attendant problems these bring.
One has to look even harder to find thoughtful economic analyses of the longer-term effects of residential properties routinely taking in additional short-term income: it does not take much more than common sense to realize that the additional income flowing in will eventually be capitalized into the value of the properties themselves, eventually pricing current occupants out of the homes in which they live. The impact of these changes may be most pronounced on those who have played no role whatsoever in the decision to rent out units on AirBnB—in fact, in the case of condominiums, the community may have explicitly ruled out such uses for exactly this reason, and yet, absent intensive policing by the condo board, may find its explicit rules and contracts being violated in any number of ways. And condo boards are among the entities with the most power to resist these involuntary “innovations” on established guidelines; others have no idea they are happening. As Tom Slee’s persistent research has shown, AirBnB has a disturbing relationship with what appear to be a variety of secretive corporate bodies that have essentially turned zoned-residential properties into full-time commercial units, which not only violates laws that were enacted for good reason, but also promises to radically alter property values for residents using the properties as the law currently permits.
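The common-sense reasoning above is just the standard income-capitalization rule used in real-estate valuation. Here is a rough sketch; the income figure and the 5% capitalization rate are assumptions chosen for illustration, not data about any actual market:

```python
def capitalized_value_increase(extra_annual_income, cap_rate):
    """Estimate how much a recurring income stream adds to an asset's
    market value: value = annual income / capitalization rate."""
    return extra_annual_income / cap_rate

# An assumed extra $12,000/year in short-term rental income, capitalized
# at an assumed 5% rate, adds roughly $240,000 to a unit's price -- and
# comparable units nearby get repriced accordingly, whether or not their
# occupants ever listed anything on AirBnB.
print(capitalized_value_increase(12_000, 0.05))
```

The second-order effect is the crowdforcing one: the repricing applies to comparable properties generally, so the cost lands on neighbors who never took part in the transaction.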
- Related to the sharing of genetic information is the (currently) much broader category of information now grouped under the heading of the quantified self. In the largest sense we could include in this category any kind of GTD app, to-do list, calorie and diet tracker, health monitor, exercise and fitness tracker, monitoring devices such as the Fitbit and even glucose monitors, and many more to come. On the one hand, there are certainly mild crowdforcing effects of collecting this data on oneself, just as there are crowdforcing effects of SAT prep programs (if they work as advertised, which is debatable) and of steroid use in sports. But when these data are collected and aggregated—whether by companies like Fitbit or by user communities—they start to affect all of us in ways that only seem minor today. When these data are accessed by brokers serving insurers, or by insurers themselves, they provide ample opportunities for those of us who choose not to use these tools at all to be directly affected by other people’s use of them: from health insurers starting to charge us “premiums” (i.e., denying us “discounts”) if we refuse to give them access to our own data, to inferences made about us by running the data they do have through big-data correlations with the richer data provided by QS participants, and so on.
- The concerns raised by practitioners and scholars like Frank Pasquale, Cathy O’Neil, Kate Crawford, Danielle Citron, Julia Angwin and others about the uses of so-called “big data analytics” resonate at least to some extent with the issue of crowdforcing. Big-data providers create statistical aggregates of human behavior by collecting what are thought to be representative samples of various populations. They cannot create these aggregates without the submission of data from the representative group, and those willing to submit their data may not themselves understand the purposes to which it is being put. A simple example is found in the “wellness programs” increasingly attached to health insurance. In the most common formulation, insurers offer a “discount” to customers willing to submit to a variety of tests and data-collection routines, and to participate in certain forms of proactive health activity (such as using a gym). Especially in the first two cases, it looks to the customer as if the insurer is offering an incentive to take tests that may find markers of diseases that are easily treated in their early stages and much less easily treated if caught later. Regardless of whether these techniques work, which is debatable, the insurers have at least one other motive in pushing these programs: to collect data on their customers and create statistical aggregates that affect not just those who submit to the testing, but their entire base of insured customers, and even those they do not currently insure.
The differential pricing model that insurers call “discounts” or sometimes “bonuses” (but rarely “penalties,” which, speaking financially, they also are: it is another characteristic of the marketing rhetoric swamping everything in the digital world that literally the same practice can appear noxious if called a “penalty” but welcome if called a “discount”) seems entirely at odds with the central premise of insurance, which is that risk is distributed across large groups regardless of individual risk profiles. Even in the age of Obamacare, which discourages insurers from discriminating based on pre-existing conditions, and in which large employers are required by law to provide insurance at the same cost to all their employees, these “discounts” allow insurers to do just the opposite, and suggest a wide range of follow-on practices that will discriminate with ever finer granularity. If these companies’ customers understood that the “bonuses” exist at least as much to craft discriminatory data profiles as to promote customer wellness, perhaps they would resist these “incentives” more strenuously. As it is, not only do those being crowdforced by these policies have very little access to information that makes the purpose of this data collection clear, but those contributing to it have very little idea of what they are doing to other people or even to themselves. And this, of course, is one of the deep problems with social media in general, most notably exemplified by the data brokerage industry.
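The rhetorical sleight of hand is easy to make precise with trivial arithmetic. In this schematic sketch (the premium amounts are made up), a “discount” subtracted from an inflated base price and a “penalty” added to a lower one produce literally identical bills:

```python
def premium_with_discount(base, discount):
    """Framing A: quote everyone the higher base; reward data-sharers."""
    return base - discount

def premium_with_penalty(base, penalty):
    """Framing B: quote everyone the lower base; surcharge refusers."""
    return base + penalty

# Made-up monthly premiums: the two framings yield the same pair of
# prices -- data-sharers pay 400, refusers pay 450 -- yet only the
# second framing is ever likely to be called discriminatory.
framing_a = (premium_with_discount(450, 50), 450)  # (sharer, refuser)
framing_b = (400, premium_with_penalty(400, 50))   # (sharer, refuser)
assert framing_a == framing_b == (400, 450)
```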
- As a final example, consider the proposed and so-far unsuccessful launch of Google Glass. One of the most maddening parts of the debate over Google Glass was that proponents would focus almost exclusively on the benefits Glass might afford them, while dismissing what critics kept talking about: the impact Glass has on others—on those who choose not to use Glass. Critics said: Glass puts all public (and much private) space under constant recorded surveillance, both by the individuals wearing Glass and, perhaps even more worryingly, by Google itself when it stores, even temporarily, that data on its servers. What was so frustrating about this debate was the refusal of Google and its supporters to see that they were arguing, in effect, that the rights of people who choose not to use Glass were up for the taking by those who did want to use it; that Google’s explicit insistence that I must allow my children to be recorded on video, and Google to store that video, simply by dint of their being alive wherever a Glass wearer happens to be, was nothing short of remarkable. It was not hard to find Glassholes overtly insisting that their rights (and therefore Google’s rights) trump those of everyone else. This controversy was a particularly strong demonstration that the “if you don’t like it, don’t use it” mantra is false. I think we have to see Google’s failure (so far) to create a market for Glass as a real success of mass public engagement in rejecting the ability of a huge for-profit corporation to dictate terms to everyone. It is even a success of the free market, in the sense that Google’s market research clearly showed that the product would not meet significant consumer demand. But it is partly the visibility of Glass that allowed this to happen; too much of what I discuss here is not immediately visible in the way Glass was.
To some extent, crowdforcing is a variation on the relatively wide class of phenomena economists refer to as externalities: costs or benefits that accrue to a party who played no direct role in the transaction that produced them. These are usually put into two main classes. “Negative” externalities occur when costs fall on uninvolved parties; the classic examples usually have to do with environmental pollution by for-profit companies, whose cleanup and health costs are “paid” by individuals who may have no connection whatsoever with the company. “Positive” externalities occur when someone else’s actions, in which I am uninvolved, benefit me regardless: the simplest textbook example is something like neighbors improving their houses, which may raise property values even for homeowners who have done no work on their own properties at all. Externalities clearly occur with particular frequency in times of technological and market change; there were no doubt quite a few people who would have preferred to use horses for transportation even after so many people were driving cars that horses could no longer be allowed on the roadways. While these kinds of externalities may be in some way homologous with the crowdforcing examples that are economic in focus (such as the impact of AirBnB and Uber on the economic conditions of the properties they “share”), they do not capture so well the data-centric aspects of much current sharing. Collecting blood samples from tested individuals certainly allowed researchers in the past to determine normal and abnormal levels of the various components of human blood, but it did not make it possible to infer much (if anything) about my blood unless I myself was tested.
In fact, in the US, Fourth Amendment protections against unreasonable search and seizure extend to the collection of such personal data because it is considered unique to each human body: that is, it is currently impermissible to collect a DNA sample from a person unless a proper warrant has been issued. How do we square this right with the ability to infer everyone’s DNA without most of us ever submitting to collection?
Crowdforcing effects also overlap with phenomena researchers refer to by names like “neighborhood effects” and “social contagion.” In each of these, what some people do ends up affecting what many other people do, in ways that go well beyond the ordinary majoritarian aspects of democratic culture. That is, we know that only one candidate will win an election, and that therefore those who did not vote for that candidate will be (temporarily) forced to acknowledge the political rule of people with whom they disagree. But this happens in the open, with the knowledge and even the formal consent of all those involved, even if that consent is not always completely understood.
Externalities produced by economic transactions often look something like crowdforcing. For example, when people with means routinely hire tutors and coaches to prepare their children for standardized tests, they skew the results even further in their favor, affecting those without means in ways the latter frequently do not understand and may not even be aware of. This can happen in all sorts of markets, even cultural ones (fashion, beauty, privilege, skills, experience). But it is only the advent of society-wide digital data collection and analysis that makes it so easy to sell your neighbors out without their knowledge and consent, and to have what is sold be so central to their lifeworld.
Dealing with this problem requires, first of all, conceptualizing it as a problem. That is all I have tried to do here: suggest the shape of a problem that, while not entirely new, comes into stark relief and becomes widespread thanks to the availability of exactly the tools routinely promoted as “crowdsourcing,” “collective intelligence,” and “networks.” As always, this is by no means to deny the many positive effects these tools and methods can have; it is to suggest that we are currently overly committed to finding those positive effects and not to exploring or dwelling on the negative ones, as profound as they may be. As the examples presented here show, the potential for crowdforcing effects on the whole population is massive, disturbing, and only increasing in scope.
In a time when so much cultural energy is devoted to the self (maximizing, promoting, decorating, and sharing it), it has become hard to think with anything like the required scrutiny about how our actions affect others. From an ethical perspective, this is typically the most important question we can ask: arguably it is the foundation of ethics itself. Despite the rhetoric of sharing, we are doing our best to turn away from examining how our actions impact others; our world could do with a lot more, rather than less, of that kind of thinking.