There may be no more pernicious and dishonest doctrine among Silicon Valley’s avatars than the one they call “permissionless innovation.” The phrase encapsulates the view that entrepreneurs and “innovators” are the lifeblood of society, and must be allowed to push forward without needing to ask for “permission” from government, for the good of society. The main advocates for the practice are found at the Koch-directed and -funded libertarian Mercatus Center and its tech-specific Technology Liberation Front (TLF), particularly Senior Research Fellow Adam Thierer; it’s also a phrase one hears occasionally from apparently politically neutral “internet freedom” organizations, as if it were not tied directly to market fundamentalism.
Whether or not “innovators” would be better off in achieving their own goals without needing to ask for “permission,” the fact is that another name for “permission” as it is used in this instance is “democratic governance.” Whether or not it is best for business to have democratic government looking over its shoulder, such oversight is absolutely, indubitably necessary if democratic governance is to mean anything. That is why libertarians had to come up with a new term for what they want; “laws and regulations don’t apply to us” might tip off the plebes to what is really going on.
Associated with certain aspects of “open internet” rhetoric by, among others, “father of the internet” (and “Google’s Chief Internet Evangelist,” in case you wonder where these positions are coming from) Vint Cerf—yet another site where we should be paying much more careful attention to the deployment of “open”—“permissionless innovation” has gained most traction among far-right market fundamentalists like the TLF.
In comments they submitted to the FAA’s proposed rules for “test sites” for the integration of commercial drones into domestic airspace, the TLF folks wrote:
As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand.
Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.
Note how cleverly the technical nature of the “open platform” of the internet—“open” in that case meaning that the protocols are not proprietary, which entails very little or nothing about regulatory status—merges into the inability or inadvisability of government to regulate it. This is cyberlibertarian rhetoric in its most pointed function: using language about the nature of technological change that is hard to disagree with, so as to garner support for extreme political and economic positions we may not even realize we are going along with. “Open Internet, yes!” “Keep your paternalistic ‘permission’ off our backs—for democracy!” Or not.
The market fundamentalists of TLF and Silicon Valley would love you to believe that “permissionless innovation” is somehow organic to “the internet,” but in fact it is an experiment we conducted for a long time in the US, and the experiment proved that it does not work. From the EPA to the FDA to OSHA, nearly every Federal (and State) regulatory agency exists because of significant, usually deadly failures of industry to restrain itself. We don’t need to look very far to see how destructive unregulated industry can be: just think of the “Superfund” act of 1980, passed after more than a decade of environmental protest had proved ineffective in getting industry not simply to stop polluting, but to stop contaminating sites so thoroughly that they directly damaged agriculture and human health (including killing people), to say nothing of more traditional environmental concerns—practices for which “permissionless” industry did not merely shirk responsibility, but which it actively hid. Consider OSHA, created only in 1970, after not merely decades but centuries of outrageous employment practices; it took 60 years after the Triangle Shirtwaist Fire for the government finally to act to limit the number of workers directly killed by their employers. When OSHA was created in 1970, 14,000 workers were killed on the job each year in the US; despite the workforce more than doubling since then, in 2009 only 4,400 were killed—which is still, by the way, awful. And industry accepted and accepts OSHA standards kicking and screaming every step of the way.
“Permissionless innovation” suggests that the correct order for dramatic technological changes should be “first harm, then fix.” This is of course the opposite of the way important regulatory bodies like the FDA—let alone doctors themselves following the Hippocratic Oath—approach their business: “first, do no harm.” The “permissionless innovation” folks would have you believe that in the rare, rare case in which one of their technologies harms somebody, they will be the first to step in and fix things up, maybe even making those they’ve harmed whole.
Yet we have approximately zero examples of market fundamentalists stepping in to say that “hey, we asked for ‘permissionless innovation,’ so since we fucked up, it’s our responsibility to fix things up.” On the contrary, they are the same people who then argue that “people make their own choices” when they “choose” to use technology whose consequences they cannot actually fathom, and that those people are therefore owed nothing. So what they really want is no government beforehand, and no government afterwards—more accurately, no government at all.
It’s tempting to argue that digital technology is different from drugs or food, but that argument flies in the face of all sorts of facts. Silicon Valley is trying to put its technology inside and outside of every part of the world, from the “Internet of Things” to drones to Fitbit to iPhone location services and on and on. These technologies are meant to infiltrate every aspect of our lives—what is needed is more, not less, regulation, and more creative ways to regulate them, since they by design run across many different existing spheres of law and regulation.
This is no idle speculation. Even today, we have more than enough examples of what “permissionless innovation” can do. We need look back no further than January of this year, when the crafty market fundamentalists at the deliberately-named Freedom Industries caused a huge chemical spill, polluting water throughout West Virginia. Freedom Industries deliberately bypassed and found loopholes in existing regulations so as to produce and stockpile chemicals whose impact on human health is unknown. And were the good “permissionless innovation” folks at Freedom standing up, taking responsibility for the harm they’d caused? Guess again.
Deliberately getting around the EPA is one thing, but technological innovations closer to the digital world follow the same pattern today, with “innovators” denying the very responsibility that permissionless innovation would suggest they must take. We know that most soap products today contain the chemical triclosan, an antibacterial substance that slipped into wide use thanks to imperfect regulation and that does not actually work as advertised. Instead, the FDA believes it harms humans and the environment, actually producing drug-resistant bacteria, a huge concern in an era of diminishing antibiotic effectiveness, and the chemical was banned by the EU in 2010. Despite this, US producers continue to sell the products because they appeal to consumers’ misguided (and advertising-fueled) belief that “killing bacteria” must be good.
In a similar vein, the inclusion of so-called “microbeads” in cosmetics, soaps and toothpaste follows exactly the desired pattern of permissionless innovation. The new technology, which serves as near as I can determine only marketing purposes (it makes part-liquid substances sparkle), was not covered by existing regulation, and thus has become nearly ubiquitous in a range of products. But it turns out that the beads, because they are so small, spread throughout the environment, escape the effects of water treatment plants that aren’t prepared for them, and then concentrate in marine life, including fish that humans eat. Among the many reasons that is bad, the beads “tend to absorb pollutants, such as PCBs, pesticides and motor oil,” likely killing fish and adding to the toxic load of humans who eat fish.
Some companies—including L’Oreal, Procter & Gamble, the Body Shop and Johnson & Johnson—agreed to phase out the microbeads when presented by researchers with evidence of the damage they cause. But others haven’t, and just recently the State of Illinois finally passed legislation to outlaw them altogether, since the Great Lakes, North America’s largest bodies of freshwater, have been found to be thoroughly contaminated with them.
So we go from an apparently harmless product—but one, we note, that served no important function in health or welfare—to an inadvertent and potentially seriously damaging technology that now we have to try to unwind. Scientists are concerned that the microbeads already in the Great Lakes are causing significant damage, so the voluntary cessation is great, but doesn’t solve the problem that’s already been caused.
Cosmetic and pharmaceutical manufacturers are already familiar with regulatory bodies, and so it is not all that surprising that some of them have voluntarily agreed to curb their practices—after the harm has been done. Silicon Valley companies have so far demonstrated very little of the same deference—on the contrary, they continue business practices even after regulators directly tell them that what they are doing violates the law.
“Permissionless innovation” is a license to harm; it is a demand that government not intrude in exactly the area that government is meant for—protecting the general welfare of citizens. It is a recipe for disaster, and I have no hesitation whatsoever about saying that, in the battle between human health and social welfare vs. the for-profit interests of “innovators,” society is well-served by erring on the side of caution. As quite a few of us, including Astra Taylor in her recent The People’s Platform, have started to say, the proliferation of digital technology into every sphere of human life suggests we need more and more careful regulation, not less—unless we want to learn what the digital equivalent of Love Canal might be.