The (Future) Automation of Labor, and Some Notes on ‘Mind,’ ‘Intelligence,’ and the Google Singularity

(Modified version of a comment on Dale Carrico’s Amor Mundi blog, in response to his excellent “Krugman Flirts with Robot Cultism”–also see the slightly different version of Carrico’s post on the World Future Society site, “Krugman Flirts with Futurism,” both of which respond to Paul Krugman’s “Is Growth Over?” and “Robots and Robber Barons”):

I write in strong sympathy with much of what Carrico says in his posts, and I share his outrage and umbrage at what he calls Robot Cultism: transhumanism, singularitarianism, and the like.

But I am not sure Krugman is engaged in the same kind of singularitarian fantasizing, and I want to comment on one bit of language used by Krugman that Carrico partly follows and that the Singularitarians consistently employ–language I believe needs to be avoided. Carrico writes:

Very regularly, these adherents of AI have often spoken of “intelligence” in ways that radically reduce the multiple dimensions and expressions of intelligence as it actually plays out in our everyday usage of the term, and often they seem to disparage and fear the vulnerability, error-proneness, emotional richness of the actually incarnated intelligence materialized in biological brains and in historical struggles. It is one thing to be a materialist about mind (I am one) and hence concede that other materializations than organismic brains might give rise in principle to phenomena sufficiently like consciousness to merit the application of the term, but it is altogether another thing to imply that there is any necessity about this, that there actually are any artifacts in the world here and now that exhibit anything near enough to warrant the term without doing great violence to it and those who merit its assignment, or to suggest we know enough in declaring mind to be material to be able to engineer one any time soon, if ever, given how much that is fundamental to thought that we simply do not yet understand.

Though I believe Carrico is trying to avoid it, in this paragraph I still read some uses of the term “intelligence” as nearly equivalent to “mind,” and specifically to “human mind.”

The mind–the human mind, but also the other forms of mind we experience, especially as seen in animals–does many more things than exhibit “intelligence.” This is the thing Kurzweil is radically unable or unwilling to see, in part due to his incredible ideological rationalism (I try to demonstrate the deep historical and conceptual connections between rationalism and computer mania in my book).

The use of the term “intelligence” in the fields of AI/Cognitive Science as coterminous with “mind” has always been a red herring. The problems with AI have never been about intelligence: it is obviously the case that machines have become much more intelligent than we are, if we define “intelligence” in the most usual ways–the ability to do mathematics, to access specific pieces of information, or to process complex logical constructions. But they do not have minds–or at least not human minds, or anything much like them. We don’t even have a good, total description of what “mind” is, although both philosophy and some forms of Buddhist thought have good approximations available. Despite singularitarian insistence, we certainly don’t know how to describe “mind” outside of, or separately from, our bodies, as recent work like Anthony Chemero’s Radical Embodied Cognitive Science shows so thoroughly. There is a radical, deeply unscientific Cartesianism in singularitarians: they believe mind is special stuff, different from body, despite their apparent overt commitment to a fully materialistic, scientific conception of the world. The only way we know to “create” minds is to create new human beings, meaning their bodies.

[Image: singularity. Illustration from IEEE Spectrum]

I know Carrico will be sympathetic with me when I assert that in too many fundamental ways, mind is body, and that there is no point in discussing how either to create minds in things that are not bodies, or to move our minds out of our bodies: to move our minds out of our bodies, on a scientific account, means moving our bodies out of our bodies, which is as incoherent as it gets.

All this said, by focusing on the red herring of intelligence I think Carrico discounts something economically accurate in Krugman’s account. There are very few tasks, including cognitive tasks, that robots and algorithms are not currently taking over. The idea that capital “cares” enough to make sure there will be enough work remaining for human beings to do is as ludicrous as it sounds; Marx saw that long ago. Capital will burn itself (and us) into the ground, because all it “wants” (capital being another “artificial intelligence,” a cognitive power without a human body) is to circulate faster and faster. But that “intelligence” has very little in common with “mind.” As Krugman says in his example of speech recognition:

Speech recognition is still imperfect, but vastly better than it was and improving rapidly, not because we’ve managed to emulate human understanding but because we’ve found data-intensive ways of interpreting speech in a very non-human way.

Carrico responds:

I must protest the glib suggestion that one can still describe with the very human word “interpretation” what Krugman is actually referring to when he speaks of “data-intensive ways of interpreting speech in a very non-human way.” This conflation of non-human data sifting with human interpretation looks to me not merely as bad as the straightforward falsehood of proposing, as so many AI dead-enders do and as Krugman seems to deny, that we have actually “emulated understanding” in code, but frankly the claim about machine “interpretation” seems to me actually just another form of making exactly the same proposal.

This is the part I don’t understand. The examples to consider are not Siri or autocorrect–and what Carrico calls their “enraging ineptitudes”–but speech recognition software like Dragon Dictation and many others. I don’t see any way to deny the startling improvement in these products over the past two decades: where once they required intense personal training for each individual user, and even then were incredibly error-prone, today’s products can handle a remarkable range of accents and dialectal variation and respond near-perfectly before training, and very nearly perfectly afterward.
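To make concrete what Krugman’s “data-intensive, non-human” interpretation looks like, here is a deliberately toy sketch in Python. Everything in it–the miniature corpus, the candidate transcriptions–is invented for illustration, and it is emphatically not Dragon’s actual algorithm (real products combine acoustic models with language models trained on enormous corpora). But the principle it shows is the relevant one: the program “interprets” by choosing whichever candidate contains the statistically most frequent word pairs. Counting, not understanding:

```python
# Toy sketch of "data-intensive, non-human" speech interpretation:
# choose among acoustically similar candidate transcriptions purely by
# word-pair statistics gathered from a corpus. No acoustics, no
# semantics, no "understanding" -- just counting. All data is invented.
from collections import Counter

# Hypothetical training corpus; real systems use billions of words.
corpus = ("please recognize speech it is easy to recognize speech "
          "it is easy to recognize speech every day").split()

# Count adjacent word pairs (bigrams) seen in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def score(sentence: str) -> float:
    """Average bigram frequency: a crude statistical 'plausibility'."""
    words = sentence.split()
    pairs = list(zip(words, words[1:]))
    return sum(bigrams[p] for p in pairs) / len(pairs)

# The classic ASR confusion pair -- nearly identical as sound:
candidates = ["it is easy to recognize speech",
              "it is easy to wreck a nice beach"]

print(max(candidates, key=score))  # -> "it is easy to recognize speech"
```

Scale the corpus up by many orders of magnitude and put an acoustic front end before it, and you get something like the improvement Krugman describes–without anything resembling understanding ever entering the picture.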

The point is not that Dragon Dictation displays anything like a human mind, or that it is part of an embodied mind that anyone could say deserves “human rights,” and so on. Nor is the point–and here is where I don’t quite follow Carrico’s reasoning–that what Krugman calls “intelligent robots” will have minds. The point is that an increasing number of algorithmic and robotic tools can replace labor functions piecemeal: from assembly robots in manufacturing plants, to algorithmic customer voice-response systems in which humans play almost no role, to high-frequency trading systems that have replaced a large percentage of human traders, to robots that clean the floor. None of these resembles “mind” or “consciousness” at all, but in domain-specific ways their “intelligence” for specific tasks, or their utility for those tasks, already meets or even exceeds what humans can do. I don’t see this kind of economic extrapolation as having much in common with singularitarian thinking; no radical transformation in human beings, human minds, or even machines is required to see these changes happening. This seems to be what Carrico means in a comment on the blog post, where he mentions the “socioeconomic dislocations of real-world automation”: I think this is exactly the phenomenon that Krugman means to be highlighting.

I think there is every reason to believe that machines can and will replace everything we do, or nearly everything, unless we bring technological “progress” under democratic control.

Which brings me to a concern I repeatedly see in the comments on Carrico’s blog: the claim that the singularity movement is not important. I very much agree with Carrico’s “Ten Reasons to Take Transhumanists Seriously.” In the past, my response to that claim (a reason I don’t quite see mentioned in that post) has been that, in my experience, many of the most advanced technologists in corporate America for some reason adhere to this deeply unscientific piece of dogma, and pursue unbridled technological progress and the automation of everything because they “know” (following Kurzweil) that it is leading to transcendence–instead of believing the evidence of their own eyes, that it is leading someplace very dark indeed, especially when they reject out of hand–as nearly all Googlers do–the idea that anybody but technologists should decide where technology goes.

And that was before Google hired Kurzweil–an avowed panpsychist-religious nutcase–as its “Director of Engineering.” Some have suggested that Google hiring Kurzweil “kills the Singularity.” I worry, on the contrary, that it displays the deep investment in the Singularity that informs much of Silicon Valley culture, and Google in particular. The internet may have killed Scientology and birthed something much more ubiquitous and dangerous in its wake. It is so hard to get committed technologists even to consider the mind/intelligence distinction I made at the beginning of this comment that the work to undo the terrible, accelerating direction in which they are pointing us is truly daunting.
