Thanks in part to ongoing revelations about Facebook, there is today a louder discussion than there has been for a while about the need for deep thinking about ethics in the fields of engineering, computer science, and the commercial businesses built out of them. In the Boston Globe, Yonatan Zunger wrote about an “ethics crisis” in computer science. In The New York Times, Natasha Singer wrote about “tech’s ethical ‘dark side.’”
Chris Gilliard wrote an excellent article in the April 9, 2018 Chronicle of Higher Education focusing specifically on education technology, titled “How Ed Tech is Exploiting Students.” Since students are particularly affected by academic programs like computer science and electrical engineering, one might imagine and hope that teachers of these subjects would be especially sensitive to ethical concerns. (Full disclosure: I consider Chris a good friend; he and I have collaborated on work in the past and intend to do so in the future, and I read an early draft of his Chronicle piece and provided comments to him.)
Gilliard’s concerns, expressed repeatedly in the article, have to do with 1) what “informed consent” means in the context of education technology; 2) the fact that participating in certain technology projects entails that students are, often unwittingly, contributing their labor to projects that benefit someone else—that is, they are working for free; and 3) the fact that the privacy implications of many ed-tech projects are not at all clear to the students:
Predictive analytics, plagiarism-detection software, facial-recognition technology, chatbots — all the things we talk about lately when we talk about ed tech — are built, maintained, and improved by extracting the work of the people who use the technology: the students. In many cases, student labor is rendered invisible (and uncompensated), and student consent is not taken into account. In other words, students often provide the raw material by which ed tech is developed, improved, and instituted, but their agency is for the most part not an issue.
Gilliard gives a couple of examples of ed-tech projects that concern him along these lines. One of them is a project by Prof. Ashok Goel of the Georgia Institute of Technology.
Ashok K. Goel, a professor at the Georgia Institute of Technology, used IBM’s “Watson” technology to test a chatbot teaching assistant on his students for a semester. He told them to email “Jill” with any questions but did not tell them that Jill was a bot.
Gilliard summarizes his concerns about this and other projects as focusing on:
how we think about labor and how we think about consent. Students must be given the choice to participate, and must be fully informed that they are part of an experiment or that their work will be used to improve corporate products.
In an April 11 letter to the Chronicle, Goel objected to being included in Gilliard’s article. Yet rather than rebutting Gilliard’s critique, Goel’s response affirms both its substance and its spirit. In other words, despite claiming to honor the ethical concerns Gilliard raises, Goel seems not to understand them, and to use his lack of understanding as a rebuttal. This reflects, I think, the incredibly thin understanding of ethics that permeates the world of digital technology, especially, though by no means only, in education technology.
Here are the substantive parts of Goel’s response:
In this project, we collect questions posed by students and answers given by human teaching assistants on the discussion forum of an online course in artificial intelligence. We use this data exclusively for partially automating the task of answering questions in subsequent offerings of the course both to reduce teaching load and to provide prompt answers to student questions anytime anywhere. We deliberately elected not to inform the students in advance the first time we deployed Jill Watson as a virtual teaching assistant because she is also an experiment in constructing human-level AI and we wanted to determine if actual students could detect Jill’s true identity in a live setting. (They could not!)
In subsequent offerings of the AI class over the last two years, we have informed the students at the start of the course that one or more of the teaching assistants are a reincarnation of Jill operating under a pseudonym and revealed the identity of the virtual teaching assistant(s) at the end of the course. The response of the several hundred students who have interacted with Jill’s various reincarnations over two years has been overwhelmingly positive.
In what follows I am going to assume that Goel raised all the issues he wanted to in his letter. It’s possible that he didn’t; the Chronicle maintains a tight word limit on letters. But it is clear that the issues raised in the letter are the primary ones Goel saw in Gilliard’s article and that he thinks his project raises.
In almost every way, the response affirms the claims Gilliard makes rather than refuting them. First, Gilliard’s article clearly referred to “a semester,” which can only be the first time the chatbot was used, and Goel indicates, without explanation or justification, that he “deliberately elected not to inform the students in advance” about the project during that semester. Yet that deliberate choice is exactly one of Gilliard’s points: what gives technologists the right to think that they can conduct such experiments without student consent in the first place? Goel does not tell us. That subsequent instances had consent—of a sort, as I discuss next—only reinforces the notion that they should have had consent the first time as well.
There are even deeper concerns, which happen also to be the specific ones Gilliard raises. First, what does “informed consent” mean? The notion of “informed consent” as promulgated by, for example, the Common Rule of the HHS, the best guide we have to the ethics of human experimentation in the US, insists that one can only give consent if one has the option not to give consent. This is not rocket science. Not just the Common Rule, but the 1979 Belmont Report on which the Common Rule is based, itself reflecting on the Nuremberg Trials, defines “informed consent” specifically with reference to the ability of the subject to refuse to participate. This is literally the first paragraph of the Belmont Report’s section on “informed consent”:
Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them. This opportunity is provided when adequate standards for informed consent are satisfied.
If anything, the idea of “informed consent” has grown only richer since then. Perhaps Goel allows students to take a different section of the Artificial Intelligence class if they do not want to participate in the Jill Watson experiment. Such a choice would be required for student consent to be “consent.” But his letter reads as if he does not realize that “informed consent” without choice is not consent at all. If so, this is not an isolated problem. Some have argued, rightly in my opinion, that failure to understand the meaning of “consent” is a structural problem in the world of digital technology, one that ties the behavior of software, platforms, and hardware to the sexism and misogyny of techbro culture. Even the Association for Computing Machinery (ACM, the leading professional organization for computer scientists) maintains a “Code of Ethics and Professional Conduct” that speaks directly of “respecting the privacy of others” in a way that is hard to reconcile with the Jill Watson experiment and with much else in the development of digital technology.
Further, Goel indicates that the point of the Jill Watson project is “an experiment in constructing human-level AI.” He does not make clear whether students are told that this is part of the point of the experiment. Nor does he make clear that the pursuit of what he calls “human-level AI,” which many philosophers, cognitive scientists, and other researchers, including myself, consider a misapplication of ordinary language, raises on its own significant ethical questions, the nature and extent of which certainly go far beyond what students in a course about AI can possibly have covered before the course begins, if they are covered at all. Do the students truly understand the ethical concerns raised by so-called AIs that can effectively mimic the responses of human teachers? Is their informed consent rich with discussion of the ethical considerations raised by this? Do they understand the labor implications of developing chatbots that can replace human teachers at universities? If so, Prof. Goel gives no indication of it.
The sentence about the Watson “experiment” appears to contradict another sentence in the same paragraph, where Goel writes that data generated by the Jill Watson experiment is used “exclusively for partially automating the task of answering questions in subsequent offerings of the course” (emphasis added). Perhaps the meaning of “exclusively” here is that the literal data collected to train Jill Watson is segregated into that project. But the implication of the “experiment” sentence is that whether or not that is the case, the project itself generates knowledge that Goel and his collaborators are using in their other research and even commercial endeavors. This is exactly the concern that is front and center in Gilliard’s article. When the students are fully informed about the ethical and privacy considerations raised by the technology in the course they are about to take, are they provided with a full accounting of Goel’s academic and commercial projects, with detailed explanations of how results developed in the Jill Watson project may or may not play into them? Once again, if so, Goel appears not to think such concerns needed to be mentioned in his letter.
At any rate, Goel certainly makes it sound as if the work done by students in the course helps his own research projects, whether by providing training data for the Jill Watson AI model or by providing research feedback for future models. So do the press releases Georgia Tech issues about the project. It seems quite possible that this research could lead directly or indirectly to commercial applications; it may already be leading in that direction.
Gilliard concludes his article by writing that “When we draft students into education technologies and enlist their labor without their consent or even their ability to choose, we enact a pedagogy of extraction and exploitation.” In his letter Goel entirely overlooks the question of labor, and he claims that consent merely to have a bot as a virtual TA, with no apparent alternative and with no clear discussion of the various ways students’ participation in this project might inform future research and commercial endeavors, mitigates the exploitation Gilliard writes about. This exchange only demonstrates how much work ethicists have left to do in the field of education technology (and digital technology in general), and how uninterested technologists are in listening to what ethicists have to say.
UPDATE, May 3, 5:30pm: Soon after posting this story, I was directed by my friends Evan Selinger and Audrey Watters to two papers by Goel (paper one; paper two) that indicate he had Institutional Review Board approval (IRBs are the bodies that implement the Common Rule) for the Jill Watson project, and in which he writes at greater length than he does in the Chronicle letter about some ethical implications of the project. I will update this post soon with some reflections on these papers.