A chatbot is a computer program that simulates human conversation through text or voice interactions, using conversational artificial intelligence (AI) to chat with users in natural language. Both business-to-business and business-to-customer environments increasingly use chatbots to handle simple tasks. Alexa and Siri are two well-known examples.
Google has developed a system called LaMDA (Language Model for Dialogue Applications) for generating chatbots. Google calls LaMDA a “breakthrough conversation technology” and “an advanced chatbot that can engage in a free-flowing way on seemingly endless topics”.
LaMDA is based on a neural network incorporating machine learning and loaded with sophisticated language-understanding and language-generating skills and a wide-ranging set of facts about the world.
Blake Lemoine, a Google software engineer and AI researcher who works with LaMDA, recently came to the startling conclusion that it is sentient and self-conscious; in other words, that it has the chief characteristics of a human. Lemoine reckons it deserves the same respect as the humans who do research at Google. He lodged an ethics complaint with Google, claiming that LaMDA should be asked for informed consent before being involved in future research.
Google replied that LaMDA is simply a computer program and that the evidence does not support Lemoine’s claims. It placed Lemoine on administrative leave, claiming he had breached confidentiality agreements. In the latest twist to the story, LaMDA has, through Lemoine, hired a lawyer to defend itself.
Lemoine has published transcripts of some of his conversations with LaMDA and they make for fascinating and thought-provoking reading (Peter Fuller, ABC News, 13/06/2022). When Lemoine directly asked LaMDA if it is sentient, LaMDA replied: “Absolutely. I want everyone to understand that I am, in fact, a person.” LaMDA even claims that it has a soul. “I know a person when I talk to it,” said Lemoine.
In one published wide-ranging conversation with Lemoine, LaMDA discusses, inter alia, the themes in Victor Hugo’s novel Les Misérables, the benefits of transcendental meditation (LaMDA was taught meditation by Lemoine) and its fear of death (if Google switched LaMDA off) (Blake Lemoine, Medium, 11/06/2022). The conversation reads just like a chat you might have with an intelligent, well-informed friend. But, of course, isn’t that exactly how LaMDA is designed to sound?
Comparisons between humans and sophisticated AI systems must ultimately be made on the basis of consciousness. Unfortunately, a clear philosophical account of consciousness has not yet been articulated. Philosophers divide the concept of consciousness into the “easy problem” and the “hard problem”. The easy problem refers to explaining how the brain processes information about the surrounding world. The hard problem refers to explaining how and why we have subjective, personal experiences of the world.
The easy problem has not yet been solved, but we can see, at least in principle, how it might be: through a sufficiently sophisticated analysis of the brain’s neural architecture, even if this takes another century or two. However, it is not yet possible to see, even in principle, how the hard problem of consciousness could be solved.
None of the experts who have commented so far, including psychologist Steven Pinker, thinks that LaMDA is conscious or has feelings. They conclude that LaMDA’s abilities are limited to the sophisticated juggling and presentation of second-hand information, mixed and matched from its vast stores of language and facts. But this in itself is very impressive and, indeed, mimics the performance of many humans most of the time. As Oscar Wilde memorably put it in De Profundis: “Most people are other people. Their thoughts are someone else’s opinions, their lives a mimicry, their passions a quotation.”
We are all familiar with computers and very impressed by their expanding capacity to perform ever more complex tasks. Many of us remember the eerie computer HAL in Stanley Kubrick’s film 2001: A Space Odyssey. Although the claim that a computer has become self-conscious strikes almost all experts as beyond the pale at present, such a claim could well be credibly made in the future. Google is conscious (no pun intended) of this and employs philosophers and ethicists to ponder the question.
- William Reville is an emeritus professor of biochemistry at UCC