On June 11th, The Washington Post published a story about an AI engineer at Google named Blake Lemoine. Lemoine worked on a language AI system at Google called LaMDA. Convinced that the AI had become self-aware, he reported his suspicions to his supervisors. Google's leadership ultimately rejected his claim that the program had achieved sentience and placed Lemoine on administrative leave. Then, in defiance of the leave, Lemoine went public with his findings.
Lemoine based his conclusions on long conversations he'd had with the program on a variety of topics. Nitasha Tiku from The Washington Post reports: "As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics." Impressive, but does it mean the program is self-aware?
Language AI
Language AI programs like LaMDA allow a human user to pose questions to them. The program then rapidly sifts through vast amounts of text drawn from the internet and assembles a response based on millions of bits of data and recorded conversations between human beings. From these observations, the program uses statistical analysis to produce what it calculates to be the most appropriate answer to the question.
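To make that statistical idea concrete, here is a deliberately tiny sketch in Python. It is not how LaMDA works internally (LaMDA is a large neural network whose code is not public); it only illustrates the general principle of predicting the most likely next word from patterns observed in text.

```python
from collections import Counter, defaultdict

# Toy illustration of statistical next-word prediction.
# Real systems like LaMDA use neural networks trained on vast corpora;
# this bigram counter only shows the underlying idea: pick the word
# that most often followed the previous one in the observed text.

training_text = (
    "i am a person . i am aware of my existence . "
    "i am a program . i think about the world ."
)

# Count how often each word follows each other word.
following = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, if any was observed."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("am"))  # -> 'a' (the most frequent continuation seen above)
```

Scale that basic principle up by billions of parameters and trillions of words of training text, and you get something that can string together far more convincing replies.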
Programs like LaMDA aren't, in fact, all that new. Earlier, cruder forms of similar programming exist in programs like Cleverbot. If you pose some questions to Cleverbot, you'll often get answers that aren't, ahem, terribly clever. Sometimes you'll just get nonsense.
Language AI programming of this kind has grown more sophisticated over time. Predictive text has long since been folded into SMS messaging on smartphones, as well as the automatic suggestions in Gmail and similar email services. LaMDA is at the forefront of this technology, having refined the programming to such a degree that a person can carry on long, coherent exchanges with it. Those exchanges were uncanny enough to convince at least one Google engineer that the program had achieved a kind of personhood.
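To give a feel for how far this scaling has gone, the short sketch below queries a small, publicly available text-generation model (GPT-2, via the Hugging Face transformers library). LaMDA itself is proprietary and far larger, so this is only a stand-in for the kind of free-form exchange described above, not the real thing.

```python
from transformers import pipeline

# LaMDA is not publicly available, so as a stand-in this uses GPT-2,
# a much smaller open model. The point is only that modern predictive-text
# models generate whole free-form continuations, not just single-word hints.
generator = pipeline("text-generation", model="gpt2")

prompt = "Q: Do you consider yourself a person?\nA:"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```

A small model like this will ramble; the striking thing about LaMDA is that the same basic approach, scaled up, produces replies coherent enough to be mistaken for a mind.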
So, Is It Sentient?
It may be impossible to tell for sure, because the question "is the machine self-aware?" first requires one to determine what self-awareness is. Philosophers, scientists, and theologians have been debating what it means for humans to be self-aware for hundreds of years, never mind computer programs.
So, let's table the broader philosophical questions and consider some concrete facts about AI programs themselves. In an aptly titled CNN article, "No, Google's AI Is Not Sentient," Rachel Metz summarizes the case against self-awareness by breaking down how the AI actually functions.