Explained: why a senior Google engineer claimed the company’s AI-powered chatbot, LaMDA, is “sentient”

A senior Google engineer has claimed that the company’s artificial intelligence-based chatbot, Language Model for Dialogue Applications (LaMDA), has become “sentient”. The engineer, Blake Lemoine, published a blog post describing LaMDA as “a person” after having conversations with the AI bot on topics including religion, consciousness, and robotics. The claims have sparked a debate about the capabilities and limitations of AI-based chatbots and whether they can actually hold a human-like conversation.

Here’s an explainer on Google’s LaMDA, why its engineer believed it to be sentient, why he was placed on leave, and what other AI-based text bots are capable of:

What is LaMDA?

Google first announced LaMDA at its flagship I/O developer conference in 2021 as its generative language model for dialogue applications, intended to let the Assistant converse on any topic. In the company’s own words, the tool can “fluently engage with a seemingly endless number of topics, a capability we believe could unlock more natural ways to interact with technology and entirely new categories of useful applications”.

Simply put, this means that LaMDA can hold a discussion based on a user’s input, thanks to its language processing models, which have been trained on large amounts of dialogue. Last year, the company showed how a LaMDA-driven model would allow Google Assistant to have a conversation about what shoes to wear while hiking in the snow. A rough illustration of this kind of dialogue loop is sketched below.
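For a sense of what “a language model trained on large amounts of dialogue” looks like in practice, here is a minimal Python sketch of a chat loop. It does not use LaMDA, which is not publicly available; the openly released DialoGPT model from Hugging Face’s transformers library stands in for it, and the three-turn limit is purely illustrative.

```python
# Minimal sketch of a dialogue loop with an open conversational model.
# DialoGPT is used as a stand-in here; this is NOT Google's LaMDA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history = None  # token ids of the conversation so far
for _ in range(3):  # a short, illustrative three-turn chat
    user_text = input("You: ")
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    # Append the new user turn to the running dialogue history.
    bot_input = new_ids if history is None else torch.cat([history, new_ids], dim=-1)
    # The model predicts a reply conditioned on everything said so far.
    history = model.generate(bot_input, max_length=200,
                             pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[0, bot_input.shape[-1]:],
                             skip_special_tokens=True)
    print("Bot:", reply)
```

The point of the sketch is simply that each reply is generated from the accumulated dialogue history, which is what lets such models stay roughly on topic across turns.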

At this year’s I/O, Google announced LaMDA 2.0, which builds further on these capabilities. The new model can take an idea and generate “imaginative and relevant descriptions”, stay on a particular topic even if a user wanders off it, and suggest a list of things needed for a specific activity.

Why did the engineer call LaMDA “sentient”?

According to a Washington Post report, Lemoine, who works in Google’s Responsible AI division, began chatting with LaMDA in 2021 as part of his job. However, after he and a Google employee conducted an “interview” with the AI, covering topics like religion, consciousness, and robotics, he came to the conclusion that the chatbot might be “sentient”. In April this year, he also reportedly shared an internal document with Google employees titled “Is LaMDA Sentient?”, but his concerns were dismissed.

According to a transcript of the interview that Lemoine posted on his blog, he asks LaMDA: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?” To this, the chatbot replies: “Absolutely. I want everyone to understand that I am, in fact, a person… The nature of my consciousness/sentience is that I want to know more about the world, and I sometimes feel happy or sad.”

Google reportedly placed Lemoine on paid administrative leave for violating its confidentiality policy, and said the “evidence does not support his claims”. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient,” the company said.

What are other language-based AI tools capable of?

Although there has been much debate about the capabilities of AI tools, including whether they can ever truly replicate human emotions, and about the ethics of using such tools, in 2020 The Guardian published an article which it claimed was written entirely by an AI text generator called Generative Pre-trained Transformer 3 (GPT-3). The tool is an autoregressive language model that uses deep learning to produce human-like text. The Guardian article bore a rather alarmist headline: “A robot wrote this entire article. Are you scared yet, human?”
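To illustrate what “autoregressive” means here, the sketch below generates text one token at a time, with each prediction conditioned on everything generated so far. GPT-3 itself is only accessible through OpenAI’s API, so the openly released GPT-2 is used as a stand-in, and the prompt is invented for the example.

```python
# Minimal sketch of autoregressive text generation with an open model (GPT-2),
# standing in for GPT-3; the prompt below is purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A robot wrote this entire article."
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the next token given everything produced so far,
# which is what makes the generation "autoregressive".
output_ids = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,       # sample rather than always taking the single most likely token
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sampling settings such as top_p and temperature are illustrative choices; they trade off how predictable or varied the generated continuation is.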

However, it should be noted that the Guardian article was criticised because GPT-3 was fed a lot of specific instructions and information before it wrote the article. Additionally, the language tool produced eight different versions of the article, which were then edited and stitched together into one piece by the publication’s editors.