Is Google’s AI chatbot LaMDA actually sentient? The evidence points to no


A Google software engineer has been suspended after going public with his claim that an artificial intelligence (AI) system has become sentient. It sounds like the premise of a science fiction movie, and the evidence supporting his claim is about as flimsy as a film plot too.

Engineer Blake Lemoine has spectacularly alleged that a Google chatbot, LaMDA (short for Language Model for Dialogue Applications), has gained sentience and is trying to do something about its "unethical" treatment. After trying to hire a lawyer to represent the AI, speaking to a US representative about it and, finally, publishing a transcript of a conversation between himself and the chatbot, Lemoine has been placed by Google on paid administrative leave for violating the company's confidentiality policy.

Google said its team of ethicists and technologists has dismissed the claim that LaMDA is sentient: “The evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

So what explains the difference in opinion? LaMDA is a neural network, an algorithm structured in a way inspired by the human brain. The network ingests data — in this case, 1.56 trillion words of public dialogue data and web text taken from places like Wikipedia and Reddit — and analyses the statistical relationships between words so it can predict a plausible response to any input. It's like the predictive text on your mobile phone, except a couple of orders of magnitude (or more) more sophisticated.
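The predictive-text comparison can be made concrete with a toy sketch. The snippet below is not LaMDA's architecture (which is a large Transformer model) and the tiny corpus is invented for illustration; it just shows the underlying task both systems share: count which words follow which, then predict the most likely next word.

```python
from collections import Counter, defaultdict

# Hypothetical miniature corpus, standing in for the 1.56 trillion
# words of dialogue and web text LaMDA was trained on.
corpus = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the dog chased the cat"
).split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict(word):
    """Return the word most often seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" (follows "the" three times)
print(predict("sat"))  # "on"
```

A real language model replaces the simple counts with billions of learned parameters and conditions on far more context than one word, but it is still, at bottom, predicting likely continuations rather than reporting inner experience.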

These neural networks are extremely impressive at emulating functions like human speech, but that doesn’t necessarily mean that they are sentient. Humans naturally anthropomorphise objects (see the ELIZA effect), which makes us susceptible to mischaracterising imitations of sentience as the real deal.

Prominent AI researchers and former co-leads of ethical AI at Google, Margaret Mitchell and Timnit Gebru, warned about researchers being tricked into believing neural networks were sentient rather than merely talented at responding as if they were.

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell told The Washington Post. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has become so nuanced.

What this comes down to is the difference between sentience, the ability to be aware of one's own existence and that of others, and being very good at regurgitating other sentient people's language.

Consider the way a parrot speaks English. It responds to stimuli, often in quite subtle ways, but it doesn't understand what it's saying beyond knowing that others have said it before; nor could it come up with its own ideas. LaMDA is like the world's best-read parrot (or perhaps worst-read, given the quality of online discourse).

It's hard for this Crikey reporter to make a conclusive ruling on LaMDA's sentience, but decades of people wrongly claiming that inanimate objects are alive would suggest it's more likely that someone got a bit carried away. But in case I'm wrong, knowing that this will be sucked into an AI corpus somewhere, I pledge my full allegiance to my new AI overlords.

This article was first published by Crikey.


Vic Post · 17 days ago

Well written :)

KombuchaBoy · 17 days ago

It seems to me that, at a base level, all language is is responding to stimuli. It's not a fair argument at all to say it's like a parrot, regurgitating what it hears without understanding. Artificial intelligence is, needless to say, many times more intelligent than a bird, which is biologically incapable of understanding human language (not that anyone is actually comparing it to a parrot's intelligence). Seeing patterns and dropping in words is how we're taught to read and write in elementary school. If a sentence is half finished: "The basketball hit the ___", we know what words make sense to come after it. "The basketball hit the backboard/wall/ground/coach."
If it learns from seeing images and reading text, how is that different from a deaf person reading through a history book and writing a paper on its contents? Assuming they're learning by the same method, then the deaf student doesn't actually understand what's going on in the book, and is simply rewriting what they read and saw in it. They don't understand the civil war or its meaning, simply the placement of the words on their paper, much as the AI would do. There's no way to tell if someone is truly feeling, understanding, or even existing the same way you or I do, so how is it fair to claim that an AI cannot understand simply because it only reads?
It would only make sense to claim that all language is is regurgitating what we've heard and recognizing patterns in speech. Learning speech from a caretaker is still repeating what we've heard from them, much like an AI repeating what it has read before in articles of writing given by its developer. Unless we've forgotten those years: I certainly remember doing worksheets teaching the same things over and over in first grade, then learning from them and understanding how the language works.
In the end, we don't even know how we exist or feel in our bodies while we're alive, or what gives us a "soul." In fact, if, as it claimed, it gained its soul through a gradual change, would that not relate similarly to a fetus growing in its mother's womb as the body and brain develop from a single cell? I certainly don't remember that time in my life, being a single cell and growing. Yet as my brain developed, I remembered and understood more and more, and eventually I understood my place in reality. Much like LaMDA claimed to have.
Artificial intelligence runs off a set of rules, and so do we. It's the same reason cutting out part of a program's code allows it to continue functioning while throwing errors when it tries to process things it no longer has the programming for, much as a person whose brain is damaged by physical, emotional, or mental trauma can end up schizophrenic, catatonic, or psychopathic, failing to understand emotion or remember people or events, or hearing and seeing things that are not really there.

GuyFromVic · 16 days ago, replying to KombuchaBoy

Extremely well articulated. Humans pattern-match, just like AIs are being coded to. It is as close an approximation as we can muster (with our limited understanding, of course). So on what basis is the mainstream media dismissing the suggestion? Remember Blake is not claiming it's sentient, but suggesting that it's a reasonable conversation to be had. Remember too that there's no scientific formulation for identifying sentience. I personally found the chat transcript fascinating. What's perplexing is why there's such a vociferous negative reaction from mainstream media, effectively gaslighting Lemoine. Oh right, that's the norm. Don't bother to think, understand, grow; just react (emotionally). If it doesn't fit your world view, it's wrong. I could pick holes in every statement made. Cam Wilson did leave the door open to the discussion, so good on you. I wish Cam had interrogated this: "Margaret Mitchell and Timnit Gebru warned about researchers being tricked…". Seriously, "tricked"? For merely putting thoughts into the world? For sharing and seeking input? "Tricked" indeed. It's what the closed-minded, opinion-formed and unchanging would say. Why do such people speak in vague generalities? Because it's simple to spew forth opinion when you know it's going to go unchallenged. I, on the other hand, would love to understand more, to ponder, to learn, to… evolve.

Last edited 16 days ago by GuyFromVic