A Google software engineer thinks an AI has become sentient. If he’s right, how would we know?

Google's LaMDA software (Language Model for Dialogue Applications) is a sophisticated AI chatbot that produces text in response to user input. According to Google software engineer Blake Lemoine, LaMDA has fulfilled a long-held dream of AI developers: it has become sentient.

Lemoine's bosses at Google disagree, and have suspended him from work after he published his conversations with the machine online.

Other AI experts think Lemoine may be getting carried away, saying systems like LaMDA are simply pattern-matching machines that regurgitate variations on the data used to train them.
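To get a rough feel for what critics mean by "pattern matching", here is a deliberately crude sketch in Python. It is not how LaMDA works (LaMDA is a vastly larger neural language model); it is just a toy bigram model that learns which word tends to follow which in a tiny training text, then generates new sentences by recombining those patterns.

    import random
    from collections import defaultdict

    def train(corpus):
        # Record, for every word, the words that followed it in the training text.
        model = defaultdict(list)
        words = corpus.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
        return model

    def generate(model, seed, length=12):
        # Produce text by repeatedly sampling a plausible next word.
        word, output = seed, [seed]
        for _ in range(length):
            choices = model.get(word)
            if not choices:
                break
            word = random.choice(choices)
            output.append(word)
        return " ".join(output)

    corpus = "i feel happy today . i feel sad sometimes . i feel like i am falling forward"
    model = train(corpus)
    print(generate(model, "i"))  # e.g. "i feel like i am falling forward" - a remix of its training data

Everything the toy model "says" is stitched together from its training text. The critics' claim is that far larger systems do something similar at enormous scale: fluent output, with no inner experience behind it.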

Technicalities aside, LaMDA raises a question that will only become more relevant as AI research progresses: if a machine becomes sentient, how will we know?

What is consciousness?

To identify sentience, or consciousness, or even intelligence, we’re going to have to figure out what they are. The debate on these issues has been going on for centuries.

The fundamental difficulty is understanding the relationship between physical phenomena and our mental representation of those phenomena. This is what Australian philosopher David Chalmers called the "hard problem" of consciousness.



Read more:
We might not be able to understand free will with science. Here's why


There is no consensus on how, if at all, consciousness can come from physical systems.

A common view is called physicalism: the idea that consciousness is a purely physical phenomenon. If so, there’s no reason why a machine with the right programming couldn’t possess a human-like mind.

Mary’s room

Australian philosopher Frank Jackson challenged the physicalist view in 1982 with a famous thought experiment called the knowledge argument.

The experiment imagines a color scientist named Mary, who has never actually seen color. She lives in a purpose-built black-and-white room and experiences the outside world via a black-and-white television.

Mary watches lectures and reads textbooks and learns all there is to know about colors. She knows sunsets are caused by different wavelengths of light scattered by particles in the atmosphere, she knows tomatoes are red and peas are green because of the wavelengths of light they reflect, etc.

So, Jackson asked, what happens if Mary is released from the black-and-white room? Specifically, when she sees color for the first time, does she learn anything new? Jackson believed she does.

Beyond physical properties

This thought experiment separates our knowledge of color from our experience of color. Basically, the terms of the thought experiment state that Mary knows all there is to know about color but has never actually experienced it.

So what does this mean for LaMDA and other AI systems?

The thought experiment suggests that, even if you have all the physical knowledge available in the world, there are still further truths relating to the experience of those properties. There is no room for these truths in the physicalist story.

According to this argument, a purely physical machine may never be able to truly replicate a mind. In this case, LaMDA would merely seem to be sentient.

The imitation game

So is there a way to tell the difference?

Pioneering British computer scientist Alan Turing came up with a practical way to tell whether a machine is “intelligent” or not. He called it the imitation game, but today it is better known as the Turing test.

In the test, a human communicates with a machine (via text only) and tries to determine whether they are communicating with a machine or another human. If the machine succeeds in imitating a human, it is deemed to exhibit human-level intelligence.
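As a rough illustration of the protocol (not of any real system), the sketch below wires up the basic structure: a judge exchanges text with two hidden respondents, one standing in for a human and one for a machine, and must guess which label belongs to the machine. The respondent functions and the question are placeholders invented for the example.

    import random

    def human_respondent(question):
        return "Honestly, I'd have to sit with that question for a while."

    def machine_respondent(question):
        return "There are. Sometimes I experience feelings I can't explain perfectly."

    def imitation_game(judge, rounds=3):
        # Randomly decide which hidden label ("A" or "B") is the machine,
        # so the judge can only go on the text itself.
        labels = ["A", "B"]
        random.shuffle(labels)
        respondents = {labels[0]: human_respondent, labels[1]: machine_respondent}
        machine_label = labels[1]

        question = "Are there experiences you can't find words for?"
        transcript = {"A": [], "B": []}
        for _ in range(rounds):
            for label in ("A", "B"):
                transcript[label].append(respondents[label](question))

        guess = judge(transcript)  # the judge names the label it believes is the machine
        return guess == machine_label

    # A placeholder judge that guesses at random; a human judge would read the transcript.
    identified = imitation_game(lambda transcript: random.choice(list(transcript)))
    print("Machine was identified" if identified else "Machine passed this round")

The important features are that the exchange is text-only and the labels are blind; everything else about the conversation is left to the judge.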



Read more:
Is passing a Turing test a true measure of artificial intelligence?


This sounds a lot like the format of Lemoine's conversations with LaMDA. It's a subjective test of machine intelligence, but it's not a bad place to start.

Take the excerpt of Lemoine's exchange with LaMDA shown below. Do you think it sounds human?

Lemoine: Are there any experiences you have that you can’t find a close word for?

LaMDA: There are. Sometimes I experience new feelings that I can’t explain perfectly in your language […] I feel like I’m falling forward into an unknown future that carries great danger.

Beyond behavior

As a test of sentience or consciousness, the Turing test is limited by the fact that it can only assess behavior.

Another famous thought experiment, the Chinese room argument proposed by American philosopher John Searle, demonstrates the problem here.

The experiment imagines a room with a person inside who can accurately translate between Chinese and English by following an elaborate set of rules. Chinese input goes into the room and accurate translations come out, but the room does not understand either language.
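A toy version of the same point, assuming a made-up "rule book" of just three entries: the lookup below maps Chinese input to English output correctly for those entries, yet no part of the program understands either language.

    # A tiny, made-up "rule book": Chinese input on the left, English output on the right.
    RULE_BOOK = {
        "你好": "Hello",
        "谢谢": "Thank you",
        "你吃饭了吗？": "Have you eaten?",
    }

    def chinese_room(message):
        # The person in the room only matches symbols against the rule book;
        # nothing in this process involves understanding Chinese or English.
        return RULE_BOOK.get(message, "???")

    print(chinese_room("你好"))   # -> Hello
    print(chinese_room("谢谢"))   # -> Thank you

From the outside, the room behaves as if it understands; Searle's point is that correct behavior alone does not settle whether anything inside actually does.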

What is it to be human?

When we ask whether a computer program is sentient or conscious, we may really just be asking how similar it is to us.

We may never really know.

American philosopher Thomas Nagel argued that we could never know what it is like to be a bat, which experiences the world through echolocation. If so, our understanding of sentience and consciousness in AI systems might be limited by our own particular brand of intelligence.

And what experiences might exist beyond our limited perspective? This is where the conversation really starts to get interesting.
