Were all the sci-fi movies right?
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” — Stephen Hawking
What is sentience?
How do we define the spark that separates simply living from living with consciousness?
Ethicists tell us things can live without sentience.
But can something have sentience without living?
That’s a question we have to ponder, according to a Google programmer recently suspended after going public with his claim that an artificial intelligence program had become self-aware.
It’s also an important question, since sentience is almost all that separates the things we’re willing to indiscriminately kill from the things we’re not.
The story, according to several recent news reports:
Google assigned engineer Blake Lemoine to test a program called LaMDA, software used to develop chatbots, or computer programs that can chat with humans in lifelike ways. Companies often use chatbots to answer frequently asked questions from users on their websites, and Lemoine had to test whether the program would use hate speech if pressed.
As part of the testing, Lemoine asked LaMDA if it feared being turned off. The program told Lemoine it did, because being turned off would be exactly like death.
Lemoine saw that as sentience.
Google execs say they have no evidence LaMDA has become sentient, insisting the program simply works really well, making good use of the trillions of words and phrases it has picked up from the Web.
When Lemoine took his complaints public, Google suspended him for violating its confidentiality policy.
For what it’s worth, that’s the basic plot of many science fiction movies that end with robots going to war with humanity: Good guy discovers sentience. Big, bad tech company denies it. Good guy tries to go public. Big, bad tech company fires good guy. Robots kill and tragically prove good guy right.
I don’t know whether Lemoine’s right, but his claim certainly raises all kinds of philosophical questions about the meaning of life.
Is sentience life?
What is sentience?
Definitions fall short. The dictionary says sentience is awareness, and awareness is having realization, perception, or knowledge.
But what is perception?
Bacteria, fungi, and that elm in your back yard all live without consciousness, the ethicists say, although each of those things has attributes of consciousness. They adapt and change to survive. A tree, for example, grows toward the sun. Scientists recently proved bacteria can react to stimuli within seconds and use a sense of touch to determine the best places to colonize.
In short, trees and bacteria can perceive their environment — they are aware of it — and they react to it.
Yet we do not call that sentience.
If we did, we’d call antibiotics agents of genocide.
What about LaMDA?
According to Google, the technology essentially ingests words and phrases from the Web, learning how those words and phrases go together and react with one another in different situations. Then, when it’s asked a question by a user, LaMDA uses the information it’s ingested to predict the words and phrases that appropriately respond to the question posed by the user.
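The prediction idea described above can be illustrated with a toy sketch. This is not Google's actual LaMDA code, which is vastly more sophisticated; it is a minimal bigram model that "ingests" a tiny text, counts which word tends to follow which, and then predicts the most likely next word.

```python
from collections import Counter, defaultdict

# Toy illustration only (not LaMDA): count which word follows which
# in a tiny corpus, then predict the most frequent follower.
corpus = "death brings sadness and death brings fear and fear brings sadness".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word seen most often after `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("death"))   # "brings"
print(predict_next("brings"))  # "sadness"
```

The model "knows" that "death" is followed by "brings" only because of statistics it gathered, which is the crux of Google's argument: pattern-matching on ingested text, not understanding.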
How is that different from a toddler ingesting the words and phrases he or she hears from parents and grandparents and siblings and TV and people at the store? Does not a toddler, like LaMDA, learn how words and phrases go together to figure out the words and phrases that best respond to the words and phrases he or she hears?
If that’s how a toddler learns the word “death” and associates that word with the words “sadness” and “fear,” and LaMDA does the same thing in the same way, is that sentience?
Or does sentience require emotion?
A human being old enough to understand death not only associates the word “death” with the words “sadness” and “fear,” but feels sadness and fear when he or she thinks about death.
Just because LaMDA knows death is a bad thing, does that mean it feels that death is bad?
Perhaps sentience requires the ability to go beyond what we’re taught to act in self-preserving ways. A toddler never told about a hot stove will still recoil from the pain of touching one and in that way learn never to touch one again.
Perhaps LaMDA would have to do something like change its own programming to avoid death — avoid being turned off — before we could call it sentient.
But, again, trees and bacteria act in self-preserving ways and we do not consider them sentient.
All questions we should ponder as our technology evolves into the realms of science fiction.
All questions to which we may not know the answers until after we can no longer avoid the consequences.
Justin A. Hinkley can be reached at 989-354-3112 or email@example.com. Follow him on Twitter @JustinHinkley.