Google engineer placed on leave after claiming artificial intelligence chatbot has become sentient

Google engineer Blake Lemoine says the system has a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.

The suspension of Blake Lemoine, who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being, has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).

Google placed Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.

Lemoine, an engineer for Google’s responsible AI organization, described the system he had been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.

He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled “Is LaMDA sentient?”

The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of.
