A senior software engineer at Google has been fired after claiming that LaMDA, the company’s AI chatbot, has a sense of self. Blake Lemoine was placed on leave by Google last month after the company said he had broken its rules and that his assertions about LaMDA (Language Model for Dialogue Applications) were “wholly unfounded.”
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said.
Last year, Google said that LaMDA was based on its studies demonstrating the ability of transformer-based language models trained on dialogue to pick up virtually any topic.
Lemoine, who works in Google’s Responsible AI organisation, described the system he was working on as sentient, with an awareness of, and an ability to communicate, thoughts and feelings comparable to those of a human child.
In a Google Doc titled “Is LaMDA Sentient?,” Lemoine shared his findings with company management in April, claiming that LaMDA had engaged him in discussions about rights and personhood.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.
The engineer recorded the discussions and transcribed them; in one transcript, he asks the AI system about its fears. Google and many leading scientists swiftly dismissed Lemoine’s views as misguided, saying that LaMDA is simply a sophisticated computer program designed to generate plausible human language. Big Technology, a newsletter covering technology and society, first broke the news of Lemoine’s firing.