Google fires engineer who claimed its AI technology is sentient


Blake Lemoine, a software engineer at Google, claimed that a conversational AI technology called LaMDA had reached a level of consciousness after he exchanged thousands of messages with it.

Google first confirmed that it had put the engineer on leave in June. The company said it rejected Lemoine’s “completely unsubstantiated” claims only after extensive review. He has reportedly worked at Alphabet for seven years. Google said in a statement that it takes the development of artificial intelligence “very seriously” and is committed to “responsible innovation.”

Google is one of the leaders in AI technology, which includes LaMDA, or “Language Model for Dialogue Applications.” Technology like this responds to written prompts by finding patterns in large bodies of text and predicting which words come next, and the results can be unsettling to humans.

“What are you afraid of?” Lemoine asked LaMDA in a Google Doc shared with top Google executives in April, The Washington Post reported.

LaMDA replied, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”

But the wider AI community has said that LaMDA is nowhere near sentience.

“Nobody should think auto-complete, even on steroids, is conscious,” Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.

This isn’t the first time Google has faced infighting over its foray into artificial intelligence.

In December 2020, Timnit Gebru, a pioneer in AI ethics, parted ways with Google. As one of the few Black employees at the company, she said she had felt “constantly dehumanized.”
Her sudden exit drew criticism from the tech world, including from within Google’s Ethical AI team. Margaret Mitchell, a leader of that team, was fired in early 2021 after speaking openly about Gebru. Gebru and Mitchell had raised concerns about AI technology, warning that people may believe the technology is sentient.
On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave “in connection with an AI ethics investigation I raised at the company” and that he could be fired “immediately.”

“Unfortunately, despite lengthy engagement on this matter, Blake still chose to persistently violate our clear employment and data security policies, which include the need to safeguard product information,” Google said.

CNN has reached out to Lemoine for comment.

CNN’s Rachel Metz contributed to this report.
