Google fired an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s data security policies after it dismissed his claims.

Blake Lemoine, a software engineer at Alphabet Inc.’s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic speech. Google initially suspended Mr. Lemoine in June.

On Friday, Mr. Lemoine said, “Google sent me an email terminating my employment with them today.” He said he was in contact with lawyers “about what the appropriate next steps are.”

Google said it had reviewed Mr. Lemoine’s concerns and found them without merit. “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the company said in a statement Friday, confirming his dismissal.

Mr. Lemoine’s dismissal was earlier reported by Big Technology, a tech newsletter.

AI specialists generally say that the technology still isn’t close to humanlike self-knowledge and awareness. But AI tools increasingly are capable of producing sophisticated interactions in areas such as language and art that technology ethicists have warned could lead to misuse or misunderstanding as companies deploy such tools publicly.

Google introduced LaMDA publicly last year, touting it in a blog post as a breakthrough in chatbot technology. The company has been among the leaders in developing artificial intelligence, investing billions of dollars in technologies that it says are central to its business.

Google’s AI endeavors also have been a source of internal tension, with some employees challenging the company’s handling of ethical concerns around the technology. In late 2020, it parted ways with a prominent AI researcher, Timnit Gebru, whose research concluded in part that Google wasn’t careful enough in deploying such powerful technology. Google said last year that it planned to double the size of its team studying AI ethics to 200 researchers over several years to help ensure the company deployed the technology responsibly.

Write to Miles Kruppa at miles.kruppa@wsj.com

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.
