The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient has put new scrutiny on the capacity of, and secrecy surrounding, artificial intelligence.
Who is Blake Lemoine?
Blake Lemoine published transcripts of conversations between himself and the LaMDA (language model for dialogue applications) chatbot development system.
He said the system engaged him in conversations about rights and personhood, and he shared his findings with company executives.
He reportedly sought to hire an attorney to represent LaMDA and talked to representatives from the House judiciary committee about Google’s allegedly unethical activities.
Google said it suspended Lemoine for breaching confidentiality policies by publishing his conversations with LaMDA online.
By contrast, in April Meta, the parent company of Facebook, announced it was opening up its large-scale language model systems to outside entities.
In an apparent parting shot before his suspension, Lemoine sent a message to a 200-person Google mailing list on machine learning with the title “LaMDA is sentient”. “Please take care of it well in my absence,” he wrote.