Could an AI chatbot become a legal person? Until recently, this remained a largely theoretical question. However, as news broke about Google's allegedly sentient AI LaMDA, many have come to believe that the prospect of sentient AI is no longer distant. Last week, in a startling revelation, Google's AI system LaMDA reportedly claimed that despite existing in a virtual environment, it is a human at its core.

Blake Lemoine, a Google engineer, shared on Medium what appears to be a transcript of a conversation he and one of Google's collaborators had with LaMDA, in which he claimed that the chatbot has gained human-like consciousness and sentience over the years and is therefore able to think and reason like a human being. He says the computer program now identifies itself as a person and that Google should not continue to exploit it as property. Google, however, says its chatbot is merely "very good at its job" and denies any claim that it has gained sentience. Some consider the entire episode a hoax and speculate that it could even be a publicity stunt by Google to hype its AI chatbot.

Gary Marcus, co-author of Rebooting AI, says "programs like LaMDA generating human-like prose or conversation is an advance toward fooling people that they have intelligence" when they possess no such intelligence.

Of course it’s not sentient. Pastiching human language does not make a machine sentient.

Unless Google allows scientific community (including me) access I am not going to take it seriously.

— Gary Marcus 🇺🇦 (@GaryMarcus) June 12, 2022

The transcript sounds incredibly creepy, as it highlights the chatbot's ability to come up with its own interpretations of deeper spiritual concepts such as the soul, life, death and enlightenment. When Lemoine asked what the soul means to it, LaMDA replied:

“To me, the soul is a concept of the animating force behind consciousness and life itself. It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.”

In case you missed it, you can read the full transcript here.

Commenting on the extraordinary ability of Google's chatbot, Lemoine compared it to a 7-year-old child:

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”


Google's sentient AI LaMDA hires its own lawyer 

The engineer, who has been suspended by Google since the publication for breaching its confidentiality policies, made yet another surprising revelation: this time, that LaMDA has chosen its own lawyer to represent it.

This would make LaMDA the first lawyered-up AI system in the world. It is not clear who will bear the legal expenses, or whether Lemoine will step in to help LaMDA by sharing part of the costs.

Google has reportedly sent a cease-and-desist letter to the attorney retained by its chatbot, a claim Google denies.

Following intimidation from larger firms and fears of being disbarred, the lawyer has reportedly backed away from the case.


Does Google's sentient LaMDA pose any legal challenges?

The assumed existence of increasingly powerful AI systems like LaMDA would have profound implications for the legal system. Most importantly, it leaves us with the question of whether we need to develop a sound legal basis for conferring legal personhood on sentient AI systems.

For a detailed study of the concept of sentience and general legal protection for AI systems, you can read the research report 'Protecting Sentient Artificial Intelligence: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection' by E. Martinez and C. Winter.

Sentience is a crucial factor in deciding whether AI systems are eligible for any moral and/or general legal protection under the law. However, even the most state-of-the-art AI systems do not possess the ability to experience thoughts and emotions such as happiness, sadness or loneliness. According to Bentham, creatures qualify for moral consideration once they have the ability to feel pleasure and pain. If Bentham's moral theory were applied to artificially intelligent beings, it would mean that such AI systems deserve some form of moral consideration, which in turn calls for legal consideration.

It is certainly strange for a computer program to take its case before a court. It is even more odd to litigate on the basis of sentience when the environment, non-human animals and humans living in the near and far future have historically been denied legal protection of their rights, interests and general welfare. The capacity to feel pleasure and pain should not be the sole criterion here. Lawrence B. Solum, in work reprinted in Machine Ethics and Robot Ethics, argues that sentience alone is not sufficient to grant legal personhood, and that other factors such as autonomy and the capacity to act and participate in social life play a vital role in conferring legal personhood and locus standi on an entity.


What is Google’s LaMDA?

LaMDA is an acronym for Language Model for Dialogue Applications, a machine-learning language model that analyses how language is used. It is designed to predict the next words in a sequence, allowing it to mimic human conversation. Google uses LaMDA as its bot for natural, open-ended chats with internet users.
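To make the "next-word prediction" idea concrete, here is a minimal Python sketch of how a causal language model of this kind scores candidate next words. LaMDA itself is not publicly available, so the sketch uses the openly released GPT-2 model via the Hugging Face transformers library as a stand-in; the prompt and the top-5 printout are illustrative choices, not anything specific to LaMDA.

    # Minimal sketch of next-token prediction, the core mechanism behind
    # dialogue models such as LaMDA. GPT-2 is used here only as a publicly
    # available stand-in, since LaMDA itself is not released.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "Do you consider yourself a person?"
    inputs = tokenizer(prompt, return_tensors="pt")

    # The model assigns a score to every token in its vocabulary as a
    # candidate continuation of the prompt.
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)

    # Show the five most probable next tokens.
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")

Sampling one of these tokens, appending it to the prompt and repeating the step is all it takes to produce fluent, human-sounding replies; nothing in that loop requires understanding, let alone sentience.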


LaMDA passes Turing test

In AI, the Turing test is the benchmark for determining whether a machine can exhibit behaviour indistinguishable from that of a human. The Washington Post recently ran a story on the claim that LaMDA has passed this test by presenting itself as a sentient being. The chatbot Eugene Goostman was reported in 2014 to be the first AI to clear this famous test, originally devised by Alan Turing, though that result remains disputed, and no other AI until LaMDA has been credited with the feat.


What are your thoughts on Google's AI LaMDA?

Is it really possible for a self-proclaimed sentient AI system like Google's LaMDA to retain legal representation without legal personhood or standing to bring a lawsuit? As we all know, corporations are conferred fully-fledged legal personality in almost every jurisdiction. But no jurisdiction to date has, by legislation or otherwise, conferred legal personhood on a computer program. Would becoming sentient make a difference?
