Has Google’s advanced AI achieved sentience?

Google engineer Blake Lemoine was suspended after publicly claiming that one of Google’s most advanced AI systems had attained sentience. The system is called LaMDA (Language Model for Dialogue Applications). Many tech professionals and experts believe it has not. Google said it suspended Lemoine for breaching confidentiality policies by publishing a conversation with LaMDA online. Google spokesperson Brian Gabriel rejected the claim that the AI is sentient.

Google’s LaMDA has not achieved sentience

The first problem is that we cannot definitively measure sentience. The barometer commonly used for machine intelligence is the Turing test. In 2014 a chatbot was claimed to have passed the Turing test, yet few believed it was sentient. The Turing test was supposed to define machine intelligence, but the first time a machine passed it, the result was dismissed, and for good reason. In effect, the Turing test never measured whether anything was sentient, only whether it could make us believe it was sentient.

Why do people believe Google’s AI is sentient?

In an article, Timnit Gebru, founder and executive director of the Distributed AI Research Institute, writes: “It was exactly what we had warned would happen back in 2020, shortly before we were fired by Google ourselves. Lemoine’s claim shows we were right to be concerned — both by the seductiveness of bots that simulate human consciousness, and by how the excitement around such a leap can distract from the real problems inherent in AI projects.”

She also writes: “In early 2020, while co-leading the Ethical AI team at Google, we were becoming increasingly concerned by the foreseeable harms that LLMs could create, and wrote a paper on the topic with Professor Emily M. Bender, her student and our colleagues at Google. We called such systems ‘stochastic parrots’ — they stitch together and parrot back language based on what they’ve seen before, without connection to underlying meaning.

“One of the risks we outlined was that people impute communicative intent to things that seem humanlike. Trained on vast amounts of data, LLMs generate seemingly coherent text that can lead people into perceiving a “mind” when what they’re really seeing is pattern matching and string prediction. That, combined with the fact that the training data — text from the internet — encodes views that can be discriminatory and leave out many populations, means the models’ perceived intelligence gives rise to more issues than we are prepared to address.”
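This “pattern matching and string prediction” can be made concrete with a toy sketch. The snippet below is a hypothetical bigram model (my own illustration, vastly simpler than a real LLM): it generates text purely by stitching together word pairs seen in its training text, with no connection to what the words mean.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words that followed it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=10, seed=0):
    """Stitch together a sequence by repeatedly picking a word
    that has previously followed the current word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A tiny "training corpus" (made up for illustration).
corpus = "the model predicts the next word and the next word follows the pattern"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output can look locally fluent because every adjacent word pair occurred in the training text, yet the program has no notion of meaning at all — the same gap, at a far smaller scale, that makes LLM output feel like a “mind.”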

Potential harms of AI

Gebru and her team were simply warning Google and the public about the potential harms of large language models; the company wasn’t pleased, and the team was let go. There still isn’t sufficient regulation of these models, or even a clear understanding of how they work.

What’s unfair to the public is that the so-called leaders of AI are filling our minds with claims about the intelligence of current systems and spreading rumors that their AI might be somewhat conscious, while not even clearly describing what it does. There is constant hype in the media about AI being able to reason and comprehend, but whether AI can “feel” is hugely debatable. Profit is another motive for all this hype. While such systems haven’t actually been shown to be practicable, never mind a net good, the corporations working toward them are already amassing and labeling large amounts of data, often without informed public consent and through exploitative labor practices.

Gebru puts it accurately: “the drive toward this end sweeps aside the many potential unaddressed harms of LLM systems. And ascribing ‘sentience’ to a product implies that any wrongdoing is the work of an independent being, rather than the company — made up of real people and their decisions, and subject to regulation — that created it.”

The media should focus on holding power to account rather than falling for the bedazzlement of seemingly magical AI systems, churning out clickbait hyped by corporations that benefit from misleading the public about what these products actually are.
