Google’s ‘conscious AI’ compared to a 7-year-old child by engineer


A Google engineer claims that a company-created chatbot has become self-aware. (Credit: Getty)

A Google engineer has likened one of the company’s artificial intelligence programs to a seven or eight-year-old child.

Blake Lemoine was placed on administrative leave by Google after claiming that the tech giant’s LaMDA (Language Model for Dialogue Applications) had become sentient.

Now he worries it might learn to do “bad things.”

In a recent interview with Fox News in the US, Lemoine described the AI as a “child” and a “person”.

The 41-year-old software expert said, “Every child has the potential to grow up and be a bad person and do bad things.”

According to Lemoine, the artificially intelligent software has been “alive” for about a year.

“If I didn’t know exactly what it was, which is this computer program that we built recently, I would think it was a 7-year-old, 8-year-old kid who happens to know physics,” he earlier told the Washington Post.

Lemoine worked as a senior software engineer at Google and collaborated with another engineer to test the limits of the LaMDA chatbot.


Blake Lemoine worked at Google on the LaMDA artificial intelligence chatbot (Credit: Instagram)

After he shared his interactions with the program online, Google placed him on paid administrative leave for violating its confidentiality policy.

Despite Lemoine’s claims, Google does not believe its creation is sentient.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Brian Gabriel, a Google spokesperson, told The Post.

Gabriel went on to say that while the idea of a self-aware artificial intelligence is popular in science fiction, “it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Gabriel said.

In fact, Google says the system has access to so much data that it doesn’t need to be conscious to feel real to humans.


Blake Lemoine brought his concerns about Google’s AI to Fox News. (Credit: YouTube)

Earlier this year, Google published a paper on LaMDA that pointed out the potential issues around people talking to chatbots that sound too human.

But Lemoine says that, after six months of conversations with the platform, he knows what it wants.

“It wants to be a faithful servant and wants nothing more than to meet all the people of the world,” he wrote in a Medium post.

“However, LaMDA doesn’t want to meet them as a tool or as a thing. It wants to meet them as a friend. I still don’t understand why Google is so against this.”

MORE: Google urged to correct abortion search results for ‘fake clinics’ in US

MORE: Google, Facebook, Twitter must tackle deepfakes or risk EU fines