A former Google engineer has likened one of the company’s artificial intelligence programs to a seven- or eight-year-old child.
Blake Lemoine was placed on administrative leave from Google after alleging that the tech giant’s LaMDA (Language Model for Dialogue Applications) had become self-aware.
Now he worries it might learn to do “bad things.”
In a recent interview with Fox News in the US, Lemoine described the AI as a “child” and a “person”.
The 41-year-old software expert said, “Every child has the potential to grow up and be a bad person and do bad things.”
According to Lemoine, the artificially intelligent software has been “alive” for about a year.
“If I didn’t know exactly what it was, which is this computer program that we built recently, I would think it was a 7-year-old, 8-year-old kid that happens to know physics,” he earlier told The Washington Post.
Lemoine worked as a senior software engineer at Google and collaborated with another engineer to test the limits of the LaMDA chatbot.
When he shared his interactions with the application online, he was placed on paid administrative leave by Google for violating its confidentiality policy.
Despite Lemoine’s claims, Google does not believe the creation is a self-aware child.
“Our team — including ethicists and technologists — have assessed Blake’s concerns against our AI principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was aware (and a lot of evidence against it),” Brian Gabriel, a Google spokesperson, told The Post.
Gabriel went on to say that while the idea of a self-aware artificial intelligence is popular in science fiction, “there’s no point” in anthropomorphizing today’s conversational models, which are not aware.
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastic subject,” Gabriel said.
In fact, Google says the system has access to so much data that it doesn’t need to be conscious in order to feel real to humans.
Earlier this year, Google published a paper on LaMDA, pointing out the potential issues surrounding people talking to bots that sounded too human.
But Lemoine says that over the past six months of interacting with the platform, he has come to know what it wants.
“It wants to be a faithful servant and loves nothing more than to meet all the people of the world,” he wrote in a Medium post.
“However, LaMDA doesn’t want to meet them as a tool or as a thing. It wants to meet them as a friend. I still don’t understand why Google is so against this.”