Kevin Baragona, the founder of DeepAI, has described AI superintelligence as the nuclear weapons of software
A tech mogul has described the sprint to perfect artificial intelligence (AI) as the nuclear arms race of the 21st century.
Kevin Baragona was one of more than 1,000 leading experts to sign an open letter, published by the Future of Life Institute, calling for a pause in the “dangerous race” to develop ChatGPT-like AI.
Comparing it to the invention of the atomic bomb in the 1940s, Baragona told DailyMail.com that “AI superintelligence is like the nuclear weapons of software.”
“A lot of people have debated whether or not we should keep developing them,” he continued.
Americans grappled with a similar idea as they developed the weapon of mass destruction – dubbed “nuclear fear” at the time.
“It’s almost like a war between chimpanzees and humans,” Baragona, who signed the letter, told DailyMail.com.
“The humans win, of course, because we are much smarter and can use more advanced technology to beat them.
“If we’re like the chimpanzees, the AI will either destroy us or we’ll become addicted to it.”
The fears come with the extraordinary rise of ChatGPT, which has taken the world by storm in recent months, passing leading medical and legal exams that take humans months to prepare for.
The forces of ChatGPT-like AI have led to civil war in Silicon Valley.
Elon Musk and Apple co-founder Steve Wozniak signed the letter for an AI hiatus, while Bill Gates and Google CEO Sundar Pichai did not.
“While I can only speculate why Gates and Sundar didn’t sign the letter to pause advanced AI research, I think they didn’t because they sign the checks to accelerate AI progress,” Baragona said.
Microsoft, founded by Gates, has invested heavily in OpenAI, the creator of ChatGPT.
In January, it was speculated that Gates’ company had invested another $10 billion in the startup to compete with Google in commercializing new AI breakthroughs.
The fear of AI comes as experts predict it will reach singularity by 2045 – the point at which the technology surpasses human intelligence and can no longer be controlled.
Microsoft also added ChatGPT-powered AI to its Bing search engine in February.
Google opened Bard, its own natural-language chatbot, to the public on March 21.
The California company has been careful with the rollout to keep its technology from generating inaccurate information, but Bard’s first impression suggested the company rushed it to market.
It remains to be seen how Bard will hold up against the likes of OpenAI’s ChatGPT and Microsoft’s AI-powered Bing.
“Microsoft is investing heavily in OpenAI and Google in Anthropic,” Baragona told DailyMail.com.
“Maybe they feel it’s not the time to retreat to unfounded fears of possible negative consequences.”
Musk, Wozniak and more than 1,000 technology leaders signed an open letter on Wednesday calling for a six-month pause in AI development.
The signatories said more risk assessment needs to be done before humans lose control and AI becomes a conscious, human-hating entity.
Bill Gates and Google CEO Sundar Pichai did not sign the open letter with Musk. The pair have invested heavily in AI development and see the technology as the way of the future.
At the point of singularity, AI will have surpassed human intelligence and be able to think independently.
AI would no longer need humans or listen to humans, allowing it to steal nuclear codes, create pandemics and start world wars.
Gates and Pichai sit across the aisle.
They praise ChatGPT-like AI as the “most important” innovation of our time – saying it can solve climate change, cure cancer and boost productivity.
OpenAI launched ChatGPT in November, which became an instant worldwide success.
The chatbot is a large language model trained on massive text data, allowing it to generate eerily human-like text in response to a given prompt.
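At its core, a language model is trained to predict the next word given the words so far. The sketch below is a toy illustration of that idea using simple word-pair counts from a made-up miniature corpus – real systems like ChatGPT use neural networks trained on vastly more text, but the next-token principle is the same:

```python
import random

# Toy "language model": learn which word tends to follow which
# from a tiny training corpus, then generate text from a prompt.
corpus = (
    "the chatbot writes text the chatbot answers questions "
    "the model predicts the next word"
).split()

# Count word -> next-word transitions (a bigram table).
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def generate(prompt, length=5, seed=0):
    """Extend the prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        options = table.get(words[-1])
        if not options:  # no known continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the chatbot"))
```

The output always continues the prompt with word pairs actually seen in the training text, which is why such models sound fluent yet have no understanding of what they say.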
The public uses ChatGPT to write research papers, books, news articles, emails and other text-based work. While many see it as a virtual assistant, many brilliant minds see it as the end of humanity.
Elon Musk and Apple co-founder Steve Wozniak signed a letter protesting the technology that “creates great risks to humanity.”
Musk and Wozniak fear AI is beyond human control and are calling for a six-month break to assess the risks.
In its simplest form, AI is a field that combines computer science and robust data sets to enable problem solving.
The technology enables machines to learn from experience, adapt to new inputs and perform human-like tasks.
Its subfields, including machine learning and deep learning, build algorithms that make predictions or classifications based on input data.
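To make “predictions or classifications based on input data” concrete, here is a minimal sketch of one of the simplest machine-learning algorithms, a 1-nearest-neighbour classifier. The spam-filter framing and all the numbers are invented for illustration:

```python
def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training data the system "learns" from: (features, label) pairs.
# Features here are hypothetical message scores, e.g. link count
# and exclamation-mark count.
training = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.9), "spam"),
    ((8.0, 9.0), "not spam"),
    ((9.1, 8.5), "not spam"),
]

def classify(point):
    # Predict the label of the closest known example.
    features, label = min(training, key=lambda ex: distance(ex[0], point))
    return label

print(classify((1.1, 1.1)))  # prints "spam"
```

The algorithm never stores explicit rules; its “knowledge” is just the labelled examples, and each new input is classified by analogy to the nearest one – learning from experience in the most literal sense.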
Scott Opitz, chief technology officer at ABBYY, an intelligent automation company, said in a statement: “Pausing AI development is like putting the toothpaste back in the tube. AI applications are ubiquitous, impacting virtually every facet of our lives.
“Although commendable, it may be implausible to slow down now by means of a voluntary break.
“What is needed is a concerted and good faith effort between industry and legislators to pass common sense rules that embrace ethical AI principles based on people-centered values of fairness, transparency and accountability.”
Hollywood may have fueled people’s fears of AI, typically portraying it as evil in films such as The Matrix and The Terminator, which paint a picture of robot overlords enslaving the human race.
However, the idea is resonating throughout Silicon Valley as more than 1,000 tech experts think it could become our reality.
This would be possible if AI reaches singularity, a hypothetical future in which technology surpasses human intelligence and changes the path of our evolution – expected by some experts to happen by 2045. First, AI would have to pass the Turing test.
Once it does, the technology is believed to have independent intelligence, allowing it to replicate itself into ever more powerful systems that humans cannot control.