The possibility of general AI

One of the challenges in following the news on developments in the field of artificial intelligence is that the term “AI” is often used indiscriminately to mean two unrelated things.

The first use of the term AI is something more precisely called narrow AI. It’s powerful technology, but it’s also pretty simple and straightforward: You take a bunch of data about the past, use a computer to analyze it and find patterns, and then use that analysis to make predictions about the future. This type of AI touches our lives many times a day, as it filters spam out of our email and guides us through traffic. But because it is trained on data about the past, it works only where the future looks like the past. That is why it can identify cats and play chess: neither changes at an elementary level from day to day.
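To make that loop concrete, here is a minimal sketch in Python using scikit-learn: a toy spam filter that learns word patterns from a few labeled emails and then classifies a new one. The tiny dataset is invented purely for illustration; a real filter would be trained on millions of messages.

    # Narrow AI in miniature: learn patterns from past data, predict on new data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Past data: a few labeled emails (1 = spam, 0 = not spam). Invented examples.
    emails = [
        "win a free prize now",
        "cheap meds limited time offer",
        "meeting moved to 3pm",
        "are we still on for lunch tomorrow",
    ]
    labels = [1, 1, 0, 0]

    # Find the patterns in the past...
    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(emails)
    model = MultinomialNB()
    model.fit(features, labels)

    # ...and use them to predict the future. This works only as long as new
    # emails resemble the ones the model has already seen.
    print(model.predict(vectorizer.transform(["claim your free prize now"])))  # [1] = spam

Nothing here understands what spam is; the model simply assumes that tomorrow’s emails will look like yesterday’s, which is exactly the limitation described above.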

The other use of the term AI is to describe something we call general AI, or often AGI. It does not yet exist except in science fiction, and no one knows how to make it. A general AI is a computer program that is as intellectually versatile as a human being: it can teach itself entirely new things it has never been trained on.

The difference between narrow and general AI

In the movies, AGI is Data from “Star Trek,” C-3PO from “Star Wars,” and the replicants in “Blade Runner.” Although it may intuitively seem that narrow AI is the same kind of thing as general AI, just a less mature and sophisticated implementation, this is not the case. General AI is something else entirely. Identifying a spam email, for example, is not a computationally simpler version of being truly creative, which is what a general intelligence would be.

I previously hosted a podcast on AI called “Voices in AI.” It was a lot of fun to do, because most of the great practitioners in the field are approachable and were willing to come on the show. So I ended up with a gallery of over a hundred great AI thinkers talking about the topic in depth. There were two questions I would ask most guests. The first was: “Is general AI possible?” Virtually everyone – with only four exceptions – said yes, it is possible. Then I would ask them when we will build it. Those answers were all over the map, some as soon as five years out and others as far as 500.

Why would that be?

Why would virtually all my guests say that general AI is possible, yet offer such a wide range of informed estimates about when we will make it? The answer goes back to a statement I made earlier: We do not know how to build general intelligence, so your guess is as good as anyone else’s.

“But wait!” you might say. “If we do not know how to make it, why do the experts agree so overwhelmingly that it is possible?” I would ask them that question too, and I usually got a variant of the same answer. Their confidence that we will build a truly intelligent machine rests on a single core belief: that humans are intelligent machines. Because we are machines, the reasoning goes, and we have general intelligence, building machines with general intelligence must be possible.

Man vs. machine

To be sure, if humans are machines, then those experts are right: General intelligence is not merely possible but inevitable. However, if it turns out that humans are not just machines, then there may be something about us that cannot be reproduced in silicon.

The interesting thing about this is the disconnect between those hundred or so AI experts and everyone else. When I give talks on this topic to general audiences and ask who believes they are a machine, about 15% raise their hands, nowhere near the 96% of AI experts.

On my podcast, when I pushed back on this assumption about the nature of human intelligence, my guests would usually – quite politely, of course – accuse me of indulging in some kind of magical thinking that is anti-science at its core. “What else could we possibly be other than biological machines?”

This is a fair question, and an important one. We know of only one thing in the universe with general intelligence, and that is us. How did we come to have such a powerful creative superpower? We do not really know.

Intelligence as a superpower

Try to remember the color of your first bike or the name of your first-grade teacher. You may not have thought of either in years, but your brain can probably retrieve them with little effort, which is all the more impressive when you consider that “data” is not stored in your brain the way it is on a hard drive. In fact, we do not know how it is stored. It may turn out that each of the hundred billion neurons in your brain is as complex as our most advanced supercomputer.

But this is just where the mystery of our intelligence begins. From there, it gets harder. We seem to have something called a mind, which is different from the brain. The mind is everything the three-pound goo in your head can do that it seems like it shouldn’t be able to, such as having a sense of humor or falling in love. Your heart does not do those things, nor does your liver. Yet somehow you do.

We do not even know for sure that the mind is solely a product of the brain. More than a few people are born missing up to 95% of their brains yet still have normal intelligence, and often do not even learn of their condition until later in life, when a diagnostic scan reveals it. Furthermore, much of our intelligence seems not to be stored in the brain at all but distributed throughout our bodies.

General AI: The Additional Complexity of Consciousness

Even if we do not understand the brain or the mind, it gets harder still from there: General intelligence may very well require consciousness. Consciousness is the experience you have of the world. A thermometer can tell you the temperature accurately, but it cannot feel warmth. That difference between knowing something and experiencing something is consciousness, and we have little reason to believe that computers can experience the world any more than a chair can.

So here we are, with brains we do not understand, minds we cannot explain and, as far as consciousness is concerned, no good theory of how it is even possible for mere matter to have an experience at all. Yet, despite all this, the people who believe in general AI are confident that we can replicate every human ability in computers. To me, that is the position that smacks of magical thinking.

I’m not saying this to be dismissive or to trivialize anyone’s beliefs. They could very well be right. I simply regard the idea of general AI as an unproven hypothesis, not an obvious scientific truth. The desire to build such a being, and then to control it, is an old dream of humanity. In its modern form it is centuries old, beginning perhaps with Mary Shelley’s Frankenstein and manifesting in a thousand later stories. But it is really much older than that. As far back as we have written stories, we have had such imaginings, such as the tale of Talos, a robot created by the Greek god of technology, Hephaestus, to guard the island of Crete.

Somewhere deep within us is a desire to create this creature and command its awesome power, but nothing so far should be taken as an indication that we really can.

Byron Reese is a technologist and author.
