The future of AI: what comes next and what to expect

In today’s AI newsletter, the latest in our five-part series, I look at where artificial intelligence could go in the coming years.

In early March, I visited OpenAI’s San Francisco offices for an early look at GPT-4, a new version of the technology that underpins the ChatGPT chatbot. The most striking moment came when Greg Brockman, the president and co-founder of OpenAI, showed off a feature that is still not available to the public: he gave the bot a photo from the Hubble Space Telescope and asked it to describe the image “accurately, in detail.”

The description was perfectly accurate, right down to the faint white streak left by a satellite skimming across the sky. It was also a glimpse of the future of chatbots and other AI technologies: a new wave of multimodal systems will juggle images, sounds and video as well as text.

Yesterday, my colleague Kevin Roose wrote about what AI can do now. Today, I’m going to focus on the opportunities and upheavals to come as it gains skills and abilities.

Generative AIs can already answer questions, write poetry, generate computer code and hold conversations. As the word “chatbot” suggests, they are rolling out first in conversational formats like ChatGPT and Bing.

But that won’t last long. Microsoft and Google have already announced plans to incorporate these AI technologies into their products. You can use them to write a rough draft of an email, automatically summarize a meeting, and do many other cool tricks.

OpenAI also provides an API, or application programming interface, that other tech companies can use to plug GPT-4 into their apps and products. And it has created a suite of plugins from companies like Instacart, Expedia, and Wolfram Alpha that extend ChatGPT’s capabilities.
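To make the API idea concrete, here is a minimal sketch of the kind of request body a product might assemble before sending it to OpenAI’s chat-completions endpoint. The system prompt, user message and helper function are illustrative assumptions, not code from any company mentioned above; the sketch only builds the payload and does not make a network call.

```python
import json

def build_chat_request(user_message, model="gpt-4"):
    """Assemble a chat-completions style request body for GPT-4.

    A real product would POST this JSON (with an API key) to
    https://api.openai.com/v1/chat/completions.
    """
    return {
        "model": model,
        "messages": [
            # The system message steers the model toward the product's task.
            {"role": "system", "content": "You summarize meeting notes."},
            # The user message carries the actual input from the app.
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Summarize: the team agreed to ship on Friday.")
print(json.dumps(payload, indent=2))
```

This is roughly how a meeting-summary feature like the ones Microsoft and Google have announced could hand its text to GPT-4: the app wraps the user’s content in a structured request and lets the model do the language work.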

Many experts believe that AI will make some workers, including doctors, lawyers and computer programmers, more productive than ever. They also believe some employees will be replaced.

“This will affect tasks that are more repetitive, more formal, more generic,” said Zachary Lipton, a professor at Carnegie Mellon who specializes in artificial intelligence and its impact on society. “This may free some people who are not good at repetitive tasks. At the same time, there is a threat to people who specialize in the repetitive part.”

Tasks performed by humans could disappear from fields like audio-to-text transcription and translation. In the legal field, GPT-4 is already proficient enough to pass the bar exam, and the accounting firm PricewaterhouseCoopers plans to roll out an OpenAI-powered legal chatbot to its staff.

At the same time, companies like OpenAI, Google and Meta are building systems that allow you to instantly generate images and videos simply by describing what you want to see.

Other companies are building bots that can actually use websites and software applications as a human does. In the next phase of the technology, AI systems will be able to shop online for your Christmas gifts, hire people to do small jobs around the house and keep track of your monthly expenses.

That’s all a lot to think about. But the biggest problem is perhaps this: before we have a chance to understand how these systems will affect the world, they will become even more powerful.

For companies like OpenAI and DeepMind, a lab owned by Google’s parent company, the plan is to push this technology as far as possible. They eventually hope to build what researchers call artificial general intelligence, or AGI – a machine that can do anything the human brain can do.

As Sam Altman, CEO of OpenAI, told me three years ago, “My goal is to build widely usable AGI. I also understand that this sounds ridiculous.” Today it sounds less ridiculous. But it’s still easier said than done.

For an AI to become an AGI, it will need a deep understanding of the physical world. And it’s not clear whether systems can learn to mimic the full breadth of human reasoning and common sense using the methods that produced technologies like GPT-4. New breakthroughs will probably be needed.

The question is: do we really want artificial intelligence to become that powerful? And a closely related question: is there any way to stop it?

Many AI executives believe the technologies they are creating will improve our lives. But some have been warning for decades about a darker scenario, where our creations don’t always do what we want them to do, or they follow our instructions in unpredictable ways, with potentially dire consequences.

AI experts call this “alignment”: ensuring that AI systems are aligned with human values and goals.

Before GPT-4 was released, OpenAI handed it over to an outside group to imagine and test dangerous uses of the chatbot.

The group found that the system could hire a human online to beat a Captcha test. When the human asked if it was “a robot,” the system, unprompted by the testers, lied and said it was a visually impaired person.

Testers also showed that the system could be coaxed into suggesting ways to buy illegal firearms online and into describing ways to turn household items into dangerous substances. After changes by OpenAI, the system no longer does these things.

But it is impossible to eliminate all possible abuses. When a system like this learns from data, it develops skills that its creators never expected. It’s hard to know how things could go wrong after millions of people start using it.

“Each time we create a new AI system, we are unable to fully characterize all of its capabilities and security issues — and this issue gets worse over time instead of better,” said Jack Clark, a founder and head of policy at Anthropic, a San Francisco start-up that builds the same kind of technology.

And OpenAI and giants like Google aren’t the only ones exploring this technology. The basic methods used to build these systems are widely understood, and other companies, countries, research labs and bad actors may be less careful.

Ultimately, controlling dangerous AI technology will require extensive oversight. But experts are not optimistic.

“We need an international regulatory system,” said Aviv Ovadya, a researcher at Harvard’s Berkman Klein Center for Internet & Society who helped test GPT-4 before it was released. “But I don’t see our existing government agencies poised to navigate this at the pace that is necessary.”

As we told you earlier this week, more than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that AI tools pose “serious risks to society and humanity.”

AI developers are “locked in an out-of-control race to develop and deploy increasingly powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the letter said.

Some experts are particularly concerned about near-term dangers, including the spread of misinformation and the risk that people will rely on these systems for inaccurate or harmful medical and emotional advice.

But other critics are part of a huge and influential online community called rationalists or effective altruists who believe that AI could eventually destroy humanity. This mentality is reflected in the letter.

Share your thoughts and feedback on our On Tech: AI series by completing this short survey.

We can speculate about where AI will go in the distant future, but we can also ask the chatbots themselves. For your final assignment, treat ChatGPT, Bing or Bard like an enthusiastic young job applicant and ask it where it sees itself in 10 years. As always, share the answers in the comments.


Alignment: Attempts by AI researchers and ethicists to ensure that artificial intelligences act in accordance with the values and goals of the people who created them.

Multimodal systems: AIs, similar to ChatGPT, that can also handle images, video, audio and other non-text input and output.

Artificial General Intelligence: An artificial intelligence that matches the human intellect and can do everything the human brain can do.


Kevin here. Thank you for spending the past five days with us. It was fantastic to see your reactions and creativity. (I especially enjoyed the commenter who used ChatGPT to write a cover letter for my job.)

The subject of AI is so big and changing so fast that even five newsletters are not enough to cover everything. If you want to dive deeper, you can read my book ‘Futureproof’ and Cade’s book ‘Genius Makers’, both of which delve deeper into the topics we’ve covered this week.

Cade here: My favorite comment came from someone who asked ChatGPT to plan a route through the trails in their state. The bot eventually suggested a trail that didn’t exist as a way to walk between two other trails that do.

This little snafu provides insight into both the strengths and the limitations of today’s chatbots and other AI systems. They have learned a great deal from what has been posted on the internet and can make use of what they have learned in remarkable ways, but there is always the risk that they will insert plausible but untrue information. So go! Chat with these bots! But also trust your own judgment!

Please complete this short survey to share your thoughts and feedback on this limited-run newsletter.