The strange new world of AI power, politics and the ‘pause’ | The AI Beat

The boisterous debates around AI risk and regulation grew several decibels last week, while at the same time becoming even harder to decipher.

There was the tweet backlash from Senator Chris Murphy (D-CT) over ChatGPT, including his warning that “Something is coming. We are not ready.” Then there was the complaint to the FTC about OpenAI, as well as Italy’s ban on ChatGPT. And, most notably, there was the open letter signed by Elon Musk, Steve Wozniak and others proposing a six-month “pause” on large-scale AI development. It was released by the Future of Life Institute, an organization focused on “x-risk,” and according to Eliezer Yudkowsky, it didn’t even go far enough.

Not surprisingly, the fierce debate over AI ethics and risks, both short- and long-term, has been fueled by the massive popularity of OpenAI’s ChatGPT since it launched on November 30. And the growing number of industry-led AI tools built on large language models (LLMs) – from Microsoft’s Bing and Google’s Bard to a slew of startups – has brought AI discussion to a far larger scale in mainstream media, industry pubs and on social platforms.

AI debates have moved into the realm of the political

But it seems that as AI leaves the research lab and fully unfolds in the cultural zeitgeist, promising tantalizing opportunities and posing real societal dangers, we’re also entering a strange new world of AI power and politics. AI debates are no longer just about technology, or science, or even reality. They are also about opinions, fears, values, attitudes, beliefs, perspectives, resources, incentives and downright weirdness.

This isn’t inherently bad, but it does lead to the DALL-E-drawn elephant in the room: For months now, I’ve been trying to figure out how to cover the confusing, rather creepy angles of AI development. These focus on the hypothetical possibility of artificial general intelligence (AGI) destroying humanity, with threads of what has recently become known as “TESCREAL” ideologies – including “effective altruism,” “long-termism” and “transhumanism” – intertwined. You’ll find some science fiction sewn into this AI team jersey, with the words “AI safety” and “AI alignment” embroidered in red.

Each of these areas of the AI landscape has its own rabbit hole to go down. Some seem relatively down-to-earth, while others lead to articles about the paperclip maximizer problem; a posthuman future created by artificial superintelligence; and a San Francisco pop-up museum dedicated to highlighting the AGI debate, with a sign reading “sorry for killing most of humanity.”

The decoupling between applied AI and AI predictions

Much of my VentureBeat coverage focuses on the effects of AI on the enterprise. Frankly, you don’t see C-suite executives worrying about whether AI will harvest their atoms to turn into paperclips – they’re wondering whether AI and machine learning can improve customer service or make employees more productive.

The disconnect is that there are plenty of voices at top companies, from OpenAI and Anthropic to DeepMind and across Silicon Valley, who have an agenda based at least in part on some of the TESCREAL issues and belief systems. That might not have mattered much 7, 10 or 15 years ago, when deep learning research was in its infancy, but it certainly gets plenty of attention now. And it’s getting harder and harder to discern the agenda behind some of the biggest headlines.

This has led to suspicion and accusations: Last week, for example, a Los Angeles Times article highlighted the contradiction that OpenAI CEO Sam Altman has declared himself “a little scared” of the company’s technology – the same technology he is “currently helping build and trying to distribute, for profit, as widely as possible.”

The article read: “Let’s take a moment to consider the logic behind these statements: Why would you, a CEO or executive at a high-profile technology company, repeatedly return to the public stage to proclaim how concerned you are about the product you are building and selling? Answer: If apocalyptic, ominous statements about the terrifying power of AI serve your marketing strategy.”

The shift from technology and science to politics

Over the weekend I posted a Twitter thread. I didn’t know, I wrote, how to address the issues lurking beneath the AI pause letter, the information that led to Senator Murphy’s tweets, the polarizing debates over open- and closed-source AI, and Sam Altman’s biblical prophecy-style messaging on AGI. All of these discussions are driven in part by people with beliefs that most people have no idea about – both that they hold those beliefs and what those beliefs mean.

What is a humble reporter, trying to be balanced and reasonably objective, to do? And what can everyone in the AI community – from research to industry to policy – do to get a grip on what’s going on?

Former White House policy adviser Suresh Venkatasubramanian replied that the problem is that “there is a huge political agenda behind much of what masquerades as technical discussion.” And others agreed that the discourse around AI has shifted from the realm of technology and science to politics – and power.

Of course, technology has always been political. But perhaps it helps to recognize that current AI debates have entered the stratosphere (or sunk into the mire, depending on your opinion) of political discourse.

Spend time on real risks

There were other helpful recommendations for how we can all get some perspective: Rich Harang, a chief security architect at Nvidia, tweeted that it’s important to talk to people who actually build and deploy LLMs. “Ask people who dig deep into AI ‘x-risk’ about their hands-on experience doing applied work in the area,” he advised, adding that it’s important to “spend some time on real-world risks that exist right now that come out of ML R&D. There’s plenty, from safety issues to environmental issues to labor exploitation.”

And B Cavello, director of emerging technologies at the Aspen Institute, pointed out that “predictions are often areas of contention.” They added that they’ve been working to focus less on the disagreements and more on where people align. For example, many of those who disagree on AGI do agree on the need for regulation and for AI developers to take more responsibility.

I’m grateful to everyone who responded to my Twitter thread, both in the comments and in direct messages. Have a good week.
