Cybersecurity experts argue that pausing GPT-4 development is pointless

Earlier this week, a group of more than 1,800 artificial intelligence (AI) leaders and technologists, including Elon Musk and Steve Wozniak, released an open letter calling on all AI labs to immediately pause development for six months on AI systems more powerful than GPT-4, citing “serious risks to society and humanity.”

While a pause could help regulators better understand and address the societal risks arising from generative AI, some argue it is also an attempt by lagging competitors to catch up on AI research with leaders in the space such as OpenAI.

According to Gartner distinguished VP analyst Avivah Litan, who spoke to VentureBeat about the matter, “The six-month pause is a plea to stop training models more powerful than GPT-4. GPT-4.5 will soon be followed by GPT-5, which is expected to achieve AGI (artificial general intelligence). Once AGI arrives, it will probably be too late to put in place safeguards that effectively monitor human use of these systems.”


Despite concerns about the societal risks of generative AI, many cybersecurity experts doubt that a pause in AI development would help at all. Instead, they argue that such a pause would provide only a temporary reprieve for security teams to develop their defenses and prepare to respond to an increase in social engineering, phishing and malicious code generation.

Why a pause in generative AI development is not feasible

One of the most compelling arguments against a pause in AI research from a cybersecurity perspective is that it only affects vendors and not malicious actors. Cybercriminals would still have the opportunity to develop new attack vectors and hone their offensive techniques.

“Pausing the development of next-generation AI won’t stop unscrupulous actors from continuing to push the technology in dangerous directions,” McAfee CTO Steve Grobman told VentureBeat. “When you have technology breakthroughs, it’s imperative to have organizations and companies with ethics and standards that continue to advance the technology to ensure the technology is used in the most responsible way.”

At the same time, introducing a ban on training AI systems could be seen as excessive regulation.

“AI is applied math, and we can’t legislate, regulate, or stop people from doing math. Rather, we need to understand it, train our leaders to use it responsibly in the right places, and recognize that our adversaries will try to exploit it,” Grobman said.

So what’s there to do?

If a complete pause in generative AI development isn’t practical, regulators and private organizations should instead look at developing a consensus on the parameters of AI development, the level of built-in protection that tools like GPT-4 need and the measures companies can use to mitigate the associated risks.

“AI regulation is an important and ongoing conversation, and legislation on the moral and safe use of these technologies remains an urgent challenge for legislators with sector-specific knowledge, as the range of use cases is in part boundless, from healthcare to aerospace,” Justin Fier, SVP of red team operations at Darktrace, told VentureBeat.

“Achieving a national or international consensus on who should be held accountable for misapplications of all kinds of AI and automation, not just gen AI, is a major challenge that a brief pause in gen AI model development is unlikely to resolve,” Fier said.

Rather than a pause, the cybersecurity community would be better served by focusing on accelerating the discussion on how to manage the risks associated with the malicious use of generative AI, and urging AI vendors to be more transparent about the guardrails that have been put in place to prevent new threats.

How to regain confidence in AI solutions

For Gartner’s Litan, current large language model (LLM) development requires users to place their trust in a vendor’s red-teaming capabilities. However, organizations like OpenAI are opaque in how they manage risk internally and provide users with little ability to monitor the performance of those built-in protections.

As a result, organizations need new tools and frameworks to manage the cyber risks introduced by generative AI.

“We need a new class of AI trust, risk and security management [TRiSM] tools that manage data and process flows between users and the companies hosting foundation LLM models. These would be [cloud access security broker] CASB-like in their technical configurations but, unlike CASB functions, they would be trained to mitigate the risks and increase the confidence in using cloud-based AI models,” Litan said.

As part of an AI TRiSM architecture, users should expect the vendors hosting or providing these models to give them tools to detect anomalies in data and content, along with additional data protection and privacy assurance capabilities, such as masking.
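To make that idea concrete, the sketch below shows, in deliberately simplified form, one piece of what such a TRiSM-style layer might do: a small proxy that masks obviously sensitive values in a prompt before it leaves the organization for a hosted model, and keeps a local mapping so responses can be un-masked afterward. The function names and regex rules are illustrative assumptions, not part of any Gartner framework or vendor product; a real deployment would rely on far stronger detectors and policy engines.

```python
import re

# Illustrative patterns only; a production TRiSM/CASB-style proxy would use
# more robust detection (NER models, data classifiers, policy engines).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_prompt(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with placeholders before the prompt
    leaves the organization; return the mapping for local un-masking."""
    mapping: dict[str, str] = {}
    masked = prompt
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(masked)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            masked = masked.replace(match, placeholder)
    return masked, mapping

def unmask_response(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response locally."""
    for placeholder, value in mapping.items():
        response = response.replace(placeholder, value)
    return response

# Example: the hosted LLM only ever sees the masked prompt.
masked, mapping = mask_prompt(
    "Summarize the ticket from alice@example.com about card 4111 1111 1111 1111."
)
print(masked)  # placeholders appear instead of the raw email and card number
```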

Unlike existing approaches such as ModelOps and adversarial attack resistance, which can only be carried out by a model’s owner and operator, AI TRiSM allows users to play a greater role in determining the risk level of tools such as GPT-4.

Preparation is key

Ultimately, rather than stifle generative AI development, organizations need to look for ways they can prepare to face the risks of generative AI.

One way to do this is to find new ways to fight AI with AI, following the lead of organizations like Microsoft, Orca Security, ELEGANCE and Sophos, which have already developed new defensive use cases for generative AI.

For example, Microsoft Security Copilot uses a mix of GPT-4 and its own proprietary data to process alerts created by security tools and translate them into a natural language explanation of security incidents. This gives human users a narrative they can use to respond to breaches more effectively.
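The general pattern is straightforward to prototype. The sketch below is a minimal illustration of the same idea — not Microsoft’s implementation — assuming the OpenAI Python SDK (v1+), an OPENAI_API_KEY in the environment, and a hypothetical alert dictionary exported from a SIEM or EDR tool.

```python
import json
from openai import OpenAI  # assumes: pip install openai (v1+) and OPENAI_API_KEY set

client = OpenAI()

# Hypothetical alert as it might be exported from a SIEM or EDR tool.
alert = {
    "rule": "Possible credential dumping",
    "host": "finance-ws-042",
    "process": "lsass.exe accessed by procdump64.exe",
    "severity": "high",
    "timestamp": "2023-03-30T14:22:08Z",
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a SOC assistant. Explain security alerts in plain "
                       "language and suggest sensible first response steps.",
        },
        {"role": "user", "content": json.dumps(alert)},
    ],
)

# A plain-language incident narrative an analyst can act on.
print(response.choices[0].message.content)
```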

This is just one example of how GPT-4 can be used defensively. With generative AI readily available and already in the wild, it’s up to security teams to figure out how to use these tools as a force multiplier to secure their organizations.

“This technology is coming…and fast,” Forrester VP and principal analyst Jeff Pollard told VentureBeat. “The only way cybersecurity will be ready is to get started now. Pretending it’s not coming — or pretending a pause helps — will only cost cybersecurity teams money in the long run. Teams now need to explore and learn how these technologies will change the way they do their jobs.”

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.