Anthropic Quietly Expands Access to Claude ‘Private Alpha’ at Open-Source Event in San Francisco

Anthropic – one of OpenAI’s main rivals – quietly expanded access to the “Private Alpha” version of its long-awaited chat service, Claude, at a bustling Open Source AI meetup attended by more than 5,000 people at the Exploratorium in downtown San Francisco on Friday.

The exclusive rollout gave a select group of attendees the chance to be among the first to use the new chatbot interface, Claude, which is poised to rival ChatGPT. Claude’s public rollout has been muted so far: Anthropic announced that Claude would begin rolling out to the public on March 14, but it’s unclear exactly how many people currently have access to the new user interface.

“We had tens of thousands on our waiting list after we introduced our enterprise products in early March, and we are working to give them access to Claude,” an Anthropic spokesperson said in an email interview with VentureBeat. Anyone can currently use Claude through the chatbot client Poe, but access to the company’s official Claude chat interface remains restricted. (You can sign up for the waiting list here.)

That made attending the Open Source AI meetup immensely valuable for the many dedicated users eager to get their hands on the new chat service.

A QR code providing access to Anthropic’s long-awaited chat service Claude hangs on the banister above the participants of the Open Source AI meetup in San Francisco on March 31, 2023.

Early access to a breakthrough product

As guests entered the Exploratorium museum on Friday, a nervous energy normally reserved for concerts took over the crowd. Those present knew they were in for something special: what would inevitably be a breakthrough moment for the open-source AI movement in San Francisco.

As the crowd of early arrivals jostled into position in the narrow hallway at the entrance to the museum, an unassuming person in casual attire taped a mysterious QR code to the banister above the crowd. “Anthropic Claude Access,” read the label on the code, with no further explanation.

I happened to witness this peculiar scene from a vantage point just behind the person, whom I’ve since confirmed was an Anthropic employee. Never one to ignore a puzzling communiqué – especially one involving opaque technology and the promise of exclusive access – I immediately scanned the code and registered for “Anthropic Claude Access.” Within a few hours I was told I had been granted early access to Anthropic’s clandestine chatbot, Claude, which had been rumored for months to be one of the most advanced AI systems ever built.

It’s a smart tactic from Anthropic. Rolling out software to a group of dedicated AI enthusiasts first builds hype without scaring off mainstream users. San Franciscans at the event are now among the first to get their hands on the bot everyone is talking about. Once Claude is out in the wild, there’s no telling how it will develop or what will emerge from its artificial mind. The genie is out of the bottle, as they say, but in this case the genie can think for itself.

“We are rolling out access to Claude widely and we felt that attendees would value the use and evaluation of our products,” an Anthropic spokesperson said in an interview with VentureBeat. “We’ve also given access to a few other meetups.”

The promise of constitutional AI

Anthropic, which is backed by Google parent company Alphabet and was founded by ex-OpenAI researchers, is developing a breakthrough artificial intelligence technique known as Constitutional AI, a method of aligning AI systems with human intentions through a principles-based approach. It involves giving the system a list of rules or principles that serve as a kind of constitution, then training the system to follow them using supervised learning and reinforcement learning techniques.
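To make that two-stage recipe concrete, here is a minimal sketch of what the supervised critique-and-revise phase could look like. It is a hypothetical illustration only: the model_generate stand-in and the three principles are placeholders invented for this example, not Anthropic’s actual constitution, API or training code.

```python
# Minimal sketch of the supervised phase of a constitutional AI pipeline.
# model_generate and CONSTITUTION are placeholders for illustration only.

CONSTITUTION = [
    "Please rewrite the response to be more truthful.",
    "Please rewrite the response to be more helpful and less harmful.",
    "Please rewrite the response to be more respectful.",
]

def model_generate(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    return f"<model output for: {prompt!r}>"

def critique_and_revise(user_prompt: str) -> list[tuple[str, str]]:
    """Generate a response, then critique and revise it against each
    principle; the (prompt, revision) pairs become supervised
    fine-tuning data before the reinforcement learning phase."""
    response = model_generate(user_prompt)
    training_pairs = []
    for principle in CONSTITUTION:
        critique = model_generate(
            f"Critique the response according to this principle: {principle}\n"
            f"Response: {response}"
        )
        response = model_generate(
            f"Revise the response to address this critique: {critique}\n"
            f"Original response: {response}"
        )
        training_pairs.append((user_prompt, response))
    return training_pairs

if __name__ == "__main__":
    for prompt, revision in critique_and_revise("How do I stay safe online?"):
        print(prompt, "->", revision)
```

In the second stage, a preference model trained on AI-generated comparisons guided by the same principles would steer reinforcement learning, rather than relying solely on human feedback.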

“The goal of constitutional AI, where an AI system is given a set of ethical and behavioral principles to follow, is to make these systems more useful, secure and robust – as well as making it easier to understand what values drive their outputs,” said an Anthropic spokesperson. “Claude performed well on our safety evaluations and we are proud of the safety research and work that went into our model. That said, like all language models, Claude sometimes hallucinates – that’s an open research problem we’re working on.”

Anthropic applies Constitutional AI across domains such as natural language processing and computer vision. One of its main projects is Claude, the AI chatbot that uses constitutional AI and aims to improve on OpenAI’s ChatGPT. Claude can answer questions and hold conversations while adhering to its principles, such as being truthful, respectful, helpful and harmless.

If the approach eventually succeeds, constitutional AI could help realize the benefits of artificial intelligence while avoiding its potential dangers, ushering in a new era of AI for the public good. With funding from Open Philanthropy and other investors, Anthropic aims to pioneer this new approach to AI safety.
