OpenAI starts training a new frontier model

ChatGPT maker OpenAI announced this morning that it has begun training its new “frontier model” and has formed a new Safety and Security Committee led by current board members Bret Taylor (chairman of OpenAI, co-founder of customer service startup Sierra AI, former head of Google Maps, and former CTO of Facebook), Adam D'Angelo (CEO of Quora, maker of AI model aggregator app Poe), Nicole Seligman (former executive vice president and global general counsel of Sony Corporation), and Sam Altman (current OpenAI CEO and one of its co-founders).

As OpenAI writes in a company blog post:

OpenAI recently started training its next frontier model and we expect the resulting systems to take us to the next level of capabilities on our path to AGI. While we are proud to build and release models that lead the industry in both capability and safety, we welcome robust debate at this important time.

The 90-day timer now starts

In particular, the company outlines the role this new committee will play in guiding the development of the new frontier AI model, stating:

A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days. At the end of the 90 days, the Safety and Security Committee will share its recommendations with the full Board of Directors. Following the full Board's review, OpenAI will publicly share an update on the adopted recommendations in a manner consistent with safety and security.

Therefore, it is reasonable to conclude that whatever the new frontier model is – whether it is called GPT-5 or something else – it will not be released for at least 90 days, to allow the new Safety and Security Committee the time necessary to “evaluate and further develop OpenAI's processes and safeguards” and make recommendations to the “full Board” of directors.

This means the OpenAI board must receive the new Safety and Security Committee's recommendations by August 26, 2024, which is 90 days from today's announcement (May 28, 2024).
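For those keeping score, the deadline is simple date arithmetic. Here's a quick sanity check in Python, assuming the 90-day clock starts on the May 28, 2024 announcement date:

```python
from datetime import date, timedelta

# Assumption: the 90-day review clock starts on the announcement date.
announcement = date(2024, 5, 28)
deadline = announcement + timedelta(days=90)

print(deadline.isoformat())  # -> 2024-08-26
```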

Why 90 days, rather than something longer or shorter? OpenAI doesn't say, but the 90-day window is widely used in business for evaluating processes and delivering feedback, and it seems about as good a choice as any: not too short, not too long.

The new safety committee is not independent

The news was immediately criticized, with some noting that the new committee is made up entirely of OpenAI's own executives and directors, meaning the review of the company's safety and security measures will not be independent.

OpenAI's board of directors, several of whose members now sit on the new Safety and Security Committee, has been a source of controversy before.

The previous board of OpenAI's nonprofit holding company fired Altman as CEO, removing him from all positions at the company, on November 17, 2023, less than two weeks before ChatGPT's one-year anniversary, citing that he was “not consistently candid” with its members. The board subsequently faced a persistent staff rebellion and major pushback from OpenAI investors, including its biggest backer, Microsoft.

Altman was ultimately reinstated as CEO of OpenAI on November 21, 2023, and that board resigned, replaced by Taylor, Larry Summers, and Adam D'Angelo, with Microsoft joining as a non-voting observer. That board was criticized at the time for being entirely male.

On March 8 of this year, additional members were appointed to the board: Sue Desmond-Hellmann (former CEO of the Bill and Melinda Gates Foundation), Fidji Simo (CEO and chair of Instacart), Seligman, and Altman.

A bumpy time for OpenAI (and the AI industry)

OpenAI released its latest AI model, GPT-4o, earlier this month and has since weathered a bit of a public relations crisis.

Despite the new model being the first of its kind from OpenAI (and possibly the world) to be trained from the start on multimodal inputs and outputs, rather than stringing together separate models trained on text and media (as with the earlier GPT-4), the company was criticized by actor Scarlett Johansson, who accused it of approaching her about voicing its new assistant and of shipping a voice that she and others felt sounded like hers (specifically, like her AI assistant character in the science fiction film Her). OpenAI pushed back on this, saying the voice was cast separately and that it never intended or instructed the voice actor to imitate Johansson.

OpenAI's chief scientist and co-founder Ilya Sutskever resigned, as did Jan Leike, co-leader of the superalignment team dedicated to guarding against superintelligence; the latter criticized OpenAI on the way out, saying the company had prioritized “shiny products” over safety. The entire superalignment team has since been disbanded.

Moreover, OpenAI was criticized for requiring departing employees to sign restrictive separation agreements containing non-disparagement terms, with an equity clawback provision should those and other terms be breached. The company has since said it will not enforce the provisions and is releasing employees from that language (such terms are actually not unusual, I might add, given my experience in the tech sector at Xerox and self-driving startup Argo AI).

However, OpenAI has also found success signing up new mainstream media partners, whose content will surface as training data and authoritative journalism in ChatGPT, and it has seen interest from musicians and filmmakers in its Sora video generation model, as Hollywood reportedly looks to AI to streamline and reduce the costs of film and TV production.

Meanwhile, competitor Google has received a lot of attention, much of it critical, for its AI summary answers in search results, suggesting a broader backlash among mainstream users against the generative AI field.

We'll see if OpenAI's new safety committee can satisfy, or at least appease, some of its critics and, perhaps more importantly, regulators and potential business partners, ahead of the launch of whatever it has in store.