OpenAI Forms Safety Committee as It Trains Its Latest AI Model

OpenAI has said that it is setting up a safety and security committee as it begins training its newest AI model, intended to succeed the GPT-4 system. The startup said the committee would advise the full board on “critical safety and security decisions” for its projects.

The committee comes amid rising controversy and growing concern over artificial intelligence programs and their potential for harm.

Researcher Jan Leike resigned and criticized OpenAI, saying the company let “safety take a backseat to shiny products.” Co-founder and chief scientist Ilya Sutskever also resigned, a departure that dismantled the original team dedicated to making AI safety a top priority.

OpenAI said that it has “recently begun training its next frontier model” and that its AI models lead the industry in capability and safety. “We welcome a robust debate at this important moment,” the company said.

The new committee is made up largely of company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, as well as four other technical and policy experts. It also includes board members such as Adam D’Angelo, the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee’s first task is to evaluate and further develop OpenAI’s processes and safeguards, then deliver its recommendations to the board within three months. The company added that it would publicly release the recommendations it adopts “in a manner that is consistent with safety and security.”
