Safety is a hot topic for every company developing AI technology, and that goes especially for one of the biggest players in the industry, ChatGPT maker OpenAI. After an extensive review of the company's safety- and security-related processes and safeguards, OpenAI has established an independent safety board. The trouble is, we don't really know how independent it actually is.
OpenAI has experienced less drama than some of the larger tech companies, but that's not to say there isn't any. There's been a fair bit of tension around the company and how it handles safety. The former safety oversight board consisted entirely of people associated with OpenAI. Imagine giving a child the responsibility of putting themselves in time-out when they misbehave.
There were even rumors that the company denied compute and resources to its safety researchers. One former employee said, in effect, that safety would take a backseat to shiny new products. So the safety culture at the company still has a long way to go.
OpenAI has a new independent safety board
The 90-day review ended, and the hammer came down. OpenAI has formed a new independent oversight board. This board will have significant sway over the company's products, including the authority to delay the launch of a model if it detects any red flags. This is similar to the oversight board that Meta has, but with one pretty big difference.
The thing about Meta's oversight board is that, while it has "Meta" in the name, no one on the board is associated with the company. OpenAI's board, on the other hand, consists entirely of OpenAI members: Adam D'Angelo, Paul Nakasone, and Nicole Seligman, with Zico Kolter as the chair. Sam Altman will not sit on this board.
According to an OpenAI blog post, the board will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.”
Independent?
The question remains whether this can actually be considered an independent board. If you want genuine independence, you need outside voices governing AI releases. How can we trust a board of people directly associated with OpenAI to properly police the company? If the company's next model is a hit, it could rocket OpenAI (and the board members) to new heights, so there's a built-in incentive to let safety go by the wayside.
OpenAI has been stumbling along when it comes to safety. Again, we've heard complaints about the company depriving its safety researchers of compute, and back in May it established an oversight board made up of OpenAI staff. Now, after the review, we have yet another oversight board of OpenAI members! Only time will tell whether the company can actually keep itself safe. If not, the consequences could be severe.