When it comes to AI, safety is one of the most important topics. As we’ve seen with the likes of Google and Meta, this innovative technology has a dark side. Now, OpenAI has established a new oversight board to ensure that it develops AI safely.
The AI startup previously had an oversight board, but the company disbanded it. Two people on that board, Ilya Sutskever and Jan Leike, have since left the company. That doesn’t look good for OpenAI, especially since rumors have been floating around that tension is growing at the company.
OpenAI establishes a new oversight board
All AI companies should have some sort of entity charged with making sure they don’t go overboard. After dissolving its old board, OpenAI has announced that it has established a new one. The previous board was charged with outlining the potential long-term effects of the company’s AI technology. One worker complained that the board kept getting pushed lower on the priority list and received fewer resources over time. Let’s hope that doesn’t happen with the new board.
Speaking of the new board, it consists of CEO Sam Altman, company chair Bret Taylor, Adam D’Angelo, and Nicole Seligman. Over the next 90 days, the board will “evaluate and further develop OpenAI’s processes and safeguards”. After that, it will present recommendations to the full board, which will then publicly share an update on those recommendations.
Right now, OpenAI is working on its next “frontier model” (we’re pretty sure it’s GPT-5). It’s expected to be faster, stronger, and, more importantly, smarter than the models on the market today. That makes it all the more important for the company to prioritize safety, as we still don’t know the ramifications of implementing powerful AI technology into so many aspects of our lives.