Ever since the advent of generative AI tools like ChatGPT, experts and governments have raised concerns about unregulated development in the field. Now, in an effort to address those concerns, OpenAI is forming a dedicated team to manage the potential risks posed by superintelligent AI.
The team, co-led by OpenAI co-founder and chief scientist Ilya Sutskever and Jan Leike, head of OpenAI’s alignment team, will develop methods to handle the hypothetical scenario in which superintelligent AI systems surpass human intelligence and begin operating autonomously. Although that scenario may seem far-fetched, many experts argue that superintelligence could arrive within the next decade, underscoring the importance of developing safeguards today.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue,” reads the blog post announcing the decision.
The Superalignment team
The Superalignment team will have access to roughly 20% of the compute OpenAI has secured to date, along with scientists and engineers from OpenAI’s previous alignment team, to build a “human-level automated alignment researcher”: an AI system that would primarily assist in evaluating other AI systems and conducting alignment research.
“As we make progress on this, our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study and develop better alignment techniques than we have now,” reads another blog post.
An AI system to check other AIs
While the idea of building an AI system to check other AIs may seem unusual, OpenAI argues that AI systems can make faster progress on alignment research than humans can. This approach would also free human researchers to focus on reviewing alignment research produced by AI rather than generating all of it themselves.
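To make the idea concrete, the pattern resembles what practitioners often call “LLM-as-judge”: one model produces an answer, and a second model critiques it. Below is a minimal sketch using the OpenAI Python SDK. The model names, prompts, and the evaluate_response helper are illustrative assumptions for this article, not OpenAI’s actual alignment tooling, which has not been published.

```python
# Illustrative sketch: one model critiques another model's output.
# Requires the openai package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def evaluate_response(prompt: str, candidate_answer: str) -> str:
    """Ask an evaluator model to critique another model's answer."""
    review = client.chat.completions.create(
        model="gpt-4",  # evaluator model (illustrative choice)
        messages=[
            {"role": "system",
             "content": "You are an evaluator. Point out factual errors, "
                        "unsafe advice, or misaligned behavior in the answer."},
            {"role": "user",
             "content": f"Prompt: {prompt}\n\nAnswer under review: {candidate_answer}"},
        ],
    )
    return review.choices[0].message.content

# Generate an answer with one model, then have another model review it.
question = "How should I store user passwords?"
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

print(evaluate_response(question, answer))
```

The division of labor is the point: the human researcher reads the critique rather than writing it, which is the time-saving OpenAI describes, scaled up to alignment research itself.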
That said, OpenAI acknowledges the potential risks of this approach, and the company says it plans to share a roadmap outlining its research direction in the future.