OpenAI Shuffles Safety Strategy, Dissolves Team Amid Leadership Exits
OpenAI has disbanded a team dedicated to ensuring the safety of potential future ultra-capable AI systems, following the departure of the group's two leaders, including co-founder and chief scientist Ilya Sutskever.
Sutskever, a highly respected researcher, announced on Tuesday that he was leaving OpenAI after earlier disagreements with CEO Sam Altman over the pace of AI development.
Jan Leike, another OpenAI veteran, announced his departure shortly afterward in a social media post, saying the superalignment team had been fighting for resources: “Over the past few months my team has been sailing against the wind.”
The superalignment team was established to address long-term threats. When OpenAI announced its formation last July, it said the group would focus on controlling and ensuring the safety of future AI software that could surpass human intelligence.
Rather than maintaining the so-called superalignment team as a separate entity, OpenAI is now integrating the group more deeply into its broader research efforts to help meet its safety objectives, the company told Bloomberg News.