OpenAI Dissolves Superalignment Team Amid Concerns Over Safety and Societal Impact
The Artificial Intelligence Podcast (linktr.ee)
OpenAI, a leading artificial intelligence research organization, has dissolved its Superalignment team, which aimed to mitigate long-term risks associated with advanced AI systems. The team was established just a year ago and had been promised 20% of the company's computing power over four years. The departure of two high-profile leaders, Ilya Sutskever and Jan Leike, who were instrumental in shaping the team's mission, has raised concerns about the company's priorities regarding safety and societal impact. Leike believes more …