OpenAI is losing key safety leaders: Ilya Sutskever and Jan Leike have resigned or been forced out, the superalignment team has been hollowed out, and AGI Readiness efforts have been disbanded. Insiders describe a cultural shift toward commercialization under Sam Altman’s leadership, with resources for safety work significantly reduced. Restrictive exit agreements reportedly silenced dissent until whistleblowers came forward, amplifying concerns that safety priorities are being ignored.
As OpenAI accelerates the release of powerful models like GPT-4o, critical voices are disappearing, raising questions about the lab’s ability to manage the risks of AGI. Daniel Kokotajlo, a former member of the governance team, said he has lost trust in OpenAI’s leadership to manage AGI responsibly.