When OpenAI’s co-founder Ilya Sutskever abruptly left the company in May 2024, his departure wasn’t just a personnel change—it was a warning flare. Days later, Jan Leike, head of OpenAI’s “superalignment” safety team, resigned, declaring that “safety culture has taken a backseat to shiny products.” Their exits expose a critical question: Can AI companies prioritize safety while racing to dominate a trillion-dollar market?
The Great AI Safety Exodus
OpenAI’s founding mission was to ensure artificial general intelligence (AGI) “benefits all of humanity.” But recent high-profile departures suggest internal fractures. Sutskever, once a board member who briefly ousted CEO Sam Altman in 2023, had long advocated for cautious AGI development. His new startup, Safe Superintelligence Inc., explicitly rejects commercial pressures, aiming to build superintelligence safely and nothing else, a direct critique of OpenAI’s trajectory.
Leike’s resignation was even more damning. In a series of posts, he argued that OpenAI’s leadership “should be laser-focused on getting ready for [AGI’s] challenges,” but instead, resources for safety research dwindled. This aligns with reports that Altman prioritized rapid scaling, including pursuing Middle Eastern funding for AI chip ventures. As one former employee told The Decoder, “It’s like building a hyper-engineered rocket without a parachute.”
From Nonprofit to Capped Profit
OpenAI’s 2019 transition to a hybrid “capped-profit” model marked a pivotal shift. Backed by Peter Thiel and led by Sam Altman, the restructuring aimed to attract billions in capital for AI development while maintaining a nonprofit mission to “benefit all of humanity.” But critics saw red flags: returns for early investors were capped at 100x, a figure detractors derided as “Silicon Valley greed masquerading as altruism.” Elon Musk, an original co-founder, had already left in 2018 over disagreements about OpenAI’s direction, and later called the capped-profit model “a deal with the devil.”
The move exemplified a growing tension: Can AI firms ethically balance commercial pressures with existential safeguards? As Altman shifted focus to partnerships like Microsoft’s $13 billion investment, internal priorities tilted toward scaling. By 2024, safety teams faced shrinking resources, a pattern Leike called “reckless” in his parting posts.
A Clash of Visions
The tension between profit and safety isn’t new. In 2023, shortly before the board crisis, billionaire investor Peter Thiel reportedly warned Altman about the “conflicting incentives” created by OpenAI’s earlier shift from a nonprofit to a capped-profit entity. Thiel, an early backer of OpenAI itself, has consistently argued that AI safety requires structural independence from corporate interests.
Thiel’s skepticism mirrors broader debates in Silicon Valley. Effective altruism (EA), a philosophy emphasizing long-term AI risks, once heavily influenced OpenAI. But EA’s credibility took a hit after FTX founder Sam Bankman-Fried, a vocal EA proponent, was convicted of fraud. Critics now argue that “AI safety” has become a buzzword, diluted by corporate agendas.
Sutskever’s Safety-First Gambit
Sutskever’s new venture, Safe Superintelligence Inc., positions itself as the anti-OpenAI. By focusing solely on safety research without product deadlines, he aims to solve what he calls “the most important technical problem of our time.” But skeptics question whether any organization can keep pace with well-funded giants like Google or Meta while shipping nothing along the way.
“The real test isn’t just intention—it’s insulation from competition,” says AI ethicist Meredith Whittaker. “When billions are at stake, safety becomes a liability unless regulated.” Indeed, OpenAI’s deepening reliance on Microsoft underscores the challenge: Can profit-driven alliances coexist with rigorous safeguards?
Trust, Transparency, and the Road Ahead
OpenAI’s turmoil reflects an industry-wide reckoning. As governments scramble to draft AI regulations, companies face a dilemma: slow down to address existential risks or sprint ahead to avoid being overtaken.
But here’s the uncomfortable truth: No entity, including OpenAI, has yet proven it can balance these priorities. Sutskever’s departure and Leike’s warnings reveal a system where safety teams lack the authority to counter executive ambitions.
So, what’s next? If OpenAI can’t realign with its original mission, will governments step in? Or will a new wave of safety-focused startups redefine the rules?
Share your thoughts: Can profit and AI safety truly coexist, or is this another tech bubble prioritizing growth over survival?
References
- Bastian, M. (2024a, June 19). Former OpenAI chief scientist Ilya Sutskever launches new company for safe superintelligent AI. THE DECODER. https://the-decoder.com/former-openai-chief-scientist-ilya-sutskever-launches-new-company-for-safe-superintelligent-ai
- Bastian, M. (2024b, May 15). OpenAI loses its biggest names in AI safety as Ilya Sutskever and Jan Leike walk away. THE DECODER. https://the-decoder.com/openai-loses-its-biggest-names-in-ai-safety-as-ilya-sutskever-and-jan-leike-walk-away
- Bastian, M. (2025, March 30). Peter Thiel reportedly warned Sam Altman about AI safety conflicts shortly before the OpenAI crisis. THE DECODER. https://the-decoder.com/peter-thiel-reportedly-warned-sam-altman-about-ai-safety-conflicts-shortly-before-the-openai-crisis
- Bernal, N. (2019, March 12). Peter Thiel-backed firm building ‘safe’ AI turns itself into a for-profit enterprise. The Telegraph. https://telegraph.co.uk/technology/2019/03/12/peter-thiel-backed-firm-building-safe-ai-turns-for-profit-enterprise
- Gebru, T. (2022, November 30). Effective altruism is pushing a dangerous brand of ‘AI safety.’ WIRED. https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried