Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has announced the launch of a new artificial intelligence (AI) company named Safe Superintelligence Inc. (SSI). This new venture, which comes just a month after his departure from OpenAI, aims to develop advanced AI systems with a primary focus on safety.
Sutskever's departure from OpenAI in May 2024 was marked by significant internal conflict. He played a pivotal role in the controversial attempt to oust OpenAI CEO Sam Altman in November 2023, a move that led to his removal from the company's board following Altman's reinstatement. This period of turmoil highlighted Sutskever's growing concerns about the rapid pace of AI development and the associated safety risks.
During his tenure at OpenAI, Sutskever co-led the Superalignment team with Jan Leike. This team was dedicated to ensuring that AI systems remained beneficial and aligned with human values. However, the team was disbanded shortly after Sutskever and Leike's departures, with Leike moving on to lead a team at Anthropic, another AI firm focused on safety.
Safe Superintelligence Inc. (SSI) was officially unveiled on June 19, 2024, through a post on the social media platform X (formerly Twitter). In the announcement, Sutskever emphasized the company's singular focus on developing a safe superintelligence.
Joining Sutskever in this new venture are Daniel Gross and Daniel Levy. Gross, a former AI lead at Apple and a partner at Y Combinator, brings a wealth of experience in AI development and startup acceleration. Levy, who holds a doctorate in computer science from Stanford, was a researcher at OpenAI and has previously worked at tech giants including Microsoft, Meta, and Google.
The trio's combined expertise positions SSI to tackle the complex challenge of building AI systems that are both safe and highly capable. “Our team, investors, and business model are all aligned to achieve SSI,” the founders stated. “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.”
SSI's business model is designed to insulate safety, security, and progress from short-term commercial pressures. Unlike OpenAI, which began as a non-profit and later added a capped-profit arm to meet its financial demands, SSI is established as a for-profit company from the outset. The founders argue that this structure lets the company pursue its mission without the distractions of management overhead or product cycles.
The company has set up offices in Palo Alto, California, and Tel Aviv, Israel. These locations were chosen for their deep roots in the tech industry and their ability to attract top technical talent. SSI is actively recruiting engineers and researchers who are dedicated to the mission of developing safe superintelligence.
Sutskever has long been an advocate for addressing the complex issues surrounding AI safety. In a 2023 blog post co-authored with Jan Leike, he predicted that AI surpassing human intelligence could arrive within the decade and argued that steering and controlling such systems would require new scientific and technical breakthroughs. That conviction now anchors SSI's research agenda, which treats safety and capability as inseparable problems.
“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the company stated. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”
The launch of SSI comes at a critical juncture in the AI industry, as major tech companies compete for dominance in the generative AI field. The company's focus on safety sets it apart from other players in the market, which often prioritize rapid development and commercialization.
Sutskever's departure from OpenAI and the subsequent launch of SSI raise open questions about the future direction of AI research and development. As the race to build superintelligence intensifies, SSI aims to carve out a distinct niche by prioritizing safety over the speed and commercialization that drive much of the industry.
Ilya Sutskever's new venture, Safe Superintelligence Inc., represents a significant shift in the AI landscape. With a single goal, safe superintelligence, and a founding team with deep research and industry experience, SSI is positioned to take on one of the most important technical challenges of our time.

As the company begins its work from Palo Alto and Tel Aviv, its stated commitment to keeping safety ahead of capability will shape the discourse around AI safety and development in the years to come.