Safe Superintelligence (SSI), a new AI startup co-founded by former OpenAI chief scientist Ilya Sutskever and focused on developing safe superintelligent systems, has secured $1 billion in funding.
The company, founded just three months ago, is now valued at $5 billion. The scale of this investment underscores how central AI safety has become in today's rapidly evolving technology landscape.
SSI was founded in June 2024 by three co-founders: Ilya Sutskever, former OpenAI chief scientist, who serves as SSI's chief scientist; Daniel Gross, who previously led AI initiatives at Apple and now oversees SSI's computing power and fundraising; and Daniel Levy, a former OpenAI researcher, who holds the position of principal scientist.
The company’s mission is to develop safe AI systems that surpass human intelligence. Unlike many startups rushing to market, SSI plans to dedicate years to research and development before releasing any products. This approach aligns with growing concerns that unchecked AI development could produce systems that act against human interests or pose existential risks.
The $1 billion round was backed by top-tier venture capital firms, including NFDG, a16z, Sequoia, DST Global, and SV Angel. SSI plans to use the funding to acquire substantial computing power for advanced AI research and to recruit a team of top researchers and engineers.
Conclusion
The unprecedented funding and valuation of Safe Superintelligence represent a significant vote of confidence in the importance of AI safety research and development. As the company pursues its ambitious mission to create safe superintelligent AI systems, the industry will be watching closely to see how this substantial investment translates into groundbreaking advances in artificial intelligence.
With its exceptional founding team, focus on safety, and now substantial financial backing, SSI is poised to play a pivotal role in shaping the future of AI – a future that prioritizes safety alongside capability.