Ex-OpenAI Scientist Sutskever Launches SSI for AI Safety

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has teamed up with Daniel Levy, a former OpenAI engineer, and Daniel Gross, an investor previously associated with the startup accelerator Y Combinator, to launch a new venture called Safe Superintelligence, Inc. (SSI). As its name suggests, the company's sole mission is building advanced AI safely. SSI, with offices in Palo Alto and Tel Aviv, aims to advance artificial intelligence (AI) capabilities while keeping safety at the forefront. In their June 19 announcement, the founders emphasized that this singular focus frees them from distraction by management overhead or product cycles, and insulates their work from short-term commercial pressures.

Sutskever's path to founding SSI followed his departure from OpenAI on May 14, which came months after he played a contentious role in the company's internal turmoil, including the firing and subsequent reinstatement of CEO Sam Altman. Daniel Levy, another key figure, left OpenAI shortly after Sutskever. At OpenAI, Sutskever had co-led the Superalignment team alongside Jan Leike. Formed in July 2023, the team's goal was to steer and control AI systems that could eventually surpass human intelligence, known as artificial general intelligence (AGI).

Initially, OpenAI had dedicated 20% of its computing resources to the Superalignment team. Jan Leike also left the company in May and subsequently took a leading position at Anthropic, an Amazon-backed AI startup. Although OpenAI’s president, Greg Brockman, defended the company’s safety measures in a detailed online post, the Superalignment team was disbanded following the departure of its key members.

Concern over AI safety isn't limited to former OpenAI members. Numerous tech industry leaders have voiced apprehension about the risks posed by AI. Ethereum co-founder Vitalik Buterin, for instance, acknowledged the risks associated with AGI but argued that they are outweighed by those posed by corporate and military uses of AI.

Tesla CEO Elon Musk, an early backer of OpenAI, and Apple co-founder Steve Wozniak were among more than 2,600 technology leaders and researchers who called for a six-month pause on training advanced AI systems, citing the "profound risk" such technologies could pose and urging humanity to weigh the implications carefully.

The founders of SSI are taking tangible steps towards addressing these concerns. The company’s recent announcement included a call for engineers and researchers to join their mission, emphasizing the necessity of specialized expertise in their pursuit of safe AI development.

Despite the turbulence at OpenAI, the launch of SSI signals a proactive approach to both AI progression and precaution. By focusing exclusively on the safety and advancement of AI, SSI hopes to mitigate the risks that have alarmed some of the tech industry’s most influential figures.

In this landscape, Safe Superintelligence, Inc. represents an important and strategic initiative. With early steps focused on building a team of dedicated professionals, the company is placing a priority on creating AI systems that are not only advanced but also secure and well-regulated.
