Ex-OpenAI Scientist Sutskever Launches SSI for AI Safety
Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has teamed up with Daniel Levy, a former OpenAI engineer, and Daniel Gross, an investor previously associated with the startup accelerator Y Combinator, to launch a new venture called Safe Superintelligence, Inc. (SSI). As its name indicates, the company has a single purpose. SSI, with offices in Palo Alto and Tel Aviv, aims to advance artificial intelligence (AI) while ensuring that its technologies remain safe. In their announcement on June 19, the founders emphasized that this singular focus frees them from managerial overhead and product cycles and insulates their operations from short-term commercial pressures.
Ilya Sutskever’s journey toward founding SSI included a notable exit from OpenAI on May 14, which followed his contentious role in the company’s dynamics, including the firing and subsequent return of CEO Sam Altman. Similarly, Daniel Levy, another key figure, departed OpenAI shortly after Sutskever’s exit. Together with Jan Leike, Sutskever had co-led OpenAI’s Superalignment team. This team was formed in July 2023 with the goal of directing and managing AI systems that could eventually surpass human intelligence, known as artificial general intelligence (AGI).
Initially, OpenAI had dedicated 20% of its computing resources to the Superalignment team. Jan Leike also left the company in May and subsequently took a leading position at Anthropic, an Amazon-backed AI startup. Although OpenAI’s president, Greg Brockman, defended the company’s safety measures in a detailed online post, the Superalignment team was disbanded following the departure of its key members.
This concern for AI safety isn’t isolated to former OpenAI members. Numerous tech industry leaders have expressed apprehension about the potential risks posed by AI. For instance, Ethereum co-founder Vitalik Buterin weighed in on the issue, acknowledging the risks associated with AGI but suggesting that these risks are surpassed by those posed by corporate or military use of AI.
Elon Musk, the CEO of Tesla and a former supporter of OpenAI, along with Steve Wozniak, the co-founder of Apple, was among over 2,600 technology leaders and researchers who advocated for a six-month pause in AI system training. They highlighted the “profound risk” these advanced technologies could present, urging humanity to carefully consider these implications.
The founders of SSI are taking tangible steps towards addressing these concerns. The company’s recent announcement included a call for engineers and researchers to join their mission, emphasizing the necessity of specialized expertise in their pursuit of safe AI development.
Despite the turbulence at OpenAI, the launch of SSI signals a proactive approach to both AI progression and precaution. By focusing exclusively on the safety and advancement of AI, SSI hopes to mitigate the risks that have alarmed some of the tech industry’s most influential figures.
In this landscape, Safe Superintelligence, Inc. represents an important and strategic initiative. With early steps focused on building a team of dedicated professionals, the company is placing a priority on creating AI systems that are not only advanced but also secure and well-regulated.
29 thoughts on “Ex-OpenAI Scientist Sutskever Launches SSI for AI Safety”
It’s exciting to see top minds in AI coming together to prioritize safety. SSI is a step in the right direction!
Super excited about this new venture by SSI! With brilliant minds like Ilya and Daniel, we’re in good hands.
Kudos to SSI for addressing AI safety concerns head-on! This is the kind of leadership the tech industry needs.
Didn’t they leave OpenAI on bad terms? This doesn’t inspire confidence.
Hats off to SSI for concentrating on AI safety! It’s crucial for the technology’s future.
How many times do these tech veterans need to rebrand themselves?
SSI’s launch is a testament to the importance of balancing innovation with safety. Way to go!
SSI looks like an amazing step forward in AI progression with a focus on safety. Hats off to Ilya and the team!
Sounds like a major distraction, just another startup trying to market ‘safety’ as a pitch.
The formation of SSI is timely and important. Ensuring AI safety will benefit everyone! Great move.
A truly commendable effort to ensure AI safety! Excited to see the groundbreaking work SSI will achieve.
Absolutely thrilled about SSI! Their focus on safe AI is what the world needs right now.
Launching in two tech hubs? More like setting themselves up for the same old problems.
Don’t trust them after the OpenAI drama. This feels like a rebound project.
This is a vision for a safer future with AI. Kudos to SSI for leading the way!
SSI is setting the bar high by focusing on AI safety. Looking forward to seeing their achievements!
Great, another tech venture in Palo Alto and Tel Aviv. More tech, more issues.
Impressed by the initiative and vision of SSI. Here’s to a safer future with AI!
Powerful combination of brains and dedication at SSI. Future of AI looks promising and secure!
Sounds like more of the same old tech nonsense. When will it end?
I’m really excited about SSI! Ensuring the safety of AI is crucial for our future. Great team with lots of promise!
Incredible initiative! SSI’s commitment to AI safety is inspiring. Exciting times ahead!
Amazing combination of talent at SSI. Ensuring AI safety is a much-needed priority, and they’re tackling it head-on!
How many more AI startups do we really need? Feels like a money grab.
I’m skeptical. Are they actually focused on safety, or just saying it for the headlines?
Is this just another hype company with fancy names and no substance?
Fantastic to see key figures in AI focusing on safety and regulation. Go SSI!
Truly inspirational work by the SSI team. Their focus on safety is exactly what we need in AI advancements.
Such an inspiring move! SSI is bound to make significant strides in safe AI development.