OpenAI co-founder and former chief scientist Ilya Sutskever, alongside Cue co-founder Daniel Gross and former OpenAI engineer Daniel Levy, has officially announced the launch of Safe Superintelligence Inc. (SSI), a startup focused on creating a safe yet powerful AI system.
In an official statement, SSI claims to be the world’s first straight-shot SSI lab that is guided by one goal and one product – a safe superintelligence.
Back in November 2023, Sutskever was among the OpenAI board members who moved to oust OpenAI CEO Sam Altman over AI safety concerns.
OpenAI is the AI research and deployment company behind ChatGPT, the popular chatbot and virtual assistant now used by millions of people for everything from answering questions and explaining concepts to writing code.
SSI differentiates itself from OpenAI by developing capabilities and safety guardrails side by side, treating both as technical problems for organizations building AI systems to solve in tandem.
“This way, we can scale in peace,” the company stated. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”
With this singular focus on delivering safe superintelligence, the startup aims to avoid the distractions of management overhead and product cycles, and its business model is structured so that safety, security, and progress are all “insulated from short-term commercial pressures.”
Registered as an American company, SSI will actively recruit top technical talent in Palo Alto and Tel Aviv, where it has its offices.