Ilya Sutskever, a co-founder of OpenAI who was involved in a failed effort to push out CEO Sam Altman, said he’s starting a safety-focused artificial intelligence company. Sutskever, a respected AI researcher who left the ChatGPT maker last month, said in a social media post on Wednesday that he’s creating Safe Superintelligence Inc. with two co-founders.
The company’s sole focus is on safely developing “superintelligence” – a reference to AI systems that are smarter than humans. Sutskever and his co-founders, Daniel Gross and Daniel Levy, said in a statement that Safe Superintelligence will be “insulated from short-term commercial pressures” and dedicated to AI safety and security.
Sutskever was part of a group that unsuccessfully tried to oust Altman last year, leading to a period of internal turmoil at OpenAI over whether the company was prioritizing business opportunities over AI safety. When he left OpenAI, Sutskever said he had plans for a “very personally meaningful” project, though he offered no details at the time.
Days after Sutskever’s departure, Jan Leike, the co-leader of his team, also resigned and criticized OpenAI for letting safety “take a backseat to shiny products.” OpenAI later announced the formation of a safety and security committee, but it has been staffed mainly with company insiders.
At OpenAI, Sutskever jointly led a team focused on safely developing better-than-human AI known as artificial general intelligence, or AGI. The new venture is an American company with roots in Palo Alto, California, and Tel Aviv, where the founders say they can recruit top technical talent.
The company’s business model is designed to insulate that safety and security work from short-term commercial pressures, according to the statement from Sutskever and his co-founders. They said Safe Superintelligence will not be distracted by “management overhead or product cycles,” allowing it to focus solely on developing safe superintelligence.
The move by Sutskever comes as regulators and policymakers around the world are grappling with the potential risks and benefits of advanced AI systems. In the US, antitrust enforcers have decided to investigate the roles of Microsoft, Nvidia, and OpenAI in the AI boom, according to people familiar with the pending actions.
Sutskever’s departure from OpenAI and the creation of Safe Superintelligence underscore the ongoing debate over how to prioritize safety and security as powerful AI technologies advance.