
Former OpenAI Chief Scientist Launches ‘Safe’ Rival AI Lab

Renowned AI researcher Ilya Sutskever, former chief scientist at OpenAI, has launched a new AI research firm that will focus on an area he and other critics say was a blind spot of his former employer: safety.

His new company, Safe Superintelligence Inc. (SSI), is co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked at OpenAI. The company’s sole objective is to advance AI safety and capabilities in tandem.

“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” the new firm tweeted on Wednesday. “We will do it through revolutionary breakthroughs produced by a small cracked team.”

I am starting a new company: https://t.co/BG3K3SI3A1

— Ilya Sutskever (@ilyasut) June 19, 2024

Sutskever’s departure from OpenAI earlier this year followed internal tensions, including concerns that safety was taking a backseat to profits. With SSI, he aims to avoid the distractions of management overhead and product cycles that larger AI companies often face.

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” an official statement on the SSI website reads. “This way, we can scale in peace.”

SSI will be a purely research-focused organization, in contrast to OpenAI, whose current business model includes commercializing its AI models. As its name implies, SSI’s first and only product will be a safe superintelligence.

“By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever explained to Bloomberg.

The company’s mission is an expansion of Sutskever’s focus on “superalignment” with human interests, concentrating purely on development instead of the “shiny products” that he said were a distraction at OpenAI.

Sutskever suggests that SSI’s AI systems, when ready, will be more general-purpose and expansive in their abilities than current large language models. The ultimate goal is to create a “safe superintelligence” that will not harm humanity and will operate based on values like liberty and democracy.

“At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” he told Bloomberg.

SSI will be headquartered in the United States and Israel and is already hiring.

“We offer an opportunity to do your life’s work and help solve the most important technical challenge of our age,” the new company tweeted.

This development comes after OpenAI disbanded its superalignment team, which was responsible for long-term AI safety, following the departure of key members.

Sutskever was part of the OpenAI board that initially fired CEO Sam Altman over safety concerns, only for Altman to be reinstated a few days later with a new board he controlled. After those events, a new safety team was formed under Altman’s control, while many of the members previously responsible for developing safety practices were fired, reassigned, or pushed to resign.

As an example, Leopold Aschenbrenner, a former OpenAI safety researcher, recently criticized the company’s security practices as “egregiously insufficient.” He is currently working on an investment firm focused on AGI development, which he co-founded alongside Patrick Collison, John Collison, Nat Friedman, and Gross, who also co-founded SSI with Sutskever.

Besides Aschenbrenner, Jan Leike, former head of OpenAI’s alignment team, also left the company alongside Sutskever and criticized it for prioritizing profitability over safety. Shortly after his resignation, he announced that he would be joining rival firm Anthropic, which was itself founded by ex-OpenAI researchers worried about the company’s approach to safe AI development.

Gretchen Krueger, a policy researcher at OpenAI, also left the company, citing the same concerns shared by Leike and Sutskever.

We need to do more to improve foundational things like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.

— Gretchen Krueger (@GretchenMarina) May 22, 2024

Former board members have further accused Sam Altman of fostering a “toxic culture” in the workplace. The new board members challenged those accusations, and OpenAI has since released employees from non-disparagement agreements and removed contentious clauses from departure paperwork following public outcry.

Edited by Ryan Ozawa.
