Now that OpenAI's Superalignment Team Has Been Disbanded, Who's Preventing AI from Going Rogue? We spoke to an AI expert who says safety and innovation are not separate things that must be balanced; they go hand in hand.

By Sherin Shibu Edited by Melissa Malamut

Key Takeaways

  • Former OpenAI research lead Jan Leike and chief scientist Ilya Sutskever resigned last week.
  • Leike stated that it was because he felt safety took a backseat to new products at OpenAI.
  • One AI expert tells "Entrepreneur" that safety and innovation are not separate things that need to be balanced — they should go hand in hand.

How do we prevent AI from going rogue?

OpenAI, the $80 billion AI company behind ChatGPT, just dissolved the team tackling that question — after the two executives in charge of the effort left the company.

The AI safety controversy comes less than a week after OpenAI announced a new AI model, GPT-4o, with more functionality — and a voice eerily similar to Scarlett Johansson's. The company paused the rollout of that particular voice on Monday.

Sahil Agarwal, a Yale PhD in applied mathematics who co-founded and currently runs Enkrypt AI, a startup focused on making AI less of a risky bet for businesses, told Entrepreneur that innovation and safety are not separate things that need to be balanced, but rather two things that go hand in hand as a company grows.

"You're not stopping innovation from happening when you're trying to make these systems more safe and secure for society," Agarwal said.

OpenAI Exec Raises Safety Concerns

Last week, the former OpenAI chief scientist and co-founder Ilya Sutskever and former OpenAI research lead Jan Leike both resigned from the AI giant. The two were tasked with leading the superalignment team, which ensures that AI is under human control, even as its capabilities grow.

While Sutskever said in his parting statement that he was "confident" OpenAI would build "safe and beneficial" AI under CEO Sam Altman's leadership, Leike said he left because he felt OpenAI did not prioritize AI safety.

"Over the past few months my team has been sailing against the wind," Leike wrote. "Building smarter-than-human machines is an inherently dangerous endeavor."

Leike also said that "over the past years, safety culture and processes have taken a backseat to shiny products" at OpenAI and called for the ChatGPT-maker to put safety first.

OpenAI dissolved the superalignment team that Leike and Sutskever led, the company confirmed to Wired on Friday.

[Image: Sam Altman, chief executive officer of OpenAI. Photographer: Dustin Chambers/Bloomberg via Getty Images]

Altman and OpenAI president and co-founder Greg Brockman released a statement in response to Leike on Saturday, pointing out that OpenAI has raised awareness about the risks of AI so that the world can prepare for them, and that the company has deployed its systems safely.

How Do We Prevent AI from Going Rogue?

Agarwal says that as OpenAI tries to make ChatGPT more human-like, the danger is not necessarily a super-intelligent being.

"Even systems like ChatGPT, they are not implicitly reasoning by any means," Agarwal told Entrepreneur. "So I don't view the risk as from a super-intelligent artificial being perspective."

The problem is that as AI becomes more powerful and multifaceted, the possibility of implicit bias and toxic content increases, and the AI becomes riskier to implement, he explained. By adding more ways to interact with ChatGPT, from images to video, OpenAI has to think about safety from more angles.

Agarwal's company released a safety leaderboard earlier this month that ranks the safety and security of AI models from Google, Anthropic, Cohere, OpenAI, and more.

They found that the new GPT-4o model potentially contains more bias, and can possibly produce more toxic content, than its predecessor.

"What ChatGPT did is it made AI real for everyone," Agarwal said.

Sherin Shibu

Entrepreneur Staff

News Reporter

Sherin Shibu is a business news reporter at Entrepreneur.com. She previously worked for PCMag, Business Insider, The Messenger, and ZDNET as a reporter and copyeditor. Her areas of coverage encompass tech, business, strategy, finance, and even space. She is a Columbia University graduate.
