Musk, experts warn of ‘risks to society’ in calling for AI development pause

Elon Musk and a group of technology experts have called for a six-month pause in the development of advanced artificial intelligence (AI) systems, warning that such systems could pose “risks to society.”

The group said in an open letter that AI systems with human-competitive intelligence can put “society and humanity” at risk, as shown by research and acknowledged by top AI labs, and that such systems should be planned for and managed with commensurate care and resources.

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” they said.

Notable signers include Apple co-founder Steve Wozniak and entrepreneur and former 2020 presidential candidate Andrew Yang.

The signers said that modern AI systems are becoming competitive with humans at general tasks, raising questions about whether AI should be allowed to “flood our information channels with propaganda and untruths,” whether all jobs should be automated away, and whether society should risk “loss of control of our civilization.”

The letter states that powerful AI systems should only be created once society can be sure that they will produce positive effects and manageable risks.

The group said all AI labs should pause the training of systems more powerful than GPT-4, the latest model from ChatGPT maker OpenAI, for at least six months. It said the pause should be public and verifiable, and that governments should step in and impose a moratorium if a pause cannot be enacted quickly.

The letter states that AI labs should use the pause to jointly develop and implement shared safety protocols, audited and overseen by independent outside experts.

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt,” the letter reads. “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

The group also called on AI developers to work with policymakers to accelerate the development of regulations to govern the technology, which they said should include measures like new regulatory authorities charged with overseeing AI, oversight and tracking of AI systems, liability for harm from AI and “well-resourced institutions” to deal with economic and political disruptions.

The group said humanity has previously paused development of other technologies with potentially “catastrophic” effects on society, such as human cloning, germline modification research and eugenics, and argued it should do so again with AI.
