Elon Musk, others sign letter calling for a pause on AI experiments

A group of business leaders and academics has signed an open letter asking companies such as OpenAI, Google and Microsoft to pause training AI systems more powerful than GPT-4 so the industry can assess the risks the technology poses.

Twitter CEO Elon Musk, veteran AI computer scientist Yoshua Bengio and Emad Mostaque, the CEO of fast-growing start-up Stability AI, all signed the letter, along with around 1,000 other members of the business, academic and tech worlds, though the list was not independently verified.

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter said. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The list did not include senior executives at OpenAI or the Big Tech companies. Nor did it include prominent AI critics such as former Google AI researcher Timnit Gebru, who has been warning for years about the technology’s more immediate risks.

The letter was sponsored by the Future of Life Institute, a nonprofit aligned with the longtermism movement, a school of philosophy popular with tech billionaires that focuses on long-term risks to humanity. The group’s donors include Musk, Skype founder Jaan Tallinn and Vitalik Buterin, creator of the Ethereum cryptocurrency.

Experts have fretted about the risks of building supersmart AI for years, but the conversation has grown louder over the past six months as image generators and chatbots capable of eerily humanlike conversation have been released to the public. Interacting with newly released chatbots such as OpenAI’s GPT-4 has prompted many to declare that human-level AI is just around the corner. Other experts counter that the chatbots work by simply guessing which words are most likely to come next, based on training that included reading trillions of words online. The bots often devolve into bizarre conversational loops if prompted for long enough, and pass off made-up information as factual.
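To illustrate that word-guessing process in the abstract, here is a minimal Python sketch. The hand-written probability table and the generate function are invented for illustration only; real systems such as GPT-4 learn their probabilities over words from trillions of examples rather than reading them from a lookup table, and nothing here reflects OpenAI’s actual implementation.

```python
import random

# Toy probability table: given the words so far, what word is likely next?
# Invented for illustration; real models learn these values from training data.
NEXT_WORD_PROBS = {
    ("the",): {"cat": 0.5, "dog": 0.3, "letter": 0.2},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "dog"): {"ran": 0.6, "sat": 0.4},
}

def generate(prompt_words, max_words=2):
    """Repeatedly sample a likely next word and append it to the text."""
    words = list(prompt_words)
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(tuple(words))
        if probs is None:  # no known continuation for this context
            break
        candidates, weights = zip(*probs.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate(["the"]))  # e.g. "the cat sat"
```

The sketch also hints at why such systems can present made-up information as factual: the generator emits whatever sequence the probabilities favor, with no notion of whether the result is true.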

AI ethics researchers have said repeatedly in recent months that focusing on the risks of hypothetical human-level or superhuman AI distracts from the more immediate problems the technology has already created, such as infusing sexist and racist biases into technology products. Promoting the idea that AI is about to become sentient can be actively harmful by obscuring the damage these technologies are already causing, Gebru and others have argued.

AI has powered the recommendation algorithms of social media platforms for years, and critics have argued that, with human oversight removed, those algorithms have promoted anti-vaccine conspiracies, election denialism and hateful content.

The letter calls for a six-month pause on training new AI tools more powerful than OpenAI’s GPT-4, which the company released to the public earlier this month. It says AI labs and companies working in the space should instead use that time to develop shared safety guidelines for new AI technology that can be audited by outside experts.

The letter calls for governments to step in and enforce a “moratorium” on AI development if the companies don’t willingly agree to one.

Some U.S. lawmakers have called for new regulations on AI and its development, but no substantive proposal has advanced through Congress. The European Union released a detailed proposal for regulating AI in 2021, but it has not yet been adopted into law.