AI more dangerous than nukes, says Musk

Austin - Facebook founder Mark Zuckerberg said Musk's doomsday AI scenarios are unnecessary and "pretty irresponsible".

By CNBC


Published: Thu 15 Mar 2018, 10:44 PM

Last updated: Fri 16 Mar 2018, 1:23 AM

Calling artificial intelligence more dangerous than nuclear warheads, Tesla and SpaceX boss Elon Musk said there needs to be a regulatory body overseeing the development of super intelligence, CNBC reported.
"I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me," the billionaire tech entrepreneur told the South by Southwest tech conference in Austin, Texas. "It's capable of vastly more than almost anyone knows and the rate of improvement is exponential."
Some have called his tough talk fear-mongering. Facebook founder Mark Zuckerberg said Musk's doomsday AI scenarios are unnecessary and "pretty irresponsible", and Harvard professor Steven Pinker also recently criticised Musk's tactics. Musk, however, is resolute, calling those who push back against his warnings fools at the tech conference, CNBC reported.
"The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are," said Musk. "This tends to plague smart people. They define themselves by their intelligence and they don't like the idea that a machine could be way smarter than them, so they discount the idea - which is fundamentally flawed."
Musk pointed to machine intelligence playing the ancient Chinese strategy game Go to demonstrate the rapid growth in AI's capabilities. For example, the London-based company DeepMind, acquired by Google in 2014, developed an artificial intelligence system, AlphaGo Zero, that learned to play Go without any human intervention, simply from randomised play against itself. The Alphabet-owned company announced this development in a paper published in October 2017.
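To give a sense of what "learning from randomised play against itself" means, here is a toy sketch of the idea. This is not DeepMind's method (AlphaGo Zero combines deep neural networks with tree search); it is a minimal, hypothetical illustration using the far simpler game of Nim, where an agent improves purely by playing randomised games against itself and reinforcing moves that led to wins. All names and parameters below are illustrative assumptions.

```python
import random

random.seed(0)

# Toy self-play sketch (NOT AlphaGo Zero): one-heap Nim, take 1-3
# objects per turn, whoever takes the last object wins.
HEAP = 10
MOVES = (1, 2, 3)

# value[(heap, move)] -> running score for making `move` at `heap`
value = {(h, m): 0.0 for h in range(1, HEAP + 1) for m in MOVES if m <= h}

def choose(heap, explore):
    """Pick a move: mostly greedy, sometimes random (exploration)."""
    legal = [m for m in MOVES if m <= heap]
    if explore and random.random() < 0.3:
        return random.choice(legal)
    return max(legal, key=lambda m: value[(heap, m)])

def self_play_game():
    """Play one game against itself; return the winner and move history."""
    heap, player, history = HEAP, 0, []
    while heap > 0:
        move = choose(heap, explore=True)
        history.append((player, heap, move))
        heap -= move
        winner = player          # whoever moved last took the final object
        player = 1 - player
    return winner, history

# Self-play loop: nudge each move's score toward the game outcome.
for _ in range(20000):
    winner, history = self_play_game()
    for player, heap, move in history:
        reward = 1.0 if player == winner else -1.0
        value[(heap, move)] += 0.01 * (reward - value[(heap, move)])

def best(h):
    """Greedy move at heap size h after training."""
    return max((m for m in MOVES if m <= h), key=lambda m: value[(h, m)])

# Known Nim theory: from heap h, the winning move is to take h % 4,
# leaving the opponent a multiple of 4.
print(best(5), best(6), best(7))
```

After enough self-play games the greedy policy recovers the known optimal strategy (take 1 at heap 5, 2 at heap 6, 3 at heap 7) without ever being told the rules of good play, which is the property Musk's remarks refer to, scaled down enormously.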
Musk worries AI's development will outpace our ability to manage it in a safe way.
"So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one," CNBC quoted him as saying.
"I am not normally an advocate of regulation and oversight - I think one should generally err on the side of minimising those things - but this is a case where you have a very serious danger to the public," he told the conference.
"It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane," he said at SXSW.
"And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane."
In his analysis of the dangers of AI, Musk differentiates between case-specific applications of machine intelligence like self-driving cars and general machine intelligence, which he has described previously as having "an open-ended utility function" and having a "million times more compute power" than case-specific AI.