There is good reason to be concerned, according to the tech mogul.
Elon Musk, chief executive of Tesla and SpaceX, has issued a stark warning about artificial intelligence. According to him, AI poses a bigger threat to humankind than nuclear weapons.
He made the comments while speaking at the South by Southwest (SXSW) tech conference in Austin, Texas. He also suggested that a regulatory body should oversee the development of superintelligence – before it's too late.
Mark Zuckerberg, founder of Facebook, has called Musk's doomsday warnings "pretty irresponsible" and unnecessary. Meanwhile, Steven Pinker, a professor at Harvard University, has also criticized Musk's way of thinking.
His response? He thinks they are “fools”.
Musk commented:
“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are.
“This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”
Musk said:
“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”
To illustrate how drastically machine intelligence can improve, he cited AlphaGo Zero, an AI system created by London-based firm DeepMind, which learned to play the ancient Chinese strategy game Go "without any human intervention," according to reports.
He added:
“So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one.”
“I am not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public.
“It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane.
“And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.”