I don’t think AI will become an existential threat to humanity. I’m not saying it’s impossible, but we would have to be very stupid to let that happen. Others have claimed that we would have to be very smart to prevent it from happening, but I don’t think that’s true. If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them unlimited power to destroy humanity. And contrary to the video, use your heads, guys. Remember that whatever the elites say, things are always quite the opposite. Hawking and the rest of these three stooges are fearmongers. Always remember: the greatest threat to humanity is humanity itself. It’s not from AI but from “I AM” (the human).
~ Galactic human ~
In The AFR I explain why Microsoft’s Bill Gates, Tesla’s Elon Musk, celebrated Cambridge University physicist Stephen Hawking, and Nobel Prize-winning scientists think the single greatest existential threat to humanity is evolving a successor to our species in the form of “strong” artificial intelligence.
Pity the poor meat bags. They are doomed if a growing number of scientists, engineers and artists are to be believed.
Prof Stephen Hawking has joined a roster of experts worried about what follows when humans build a device or write some software that can properly be called intelligent.
Such an artificial intelligence (AI), he fears, could spell the end of humanity.
Similar worries were voiced by Tesla boss Elon Musk in October, when he declared rampant AI to be the “biggest existential threat” facing mankind. He wonders whether we will meet our end beneath the heels of a cruel and calculating artificial intelligence.
So too does the University of Oxford’s Prof Nick Bostrom, who has said an AI-led apocalypse could engulf us within a century.