When A.I. Rules…

by Tyler Durden

Elon Musk unveiled his apocalyptic vision of the world a few weeks ago…

“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he said.


“AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”


“Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” he continued.


“It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

And since then numerous futurists have prognosticated on whether AI is mankind’s salvation or eventual downfall. Facebook’s Mark Zuckerberg embraces it, while Stephen Hawking considers this the most dangerous moment in history as AI and automation are set to decimate jobs and change the social contract.

However, as Mike Wehner writes via BGR.com, when AI rules, one rogue programmer could end the human race…

The idea of small groups of humans having control over some of the most powerful weapons ever to be built is scary, but it’s the reality we live in. In the not-so-distant future, that incredible power and responsibility could be handed over to AI and robotic systems, which are already in active development. In a pair of open letters to the prime ministers of both Australia and Canada, hundreds of AI researchers and scientists are pleading for that not to happen.

The fear, they say, is that removing the human element from life-and-death decisions could usher in a destructive age that ultimately spells the end of mankind. These AI weapons systems are, as the researchers put it, “weapons of mass destruction” which must be banned outright before they can do any serious damage.

“Delegating life-or-death decisions to machines crosses a fundamental moral line – no matter which side builds or uses them,” the letter explains.


“Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage autonomous weapons goes to the core of our humanity.”

In a setting where computers have the ultimate say in whether or not to engage in hostile acts — even under the guise of defending their own territories or protecting the populations they are programmed to protect — conflicts could escalate much faster than humans have ever seen. Weeks, months, or even years of posturing and diplomacy could turn into mere minutes or even seconds, with missiles flying before humans can even begin to intervene. And then, of course, there’s the issue of the AI being manipulated in unforeseen ways.

“These will be weapons of mass destruction,” the scientists say.


“One programmer will be able to control a whole army. Every other weapon of mass destruction has been banned: chemical weapons, biological weapons, even nuclear weapons. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.”

It’s a frightening thought, but it hasn’t stopped military contractors from exploring the possibility of AI-controlled weapons and defense systems. This could be yet another way mankind engineers its own destruction.

