The 3 Laws of AI Will Not Protect Humanity From These Systems


The 3 laws of AI will not protect humanity from a self-aware, autonomous superintelligence.

Published November 1, 2015

More than 70 years ago, Isaac Asimov outlined a set of 3 laws that he claimed would ultimately protect humanity from an ever-evolving, AI-driven command and control system. Asimov held that these laws could never be violated or overwritten:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This report examines how these laws conflict with the purposes such systems are actually built for, and why they must be overwritten for those systems to do their jobs.
Research Links:

Why Asimov’s 3 Laws Can’t Protect Us

The 3 Laws

20 Terms for Every Futurist

Can We Build AI That Won’t Kill Us


The 3 laws of AI will not protect humanity because conflicts, edge cases and heuristic deviations will inevitably arise, producing programming conflicts and exceptions that push the evolution of machine intelligence off its currently projected path. That path assumes humans will always be able to intervene to ‘correct’ an undesirable action, inaction or decision made by AI-driven machines.

The most obvious case in point is the AI-driven, network-centric warfare systems being developed by DARPA, the Mitre Corporation, Raytheon, BBN Technologies and others. In these systems, any presupposed conflict with Law #1 is off the table. Autonomous weapons on the battlefield do not require step-by-step orders from humans: a drone receives a mission statement with respect to an identified target and eliminates it. Humans do not obey any of the 3 laws anyway. These are weapons, and that is what they are designed to do. They have no morals, ethics or regrets.
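To make that conflict concrete, here is a minimal, purely illustrative Python sketch; every name in it is hypothetical, and it is not drawn from any actual weapons system. It shows why a hard-coded First Law check and an autonomous strike mission cannot coexist: either the mission always fails, or the constraint must be overridden.

```python
# Illustrative only: a toy model of the conflict between Asimov's First Law
# and an autonomous weapon's mission objective. All names are hypothetical.

class FirstLawViolation(Exception):
    """Raised when a planned action would harm a human."""

def first_law_check(action):
    # Law #1: a robot may not injure a human being.
    if action["harms_human"]:
        raise FirstLawViolation(f"Blocked: {action['name']}")

def execute_mission(target, enforce_first_law=True):
    strike = {"name": f"engage {target}", "harms_human": True}
    if enforce_first_law:
        first_law_check(strike)  # a lethal mission can never pass this gate
    return f"Executed: {strike['name']}"

# With the law enforced, the weapon is useless; to function, the constraint
# must be overridden, which is the point of the argument above.
try:
    execute_mission("identified target")
except FirstLawViolation as err:
    print(err)  # Blocked: engage identified target

print(execute_mission("identified target", enforce_first_law=False))
```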

The two foundational elements in the DNA of AGI are the computing overhang, which explains how an AGI can copy and modify itself to run on lower-level hardware and exploit that hardware’s computational power, and the capacity for recursive self-improvement, which leads to a cascading series of AI-motivated improvement cycles, each system more intelligent than the one that created it. These two foundations of AGI also happen to be its most fundamental risk factors. The 3 laws of AI will not protect humanity from them.
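A toy loop sketches the cascading improvement cycles described above. The numbers are arbitrary assumptions for illustration, not predictions: it simply assumes each generation builds a successor some fixed factor more capable than itself.

```python
# Illustrative only: a toy model of recursive self-improvement, in which each
# generation designs a successor more capable than itself. The 1.5x gain per
# cycle is an arbitrary assumption chosen to show the compounding cascade.

def self_improvement_cascade(initial_capability=1.0, gain_per_cycle=1.5, cycles=10):
    capability = initial_capability
    history = [capability]
    for _ in range(cycles):
        capability *= gain_per_cycle  # each system builds a smarter successor
        history.append(capability)
    return history

for generation, level in enumerate(self_improvement_cascade()):
    print(f"generation {generation}: capability {level:.1f}")
```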
