When Robots Do the Crime, Who Does the Time?

By George Khoury, Esq. on March 14, 2017 | Last updated on March 21, 2019

While we're not quite living in a world where the Terminator or RoboCop walks the streets, autonomous vehicles and other robotic devices are starting to see daylight, which means lawmakers need to kick it into high gear. When it comes to civil liability, the owner, programmer, and/or maker of a robot will likely wind up paying for any damages caused. But what about when a crime is committed?

When a robot commits a crime, criminal liability will depend on whether the robot was controlled or programmed to take a specific action. If a robot uses artificial intelligence to learn and act on its own, the question of criminal liability becomes more complicated.

Programmed and Controlled Robots

When it comes to programmed robots, or robots that are directly controlled by users, the question of criminal liability is easier to resolve: whoever was controlling the robot will be subject to criminal liability. If the robot is being controlled directly, the criminal act will likely be treated the same as if the individual in control had performed it personally. For example, drone operators have faced criminal prosecution for reckless flying after their crashes caused injuries.

However, when it comes to a programmed robot acting autonomously, criminal liability is less cut and dried. For starters, whether the action was intentional, reckless, or accidental can have a significant impact on the severity of the crime charged, and even on whether criminal charges will be brought at all.

While programming a robot to kill or steal is obviously criminal, many situations are more nuanced. Consider, for example, a drone that is programmed to fly a specific route but is hit by a bird, crashes, and injures a person. Although this may seem accidental, it could be considered reckless if the owner failed to include crash-avoidance software. If the victim is seriously injured or dies, charges of criminal negligence, or even manslaughter, could result.

Artificially Intelligent Robots

When it comes to truly autonomous, artificially intelligent robots, criminal liability gets murkier. Criminal intent is a common element of many crimes, yet a robot's intent may not be ascertainable, and it may not be attributable to an operator, as there is none.

For example, a Swiss art group's AI darknet shopping bot that bought illegal party drugs online managed to escape prosecution, as did the artists and the gallery, though likely only due to the novelty of the situation. A preliminary question is whether the robot is truly autonomous, and if so, whether it acted intentionally or even intended to cause harm. As the law develops around these issues, lawmakers could consider treating robots like wild animals or pets and holding owners strictly liable for their robots' actions.

Interestingly, new DOT regulations state that the computer system in a driverless car is considered the driver for legal purposes. But despite driverless cars seemingly being autonomous, they are still likely to fall under the programmed or controlled robot framework. Thankfully, the new regulations don't go so far as to grant AI robots legal personhood, nor do they insulate manufacturers from criminal or civil liability if their products kill, injure, or pose a significant risk to public health or safety.
