Dallas Shooting: Is It Ethically Correct to Allow Robots to Take Human Life?

Jul 10, 2016 @ 20:39

Dallas police used a robot to kill a shooter on Thursday night. Five police officers also died in the incident, but the media's main focus fell on the "killer robot" used by the Dallas police. The robot, a Northrop Grumman Remotec Andros, had been used before, but only for bomb disposal. This is the first time police have used a lethal weapon mounted on a robot to kill a man: the Dallas police used the Andros robot to deliver an explosive that killed the suspect.

Using a robot in this manner has opened a debate: "Is it ethically correct to kill a human using a robot?"

Dallas Police Chief David Brown defended the decision to use a robot to kill the shooter. "We saw no other option but to use our bomb robot," he said at a press conference. "Other options would have exposed our officers to grave danger." It is not known exactly what materials were immediately available to the Dallas police during the standoff, but using the robot kept officers safe.

What is the Issue?

  • Use of lethal weapons on a robot

Although the police chief defended the decision, and it is not for us to say whether the police acted rightly or wrongly, they could have mounted non-lethal weapons on the robot to subdue and arrest the shooter instead of killing him with a lethal weapon.


Robotic weapons, which are unmanned, are often divided into two categories based on the degree of human involvement in their actions:

  1. Human-in-the-Loop Weapons: robots that can select targets and deliver force only with a human command – remotely operated robots;
  2. Human-out-of-the-Loop Weapons: robots that are capable of selecting targets and delivering force without any human input or interaction – fully autonomous robots.

Mounting lethal weapons on fully autonomous robots is widely condemned as a violation of the fundamental human right to life. The International Human Rights Clinic (IHRC), which defends the rights of people worldwide, examined these issues in "Shaking the Foundations: The Human Rights Implications of Killer Robots". The report raises serious concerns about the use of fully autonomous weapons in law enforcement operations, where human rights law would apply. Such weapons would be unable to evaluate the need for, and the proportionality of, using deadly force the way human beings do. They could not be pre-programmed to handle all law enforcement scenarios. And they would lack the human qualities, such as judgment and empathy, that enable police to avoid unlawful arbitrary killings in unforeseen situations.

Image: Dallas police robot

But the Dallas police used a remotely operated robot, which means an operator was in the loop when the weapon was used. If the robot acts only on a police officer's commands, then it is much like a gun in the officer's hand (though keep in mind that a gun is reliable enough that it cannot fire by itself, whereas robots with complex programming and a large number of integrated modules are not so reliable). Here a human, the police officer, makes the decisions, and the robot simply follows the commands it is given.

Then what is the problem with using such robots?

Suppose a human rights organization found a violation of the fundamental right to life in such an incident: who would be punished? Who would take accountability? A robot cannot be punished, the operator could blame a fault in the robot, and superior officers, programmers, and manufacturers would all be likely to escape accountability.

The inability to uphold this underlying principle of human rights raises serious moral questions about the prospect of allowing a robot to take a human life.

The only improvement one could suggest is the use of a non-lethal weapon instead of a lethal one.


Image: Robot used by Dallas Police

