Rights group calls for urgent regulation of AI ‘killer robots’

Human Rights Watch issued a report on “killer robots,” automated AI weapons that can kill people without human instruction or oversight, ahead of the first United Nations General Assembly meeting on autonomous weapons systems in New York. HRW noted “the call, supported by at least 129 countries, for the urgent negotiation and adoption of a legally binding instrument to prohibit and regulate autonomous weapons systems.”

Human Rights Watch outlines six primary human rights risks posed by killer robots. Among these, the greatest danger is to the right not to be arbitrarily deprived of life, which requires that any use of force be necessary and proportionate to achieving a legitimate aim.

Human Rights Watch said:

[autonomous] weapons systems would face serious difficulties in meeting this three-part test…. Autonomous weapons systems could not identify subtle cues of human behavior to interpret the necessity of an attack, would lack the human judgment to weigh proportionality, and could not communicate effectively with an individual to defuse a situation and ensure that lethal force is a last option.

Another right placed at risk by AI systems is the right to peaceful assembly. Human Rights Watch raises concerns that killer robots will lack the capacity to adequately differentiate between violent and peaceful protests, and will therefore be deployed too often in ways that interfere with this right. HRW believes killer robots may produce a “chilling effect,” with people ceasing to exercise their right to peaceful assembly in order to protect themselves. Furthermore, killer robots pose a risk to the principle of non-discrimination, as “algorithmic bias can disproportionately and negatively affect already marginalized groups.”

As with all AI systems, issues with automated weaponry stem from a lack of explicability and transparency in the decision-making process. This is known as the ‘black box problem’: the complexity of AI decisions and the many data factors feeding into them result in an “inability for us to see how deep learning systems make their decisions.” The black box problem, in turn, poses risks to human rights.

The dangers AI poses to human rights have been a prevalent issue for the UN: earlier this month, UN experts called for UN regulation of AI to ensure respect for human rights. As Arnoud Valli stressed to the UN, regulating AI on the battlefield is a pressing need to prevent deadly consequences.
