Earlier this week, United Nations Secretary-General António Guterres delivered a statement urging a ban on autonomous weapons in war.
“Machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law,” Guterres told a group of governmental experts tasked with assessing technological advances in warfighting. According to the U.N. chief, no country or armed force is in favor of such “fully autonomous” weapon systems that can take human life.
The prospect of artificially intelligent machines playing a role in war has been a growing concern for both governmental bodies and private industry. These concerns were highlighted in an open letter to the U.N. Convention on Certain Conventional Weapons, signed by the CEOs of 100 international tech companies, which warned of the dangers of “Lethal Autonomous Weapon Systems.” The signatories urged the convention’s member countries to find “means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”
The fact that this issue is being raised by the head of the U.N., as well as by national governments, is a clear indication that it has come to the forefront of public discourse. But beyond mere awareness, the U.N. wants action. Guterres asked his panel audience “to deliver” on legislative proposals to be put forth as international law. “It is your task now to narrow these differences and find the most effective way forward… The world is watching, the clock is ticking and others are less sanguine. I hope you prove them wrong.”