The sixth meeting of the Convention on Certain Conventional Weapons (CCW) ended without any binding agreement. The meeting concluded in the early hours of Saturday, September 1, after the states adopted a report containing more than 20 non-binding principles and recommendations.
However, a clear majority of states proposed starting negotiations in 2019 on a new treaty that would establish a preventive prohibition on the development and use of lethal autonomous weapons systems. Countries such as Colombia, Iraq, Pakistan, Panama, Austria, Brazil and Chile, along with a group of African states and the states of the Non-Aligned Movement, recommended a new CCW mandate “to negotiate a legally binding instrument to guarantee meaningful human control over the critical functions of weapons systems”. The 88 participating states recommended continuing the deliberations next year, but did not agree on how to proceed toward this goal. Although there was significant convergence of views on the need for some form of human control over weapons systems and the use of force, the United States and Russia rejected a negotiating mandate.
Scientists and tech experts underlined again and again that total control over fully autonomous weapons is impossible. Professional coder Ellen Ullman explained: “When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand.” Additionally, earlier this year, a major world congress of leading AI researchers gathered over 200 technology companies and organizations from more than 36 countries, along with 2,600 individuals. They all pledged to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons,” underlining that “the decision to take a human life should never be delegated to a machine.” How would anyone prevent system errors? How can we deal with the unpredictability of machines and the environments in which they may operate? How can we ensure the security of autonomous weapons systems against cyber-attacks? And if errors do happen, who is at fault? The programmer? The commander, who will surely be acting on instructions from a state?
Some states suggested that the CCW should focus future deliberations on other measures, such as a non-legally binding political statement proposed by France and Germany to set out principles such as the need for human control over the use of force and the importance of human responsibility. While steps to stop the development of these weapons are welcome, statements of this kind simply do not go far enough to protect humanity. A new international law is needed.
Entities and campaigns such as Stop Killer Robots are strongly opposed to allowing the development of weapon systems that, once activated, could select and attack targets without human intervention. To do so would be abhorrent and immoral, an affront to the concept of human dignity and the principles of humanity, with unpredictable consequences for civilian populations throughout the world. We object to the very idea of programming a machine to kill human beings, and a further debate about legalizing it is even more inconceivable. GCOMS supports the Stop Killer Robots Campaign, and is likewise dismayed that a small number of pro-robot states actively prevented progress towards this goal at this latest meeting on lethal autonomous weapons systems.