[SYDNEY] Founders of leading robotics and artificial intelligence (AI) companies from 26 countries have, in an open letter to the United Nations Convention on Certain Conventional Weapons (CCW), called for an international treaty to ban killer robots.
Killer robots or autonomous weapons can identify and attack a target with no human intervention. They include armed quadcopters and drones where humans are not making the decisions, but do not include cruise missiles or remotely piloted drones.
The 2017 letter, signed by 116 top AI founders, is the brainchild of Toby Walsh, professor of AI at the University of New South Wales in Sydney. Released during the opening of the 3-day (21-23 August) International Joint Conference on Artificial Intelligence (IJCAI 2017), the letter was to have coincided with the first meeting of the UN Group of Governmental Experts on Lethal Autonomous Weapon Systems, now rescheduled for November.
The letter is a joint stance against what it calls the “third revolution in warfare”, after gunpowder and nuclear arms. Signatories include Elon Musk, founder of Tesla, SpaceX and OpenAI; and Mustafa Suleyman, co-founder and head of Applied AI at Google’s DeepMind.
The letter states: ‘As companies building the technologies in AI and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. Once this Pandora’s box is opened, it will be hard to close.’
Walsh, one of the persons behind the 2015 letter to the UN signed by AI and robotics researchers and endorsed by British physicist Stephen Hawking and Apple co-founder Steve Wozniak, among others, says that while the earlier letter had already warned of an arms race, the current letter aims to give impetus to the deliberations on this topic. “I am expecting a very positive impact as the first letter had pushed this item up the agenda at the UN.”
In December 2016, the 123 member nations of the UN Review Conference of the CCW unanimously agreed to begin formal discussions on autonomous weapons.
“It is a more pressing problem now,” Walsh tells SciDev.Net. “This new letter demonstrates quite clearly that it is not just researchers and academia, people like myself, but that the industry is behind it too. We are living in an increasingly unstable world where rogue nations and terrorist organisations are playing a more dangerous role. If these arms are manufactured, some of them will invariably fall into the hands of people who will have no qualms about using it for evil intent.”
“AI can help tackle many of the pressing global problems, such as inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. But the downside is that the same technology can also be used in autonomous weapons to industrialise war. That is the reason I am calling for a UN ban on such weapons similar to bans on chemical and other weapons,” Walsh adds.

But, Mary-Anne Williams, director, Disruptive Innovation, University of Technology Sydney, says, “A killer robot ban alone will not work against rogue states and terrorist groups because they do not observe bans or adhere to international law. The nature of destructive weapons is changing; they are increasingly DIY (do-it-yourself). One can 3D print a gun, launch a bomb from an off-the-shelf drone or turn ordinary cars into weapons.”
Most experts agree on greater regulation.
Michael Harre, lecturer in the complex systems group at the University of Sydney, notes: “An equally important question is the potential for non-military autonomous systems to be dangerous, such as trading bots in financial markets that put at risk billions of dollars. Soon we will also have autonomous AIs that have a basic psychology, an awareness of the world similar to that of animals. These AIs may learn to be dangerous just as Tay, Microsoft's chat-bot, learned to be anti-social on Twitter.”
This piece was produced by SciDev.Net’s Asia & Pacific desk.