Tech leaders including Elon Musk and the co-founders of Google's AI subsidiary DeepMind recently signed a pledge not to develop 'lethal autonomous weapons'. The pledge was published on 18 July at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and was organised by the Future of Life Institute, a research institute that supports research and initiatives for safeguarding life and developing optimistic visions of the future.
Apart from this pledge, 116 industry leaders from 26 countries endorsed an open letter to the UN last year calling for a ban on lethal autonomous weapons systems.
"Once developed, autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," The Verge quoted them as saying last year.
The central idea behind the pledge is that the decision to kill should never be left to a machine.
The tech leaders who signed the pledge described autonomous weapons as the third revolution in warfare, after gunpowder and nuclear arms.
The main concern is that artificial intelligence is still in its early stages of development and has a very real potential to cause significant harm to innocent people, or worse, global instability.
To put it in perspective, the short film Slaughterbots depicts a dramatised near-future scenario in which swarms of microdrones use AI to assassinate political opponents based on preprogrammed criteria.
As of now, the only areas where AI is widely used in the military are intelligence and logistics. However, Russian President Vladimir Putin announced in March that his country was developing an autonomous nuclear-powered torpedo.
Apart from Russia, the United States views AI as vital to national security and believes its military has to speed up the development and acquisition of AI weaponry. Globesec reported that in 2015, the Department of Defense (DoD) launched the Defense Innovation Unit in order to build partnerships with private AI companies.
A US Department of Defense report cited by the Financial Times urges increased investment in autonomous weapons technology so that the US can stay ahead of rivals who will also exploit its benefits.
This arms-race environment is exactly why leaders in AI and robotics signed the pledge.
In an earlier open letter on autonomous weapons, AI and robotics leaders said: “Unlike nuclear weapons, AI integrated/autonomous weapons require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”
The danger is not just that an autonomous weapon might malfunction or deviate from the specific task it has been programmed to undertake; such weapons also threaten to destabilise the strategic balance among the world's major superpowers, since the technology can be used to find and destroy retaliatory nuclear weapons held in reserve.
In the future, as AI gets better at recognising patterns and playing games, it may be used as an aid in decision making, which takes us back to the fact that leaving the decision to kill to a machine, without human intervention, is morally wrong. It would also be telling humans how to fight a war, one that may escalate to a nuclear exchange. An AI system could, for instance, advise policymakers that the proper response would be to place troops in certain cities and put bombers on alert, and could also lay out how the enemy would retaliate and how escalation would play out.
Autonomous weapons will act on the basis of specific inputs, which might not always be precise enough to distinguish between military targets and civilians.
It is high time that regulations were put in place for the use of AI in weaponry. These could limit the extent to which AI can be used in weapons, restrict its use to intelligence, logistics, scouting and the like, or regulate the sale and purchase of AI-integrated weaponry. The UN must recognise the concerns of these tech leaders and put a cap on the military use of artificial intelligence. You never know how much destruction one wrong line of code might lead to.
(With inputs from The Verge, the Financial Times, Globesec and the Future of Life Institute website.)