Vasil Samardzhiev
POLICY PERSPECTIVES #14, June 2024
Are Lethal Autonomous Weapons (LAWs) the defence strategy of the future? Ask someone in the 1950s and you would likely have been called a wishful thinker: weapons able to fire independently, with no human intervention or involvement, seemed as unreachable as flying cars.
Seventy years later, the only wishful thinkers are those who still think this way. Technological innovation, driven by relentless investment in artificial intelligence by wealthy states and non-state actors, has allowed LAWs to be gradually incorporated into the warfare strategies of militaries globally. However, the role of LAWs in Australian defence and collaborative strategies remains vague and requires greater transparency moving forward.
Modern development of LAWs
Modern conflicts are already seeing the use of artificial intelligence (AI)-powered LAWs, and the consequences of their deployment. The 2020 use of a Turkish-made portable rotary-wing loitering munition in Libya, considered to be the first recorded use of a fully autonomous LAW, saw the drone target members of the Libyan National Army forces without human command, using real-time image processing and artificial intelligence. In the 2022 Russia-Ukraine conflict, small Ukrainian units have used American-supplied Javelin anti-tank guided missiles to "fire and forget": the missile is launched skyward and the operators can retreat while it targets independently using infrared technology.
Israel, a long-term leader in autonomous weapons manufacturing, has been using a sentry gun system to monitor and protect the Gaza border fence, acquiring targets and suggesting action independently. Most recently, the Israeli Defence Forces have been using artificial intelligence tools to identify suspected militants in the Gaza Strip as bombing targets. The system, named "Habsora", is considered to generate targets independently of humans at a rate far exceeding manual identification.
It is clear that, to those who make the call, LAWs are the defence strategy of the future. The United States has invested prominently in autonomous weapons since producing its first target-tracking unit, the Phalanx CIWS, in the 1970s. Both South Korea and Israel have developed sentry guns that identify and target humans autonomously. Investment in the autonomous navigation capabilities of unmanned surface vehicles and helicopters has been steadily increasing in Israel, China and Russia.
The level of support that LAWs enjoy has made their production more efficient, affordable and accessible, while also significantly undermining efforts to regulate or ban such systems on the international stage. If countries continue to invest in LAWs development, such weapons will almost certainly dominate at least the immediate future of warfare.
This does not excuse us from asking whether such developments are a good idea.
Are LAWs a good idea?
For starters, history has shown that any machine should allow humans to intervene in its operations in order to prevent grave conflict escalations. In 1983, a Soviet automatic target detection system falsely identified high-altitude clouds as American intercontinental ballistic missiles, an identification that under protocol called for a nuclear counter-attack. The ultimate decision not to pursue an attack without corroborating evidence prevented a potential Cold War escalation. It is easy to imagine such a situation happening again with AI-powered weaponry.
Many countries have turned to a "human-in-the-loop" policy, requiring lethal systems to operate only under human oversight. But can humans exercise effective control over such weapons if they cannot keep up with them? AI-powered decision-making processes "currently provide limited human-understandable explanation for [their] output", meaning humans cannot understand how an autonomous weapon has reached a conclusion. The speed at which AI processes information further means a human cannot meaningfully oversee LAWs, "simply because AI is doing things precisely because humans cannot". This significantly hinders a human's ability to intervene.
The use of LAWs raises further questions. How accurate are these systems really? Can their models truly account for human life? Can they make a cost-benefit analysis of a military offensive with human lives at stake? Deployment of LAWs is often hidden from public knowledge, so accuracy is difficult to prove and track. In such instances, who is responsible for providing this information: manufacturers, states, military personnel? Who remains responsible for a misfire? The NGO Human Rights Watch has identified "an accountability gap" surrounding the use of LAWs, where "neither criminal law nor civil law guarantees adequate accountability" for those involved in autonomous system design or command.
Where does, and where should, Australia sit on lethal autonomous weapons?
Australia does not openly engage in LAWs development. Whether the AUKUS defence collaboration between Australia, the United States and the United Kingdom will change that remains unknown: the pact aims to "accelerate technological integration" with two nations that are active investors in LAWs development. AUKUS does, however, look like an opportunity for Australia to clarify its approach to the use and development of LAWs.
What would such a national policy look like? On the global stage, Australia has advocated for implementing global accountability networks to ensure responsible handling of such weapons. I see this as the best step forward, both for Australia and for managing the global proliferation of LAWs. International regulation of autonomous weapons has lagged behind recent developments, leaving a gaping legal void for the world to grapple with. "Clarity on who may be responsible for which tasks or requirements, and at what stages of the [decision-making] cycle" may be what is needed for the international community to truly examine, and account for, the impact of lethal autonomous weapons.
Vasil Samardzhiev is an undergraduate student pursuing a Bachelor of International Relations and Political Science at Flinders University.
Vasil’s academic interests include socio-cultural policy-making, ethnic identity and conflicts, and counter-terrorism, with a particular focus on the Middle East and Northern Africa region. He has undertaken cross-institutional study at the University of Adelaide and Hebrew University of Jerusalem on his research focus topics, and has interned at the Jerusalem Center for Public Affairs.
Vasil works at the Jeff Bleich Centre on a casual basis as a Project Support Officer.