ID | 197119 |
Title Proper | Before algorithmic Armageddon |
Other Title Information | anticipating immediate risks to restraint when AI infiltrates decisions to wage war |
Language | ENG |
Author | Erskine, Toni |
Summary / Abstract (Note) | AI-enabled systems will steadily infiltrate resort-to-force decision making. This will likely include decision-support systems recruited to assist with crucial deliberations over the permissibility of waging war. Potential benefits abound in terms of enhancing individual and institutional capacities for cognition, analysis, and foresight. Yet, I argue that we have reason to worry. Our interaction with these systems – as citizens, political and military leaders, states, and formal organisations of states – would also court significant risks. Specifically, reliance on decision-support systems that employ machine-learning techniques would threaten to undermine our adherence to international norms of restraint in two distinct ways: (i) by creating the reassuring illusion that these AI-driven tools are able to replace us as responsible agents; and (ii) by inserting unwarranted certainty and singularity into complex jus ad bellum judgements. I will refer to these challenges as the ‘risk of misplaced responsibility’ and the ‘risk of predicted permissibility’, respectively. If unaddressed, each proposed risk would make the initiation of war appear more permissible in particular cases and, collaterally, contribute to the erosion of hard-won international norms of restraint. |
`In' analytical Note | Australian Journal of International Affairs Vol. 78, No. 2; Apr 2024: p. 175-190 |
Journal Source | Australian Journal of International Affairs Vol: 78 No: 2 |
Key Words | War ; International Norms ; Restraint ; Jus ad Bellum ; Moral Responsibility ; Artificial Intelligence (AI) |