ID | 197127 |
Title Proper | Algorithmic war and the dangers of in-visibility, anonymity, and fragmentation |
Language | ENG |
Author | Baggiarini, Bianca |
Summary / Abstract (Note) | AI-enabled systems are likely to inform future decisions to initiate war. They are well placed to manage data and deliver recommendations at speeds that far surpass human abilities. Yet, AI-enabled vision and knowledge, which inform military intelligence, surveillance, and reconnaissance practices, curiously sustain both exposure and opacity. Machine learning algorithms are famously called black boxes even as they are in practice widening what we can see and know. While many call for greater algorithmic transparency to combat this technological opacity, I argue that this desire is misguided because it overlooks how algorithmic reason, which promises more precise knowledge and more efficient decision making, naturally conceals through political and socio-technical practices of in-visibility, anonymity, and fragmentation. Given how these practices will likely come to shape AI-enabled resort-to-force decision making, this article concludes with the suggestion that AI-enabled decisions are likely to undermine democratic legitimacy. |
'In' analytical Note | Australian Journal of International Affairs Vol. 78, No. 2; Apr 2024: p. 257-265 |
Journal Source | Australian Journal of International Affairs Vol. 78, No. 2 |
Key Words | War ; Democracy ; Decision Making ; Legitimacy ; Transparency ; Algorithmic reason |