Item Details
Journal Article
ID: 197127
Title Proper: Algorithmic war and the dangers of in-visibility, anonymity, and fragmentation
Language: ENG
Author: Baggiarini, Bianca
Summary / Abstract (Note): AI-enabled systems are likely to inform future decisions to initiate war. They are well placed to manage data and deliver recommendations at speeds that far surpass human abilities. Yet, AI-enabled vision and knowledge, which inform military intelligence, surveillance, and reconnaissance practices, curiously sustain both exposure and opacity. Machine learning algorithms are famously called black boxes even as they are in practice widening what we can see and know. While many call for greater algorithmic transparency to combat this technological opacity, I argue that this desire is misguided because it overlooks how algorithmic reason, which promises more precise knowledge and more efficient decision making, naturally conceals through political and socio-technical practices of in-visibility, anonymity, and fragmentation. Given how these practices will likely come to shape AI-enabled resort-to-force decision making, this article concludes with the suggestion that AI-enabled decisions are likely to undermine democratic legitimacy.
'In' analytical Note: Australian Journal of International Affairs Vol. 78, No. 2; Apr 2024: p.257-265
Journal Source: Australian Journal of International Affairs Vol: 78 No 2
Key Words: War; Democracy; Decision Making; Legitimacy; Transparency; Algorithmic reason
Media / Other Links: Full Text