Item Details

Journal Article

ID: 197121
Title Proper: Elevating humanism in high-stakes automation
Other Title Information: experts-in-the-loop and resort-to-force decision making
Language: ENG
Author: Davis, Jenny L
Summary / Abstract (Note): Artificial intelligence (AI) technologies pervade myriad decision systems, mobilising data at a scale, speed, and scope that far exceed human capacities. Although it may be tempting to displace humans with these automated decision systems, doing so in high-stakes settings would be a mistake. Anchored by the example of states' resort to force, I argue that human expertise should be elevated, not relegated, within high-stakes decision contexts that incorporate AI tools. This argument builds from an empirical reality in which defence institutions increasingly rely on and invest in AI capabilities, an active debate about how (and if) humans should figure into automated decision loops, and a socio-technical landscape marked by both promise and peril. The argument proceeds through a primary claim about the amplified relevance of expert humans in light of AI, underpinned by the assumed risks of omitting human experts, together motivating a tripartite call to action. The position presented herein speaks directly to the military domain, but also generalises to a broader worldbuilding project that preserves humanism amidst suffusive AI.
In (Analytical Note): Australian Journal of International Affairs Vol. 78, No. 2; Apr 2024: p. 200-209
Journal Source: Australian Journal of International Affairs Vol. 78, No. 2
Key Words: Expertise; Artificial Intelligence (AI); resort-to-force; human-in-the-loop; expert-in-the-loop; AI ethics


Media / Other Links: Full Text