Query Result Set
HUMAN-ROBOT TRUST (1) answer(s).
 
Srl: 1    ID: 172573


Friend or frenemy? the role of trust in human-machine teaming and lethal autonomous weapons systems / Warren, Aiden; Hillas, Alek   Journal Article
Summary/Abstract This article explores the imprecise boundary between Lethal Autonomous Weapons Systems (LAWS) and Human-Machine Teaming, a subset of Human-Machine Interaction, and the extent to which both are emerging as a point of concern (and an option) in military and security policy debates. Because the development of Human-Machine Teaming relates to artificial intelligence (AI) capabilities, there is also a concern about reliability and confidence, particularly in the heat of battle. Human-Machine Teaming, also known as Manned-Unmanned Teaming, attempts to engender trust and collaborative partnerships with robots and algorithms. Clearly, the recent prospect of LAWS, or so-called ‘killer robots’, has raised questions about the degree to which such devices can be trusted to select and engage targets without further human intervention. Aside from examining the ‘trust factor’, the article also considers security threats posed by both state and non-state actors and the complicit yet inadvertent role multinational corporations play in such developments when civilian technology is modified for dual purposes. The effectiveness of government regulation of AI, including whether AI can be ‘nationalised’ for national security reasons, is also examined as part of AI non-proliferation.