Srl | Item
--- | ---
1 | ID: 166253
2 | ID: 151371. Summary/Abstract: Before an autonomous machine kills the first human on the battlefield, the U.S. military must have an ethical framework for employing such technology.
3 | ID: 164349
4 | ID: 179455
5 | ID: 156506
6 | ID: 101913. Publication: 2011. Summary/Abstract: Post-genocide Rwanda has become a 'donor darling', despite being a dictatorship with a dismal human rights record and a source of regional instability. In order to understand international tolerance, this article studies the regime's practices. It analyses the ways in which it dealt with external and internal critical voices, the instruments and strategies it devised to silence them, and its information management. It looks into the way the international community fell prey to the RPF's spin by allowing itself to be manipulated, focusing on Rwanda's decent technocratic governance while ignoring its deeply flawed political governance. This tolerance has allowed the development of a considerable degree of structural violence, thus exposing Rwanda to the risk of renewed violence.
7 | ID: 162199
8 | ID: 165129. Summary/Abstract: Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized “epistemic communities” of experts can effectively catalyze arms control; (3) many military AI applications will remain susceptible to “normal accidents,” such that assurances of “meaningful human control” are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.
9 | ID: 166221
10 | ID: 183648. Summary/Abstract: Recent scholarship on artificial intelligence (AI) and international security focuses on the political and ethical consequences of replacing human warriors with machines. Yet AI is not a simple substitute for human decision-making. The advances in commercial machine learning that are reducing the costs of statistical prediction are simultaneously increasing the value of data (which enable prediction) and judgment (which determines why prediction matters). But these key complements—quality data and clear judgment—may not be present, or present to the same degree, in the uncertain and conflictual business of war. This has two important strategic implications. First, military organizations that adopt AI will tend to become more complex to accommodate the challenges of data and judgment across a variety of decision-making tasks. Second, data and judgment will tend to become attractive targets in strategic competition. As a result, conflicts involving AI complements are likely to unfold very differently than visions of AI substitution would suggest. Rather than rapid robotic wars and decisive shifts in military power, AI-enabled conflict will likely involve significant uncertainty, organizational friction, and chronic controversy. Greater military reliance on AI will therefore make the human element in war even more important, not less.
11 | ID: 173067
12 | ID: 184688