Srl | Item
1 | ID: 140269
Summary/Abstract:
The expansive use of armed unmanned combat aerial vehicles (UCAV), or ‘drones’, by the United States over the past decade has occurred within a particular strategic context characterized by irregular warfare operations in permissive environments. Ongoing strategic, ethical and moral debates regarding specific uses of drones may well be overtaken by a new generation of armed combat drones able to survive and operate in contested airspace, with design elements such as stealth and greater levels of machine autonomy. These design parameters, and the likely strategic context within which second-generation UCAVs will be deployed, suggest a fundamentally different set of missions from those performed by the current generation of drones. The most beneficial characteristic of current unmanned systems has been the ability to combine persistent surveillance with the delivery of small precision-guided munitions. With a shift to more contested environments, this type of armed surveillance mission may become less practical, and second-generation UCAVs will instead focus on high-intensity warfare operations. These new systems may have significant implications for deterrence, force doctrine and the conduct of warfare.

2 | ID: 142975
Summary/Abstract:
Strategic uncertainty remains a significant challenge for defence planners and national leaders responsible for developing and acquiring future military capabilities. Concerns in the United States over missile proliferation during the 1990s were embodied in the 1999 National Intelligence Estimate (NIE), which predicted the emergence of a long-range ballistic missile threat by 2015. Comparing the NIE's predictions with the actual evolution of the threat reveals not only an overly pessimistic evaluation but also politicization of the intelligence assessment. At the time of its release, the report added to a growing sense of vulnerability and further encouraged policy-makers to make suboptimal missile defence acquisition decisions. In particular, the worst-case scenario thinking exemplified by the 1999 NIE contributed to an unnecessarily rushed testing programme, the premature deployment of homeland missile defence and an oversized system in Europe. Although missile defence may have strategic value, the rush to deploy, driven by unfounded fears of strategic surprise and entrenched notions of the ballistic missile threat, has resulted in a costly and less effective military capability. Strategic uncertainty will always be a challenge, but focusing solely on developing capabilities with little regard for the actual threat environment is a recipe for costly and ineffective weapons systems.

3 | ID: 165101
Summary/Abstract:
The United States has repeatedly intervened militarily in situations where tactical success on the battlefield did not translate into meaningful political resolution of the issues that triggered the introduction of military force. Although US military interventions are hardly a recent phenomenon, a series of systemic, political and institutional developments over the past several decades have been particularly conducive to the limited use of force as a policy option. These factors have reduced the costs and risks of military intervention, incentivising the use of force in situations where it may not be the optimal policy response.

4 | ID: 195267
Summary/Abstract:
Continuous advances in artificial intelligence have enabled higher levels of autonomy in military systems. As the role of machine intelligence expands, effective co-operation between humans and autonomous systems will become an increasingly relevant aspect of future military operations. Successful human-autonomy teaming (HAT) requires establishing appropriate levels of trust in machine intelligence, which can vary according to the context in which HAT occurs. The expansive body of literature on trust and automation, combined with newer contributions focused on autonomy in military systems, forms the basis of this study. Various aspects of trust within three general categories of machine intelligence applications are examined: data integration and analysis, autonomous systems in all domains, and decision-support applications. The issues related to appropriately calibrating trust levels vary within each category, as do the consequences of poorly aligned trust and potential mitigation measures.

5 | ID: 152994
Publication:
Colorado: FirstForum Press, 2015.
Description:
ix, 235p. (hbk.)
Standard Number:
9781626371507
Copies: C:1/I:0,R:0,Q:0
Circulation:
Accession# | Call# | Current Location | Status | Policy | Location
059076 | 358.1740973/MAY 059076 | Main | On Shelf | General |