Warbot Ethics


Brian M. Michelson is a US Army Officer and currently serves as a Chief of Staff of the Army Senior Fellow at the Atlantic Council. His short story “Warbot 1.0: The Death of Homer” explores the role of robots and AI in future land warfare. He has written for Military Review, Small Wars Journal, and the Institute of Land Warfare. The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the United States Army War College, the United States Army, the Department of Defense, or the United States Government.

Rapid improvements in robotic technologies are presenting both civilian policymakers and military leaders with uncomfortable ethical choices. The pace of change is even quicker than many imagine: California recently issued its 30th corporate permit to test autonomous vehicles on public roads. Emerging artificial intelligence (AI) technologies offer impressive gains in military effectiveness, yet how do we balance their use with accountability for inevitable errors? Considering the history of major technological and conceptual advances, many of these tensions and choices are neither new, unique, nor conceptually unapproachable. The choices fall broadly into three questions: How much autonomy do we give autonomous weapons to maximize their military effectiveness? Who makes that decision? And perhaps most critically, who is held accountable when something inevitably goes wrong? Fortunately, current military thought, doctrine, and regulations already provide an effective and adaptable conceptual framework for these challenges.

In its directive on Autonomy in Weapon Systems, the Department of Defense (DoD) describes an autonomous weapon system as one that "once activated, can select and engage targets without further intervention by a human operator." The DoD currently allows fully autonomous systems to operate in a defensive mode against "time-critical or saturation attacks" but places additional limitations on them, such as prohibitions on targeting humans and on selecting new targets in the event of communications outages. Technology has advanced rapidly since the directive's initial 2012 publication, and even with the May 2017 update, the policy seems to accept autonomy in defensive and non-lethal systems but to place additional restrictions on offensive ones. For purposes of this discussion, a warbot goes a step further and is defined as an offensive robotic system that can "detect, identify, and apply lethal force to enemy combatants within prescribed parameters and without immediate human intervention." This expanded use of technology and autonomy comes with significant risk, but the cardinal issue remains: can we operationalize the tremendous benefits offered by warbots while continuing to constrain our war-making within current ethical standards and accountability methods? This may not be as difficult an issue as we might initially suspect.
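To make the idea of "prescribed parameters" concrete, the directive's restrictions can be read as a checklist that every engagement decision must satisfy. The sketch below is purely illustrative and is not the DoD's actual logic or any fielded system's code: the types, field names, and the hypothetical engagement_authorized check are assumptions introduced here solely to show how constraints like those quoted above (no targeting of humans, no new targets during a communications outage) might be expressed in software.

```python
# Hypothetical sketch: encoding policy constraints as explicit, auditable checks.
# Names and structure are illustrative assumptions, not an actual DoD system.
from dataclasses import dataclass
from enum import Enum, auto


class TargetType(Enum):
    MATERIEL = auto()
    HUMAN = auto()


@dataclass
class Target:
    target_type: TargetType
    on_approved_list: bool  # designated by a human operator before activation


@dataclass
class SystemState:
    comms_link_up: bool
    defensive_mode: bool


def engagement_authorized(target: Target, state: SystemState) -> bool:
    """Authorize an engagement only if every prescribed constraint holds."""
    # Constraint: no autonomous lethal targeting of humans.
    if target.target_type is TargetType.HUMAN:
        return False
    # Constraint: no selecting new (unapproved) targets during a comms outage.
    if not state.comms_link_up and not target.on_approved_list:
        return False
    # Constraint: autonomous engagement permitted only in a defensive mode.
    if not state.defensive_mode:
        return False
    return True


# Example: a human contact detected during a comms outage fails the checks.
contact = Target(target_type=TargetType.HUMAN, on_approved_list=False)
state = SystemState(comms_link_up=False, defensive_mode=True)
print(engagement_authorized(contact, state))  # False
```

One virtue of writing the constraints this way, even as a toy model, is that each rule is a discrete, testable line of code, which is exactly the kind of traceability that accountability for "inevitable errors" would demand.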

Read the full essay at The Bridge.