Article first published online: 21 MAR 2007
Journal of Applied Philosophy
Volume 24, Issue 1, pages 62–77, February 2007
How to Cite
SPARROW, R. (2007), Killer Robots. Journal of Applied Philosophy, 24: 62–77. doi: 10.1111/j.1468-5930.2007.00346.x
- Issue published online: 21 MAR 2007
Abstract: The United States Army's Future Combat Systems Project, which aims to manufacture a ‘robot army’ to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of the decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, and the machine itself. I argue that in fact none of these is ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would therefore be unethical to deploy such systems in warfare.