Appendix 7

Monday 9 – Wednesday 11 October 2023 | WP3219

Working Group 7: Ethics and Legal

Chair: Professor Nigel Biggar (UK), Regius Professor, Moral and Pastoral Theology, University of Oxford

Co-Chair/Rapporteur: Professor Paul Schulte (UK), University of Birmingham (TAG)

The Ethics and Legal Working Group addressed the following questions. How might international rules and codes governing the conduct and limits of warfare, including international humanitarian law and rules of engagement, have evolved by 2035? What principles of responsible use should NATO adjust or develop to account for the future employment of emerging and disruptive technologies (e.g., AI, autonomy, big data and ICT, hypersonic systems, space-based technologies, quantum technologies, biotechnology and human enhancement)? How can NATO become a leader in the development of ethical concepts related to future technology and warfare? What other bodies and stakeholders (industry, academia, civil society) should NATO engage in the development and adjustment of principles of responsible use and other ethical and legal concepts? How should policymakers, capability developers, military commanders, staffs, and operators be trained and educated to ensure employment of future capabilities according to principles of responsible use and other ethical and legal concepts?

David Whetham (UK), Professor of Ethics and the Military Profession, King’s College London

Marina Miron (Germany), Post-doctoral Researcher, War Studies Department, King’s College London

Rachel Kerr (UK), Professor of War Studies and Society, King’s College London

THE REPORT

Core message

NATO has good reasons to agree and promote ethical norms and guidelines about the military uses of new technologies. However, the present prospect of winning the consent of Russia, China, and the Global South to new international law is not promising. So, while seeing what can be done to create diplomatic common ground for a new international treaty and considering how the new technologies might enhance its own compliance with current International Humanitarian Law, NATO should focus on strengthening agreement on norms among its member states and using its market power and liaison with professional bodies to promote them worldwide. In addition, it should support the relevant education of senior decision-makers.

General considerations

  1. Military ethics is not about preventing or hamstringing the use of lethal force. It’s about controlling it. Why should NATO want to control it? So that it serves our purpose, rather than subverts it. What is our purpose? To defend a humane and liberal way of life. We cannot do that if we choose to defend it by military means that make us inhumane and tyrannical. So, we need to control our use of lethal force.
  2. Moreover, the call for a statement of ethical norms and guidelines about the use of novel technologies is coming from within the military itself. The publication of norms by military institutions serves to relieve the uncertainty of individuals, and thereby to enhance military efficiency and alacrity.
  3. Further still, NATO will be held to stringent ethical account by critics at home and abroad. How well NATO answers its critics will help to determine how much political opposition its actions arouse. Political opposition can create military problems.
  4. A basic issue is that NATO’s adversaries seek to extend and promote a less humane and less liberal way of life, and that they are therefore less inclined to constrain their military means. That may put NATO at a disadvantage in the deployment of the new technologies. Does that destine NATO to defeat? Not necessarily. Technological superiority does not always prevail in war. Sometimes its advantages are overwhelmed by non-technological—say, political—weaknesses. Thus, the technologically superior West lost the war in Afghanistan. And it is not clear that Russia’s present atrocious lack of restraint in Ukraine has given it a significant military advantage. It may well have cost Russia diplomatically, contributing to its failure to regain its seat on the UN Human Rights Council. Besides, accepting humane constraints, and cleaving to the moral high ground, will be necessary for NATO to maintain the support of domestic electorates and technology company workforces—although popular scruples may relax in an existential conflict, as they did among the British in the Second World War.
  5. Of course, if it were to persuade the rest of the world to sign up to an international law that subjects them to the same constraints, NATO might not suffer any disadvantage. Diplomatic efforts should be made, therefore, to try to build common ground on which a new law might be agreed. However, right now the prospect of persuading Russia and China, or even the Global South, to agree to a treaty seems bleak. And even if they did sign up, we might well find their practical interpretation of the law dramatically different from our own.
  6. However, even if NATO cannot expect to lead the world by persuading the rest of it to follow, it has internal reasons to agree among its members humane and liberal ethical rules to control its own use of military technologies. What should these be? This is not the place to work through each of the new technologies, identify the ethical issues it raises, and decide on suitable rules. However, we can say that there seems no need to develop novel ethical and legal criteria. The familiar, long-tested ones will suffice: for example, moral responsibility and accountability regarding Artificial Intelligence (AI) and autonomous weaponry; proportionality and discrimination regarding responses to cyber-aggression; informed consent regarding human augmentation; and open-eyed, realistic prudence regarding the limitations and risks of shiny new technology in general.

Outlying thought

While there is often very good reason to want to respond faster than the enemy—say, to an incoming missile attack—we should be careful not to assume that maximum speed is always desirable and confers a military advantage. After all, doing something stupid faster than your opponent does not advantage you.

Recommendations

  1. At least for the sake of its own military cohesion, NATO’s member states should continue to strive toward greater agreement on the rules governing the use of the new technologies. An obvious, straightforward first step would be a collective reaffirmation that all new technology adopted by NATO should conform to international law.
  2. Second, NATO should consider how the new technologies might enhance its own compliance with current International Humanitarian Law—for example, through more precise targeting and better intelligence.
  3. A good deal of unilateral work is already going on within member states. A third obvious step would be to collect this work for discussion, and perhaps agreement, at NATO level.
  4. While getting agreement on common values is difficult, it is not impossible. Take the case of the military uses of Human Augmentation. National positions are emerging that are not consistent with each other. However, the US-led Multinational Capability Development Campaign (MCDC), which seeks to encourage common standards and approaches among NATO allies and partners, has published a set of agreed ‘common considerations’ that are broadly compatible with the positions of all member states. A fourth step would be to add to these and refine them over time as technologies—and dialogue about them—mature.
  5. Adding normative requirements to a technology late in its development is challenging, slow, and expensive. However, right now member states do not disagree very dramatically about the principles and operational rules governing the military uses of Artificial Intelligence. A fifth step would be to formulate these rules as an agreed set of requirements, so that the commercial producers of defence-related technologies would take them into account at an early stage and incorporate them efficiently into their products at minimal expense. Given the need for new technology companies to maximise their market, setting a required minimum specification for the NATO market would probably shape the way technology is developed. In this way, NATO as a whole could use its power as a major global consumer to wield worldwide ethical influence in shaping norms.
  6. This is in fact the approach already being taken by the UK regarding the military uses of AI. The agreed higher-level principles are currently being specified and explained at the operational level, so that there is a clear expectation of what ‘human-centricity’ means in relation to a new sensor system employing human-machine teaming. That expectation will then appear as a requirement in any tender process.
  7. Many emerging military technologies require professionals to be involved with their use or deployment. These include medical personnel, engineers, and scientists, some in uniform and some not. But whether military or civilian, all of these are subject to professional codes of conduct. These codes comprise powerful norms that shape behaviour and therefore the application of new technology. A sixth step, therefore, would be for NATO to engage with international professional bodies so that military considerations are taken into account in the development of professional codes and best practice—for example, with regard to neuro-enhancement and human-machine teaming.
  8. For example, the Institute of Electrical and Electronics Engineers (IEEE) is the leading international, non-governmental, professional organisation covering all things related to electrical engineering. In the field of neuroscience, it has sponsored multiple international working bodies seeking to understand the implications of applying a common set of bio-ethical principles in different sectors, ranging from telecommunications and media through to the military. The military working group comprises international scientists with direct experience of working with military organisations around the world. While direct input might not be appropriate, NATO could still influence this enterprise in professional norm-generation indirectly, since some of the IEEE scientists also work with institutions of its member states, such as the US Defense Advanced Research Projects Agency (DARPA). In the long run, shaping the professional expectations of best practice is likely to affect actual behaviour.
  9. A seventh step concerns not the development and promotion of international norms regarding the military uses of the new technologies, but the education of political leaders about them. At least senior civil servants, and perhaps even their political masters, could benefit from taking short Continuing Professional Development courses designed to bring senior decision-makers up to date on the new technologies and their military applications. For example, King’s College London, the University of New South Wales, and Arizona State University are developing just such courses in the specific area of AUKUS technologies (nuclear submarines, AI, hypersonics, etc.) and their implications—ethical, social, and political.

Summary

In sum, our recommendations are that NATO should: (1) affirm its commitment to comply with international law; (2) consider how the new technologies might enhance its own compliance with current International Humanitarian Law; (3) gather together unilateral work for collective discussion and perhaps agreement; (4) develop the MCDC’s ‘common considerations’; (5) formulate agreed rules regarding AI as a set of NATO requirements for technology producers; (6) liaise with professional bodies about the formulation of codes of conduct vis-à-vis the military uses of the new technologies; and (7) support the education of senior decision-makers about such uses.

