United Nations headquarters in New York City. Credit: Getty Images

The United Nations’ Focal Diplomatic Role on Emerging Technologies


The focus on emerging technologies and cyberspace as a novel domain of international relations marks a noteworthy evolution in the role of the United Nations (UN) in promoting much-needed norms of common behavior in diplomacy, a role not anticipated by the drafters of the original Charter. I argue that the United Nations is the premier, inclusive, and ideal forum in which universally agreed-upon norms for emerging technologies can be created, and that UN Secretary-General António Guterres is already leading a concerted effort towards this goal. In May 2021, the United Nations Security Council met for the first time to discuss the role of emerging technologies such as artificial intelligence (AI) in peace and security. The following month, the Security Council met to discuss how to keep peace in cyberspace, also for the first time, ushering emerging technologies into the highest level of diplomatic efforts at the UN.1 According to the United Nations Charter, the Council has custodianship of decisions on peace, security, the protection of civilians, and the use of force in international relations. Member states spent US$1 trillion in 2020 to restore networks that had been breached or to combat malicious uses of technology.2 The need to create a global cooperative framework for cyberspace, in which states can build their capacities and assist those that lack them, is critical.

The UN Secretary-General’s role is critical. He created a High-Level Panel on Digital Cooperation that met in 2018–2019. In March 2019, he called for a prohibition of autonomous weapons: “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.”3 The UN Secretary-General has called on states to set limits on autonomous weapons, motivated by the belief that the use of these systems will profoundly transform war and drive humanity into a morally indefensible realm.4 Based upon the High-Level Panel’s recommendations, and after consultations with academics, the private sector, governments, and civil society, Guterres recommended a Road Map for Digital Cooperation in time for the UN’s 75th anniversary.5 The Road Map aims to bridge the digital divide between developing and developed countries, create transparency by curbing the spread of misinformation, protect critical digital infrastructure, and protect people’s dignity. The Road Map also seeks to restrain the weaponization of emerging technologies in general and instead require that such technologies be used solely for the common good of humanity.6 There is admittedly much work ahead to carry out the Road Map for Digital Cooperation, especially in the areas of misinformation, the spread of hate speech, and the digital divide between developed and developing countries. However, one area within emerging technologies has gained sustained attention over the last five years: lethal autonomous weapons systems. The first formal meeting of the Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems was held in Geneva from November 13 to 17, 2017.7

The proactive, preventive, action-oriented role that the UN Secretary-General has assumed firmly positions the United Nations as the cornerstone of global action on emerging technologies. For Guterres, there are four significant threats to global security today: mounting geopolitical tensions, the climate crisis, global mistrust, and the dark side of technology, which is used to commit abuses and crimes, spread hate and misinformation, and oppress people in a growing number of countries. Technological advancements are quickly outpacing diplomatic efforts to regulate their use, and the world is not prepared for the impact of the Fourth Industrial Revolution. In the high-level segment of the UN General Assembly in September 2021, Guterres launched the Common Agenda, a comprehensive path forward for tackling these four threats, using the existing platform of the UN Sustainable Development Goals. The Common Agenda resulted from a two-year-long, crowd-sourced consultation with thousands of people worldwide and represents a pivot towards protecting future generations and including youth in policy making.8

The GGE was established within the remit of the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW), an instrument of international humanitarian law (IHL) that sets limits on what is lawful and unlawful during war and conflict. The initial discussions of the moral, legal, and ethical implications of autonomous weapons were brought to the Human Rights Council in 2013. A year later, France and Germany decided to continue these discussions within the framework of the CCW, which led to the formation of the GGE. Since then, the GGE has produced a set of eleven principles on autonomous weapons that establish the applicability of IHL to all new systems and prohibit the delegation of human responsibility to autonomous systems. Most states view this outcome as inadequate to meet the challenges presented by autonomous systems. In May 2021, the International Committee of the Red Cross (ICRC), the legal guardian of IHL, published its position on the matter and called upon states to negotiate a new legally binding instrument on autonomous weapons.9 This call promises to be a milestone because the ICRC commands authoritative clout in any discussion of weapons use and control. In the ICRC’s view, the use of such systems poses significant risks to civilians and combatants, risks compounded by the unpredictability of outcomes generated by AI-enabled algorithms, which may be unable to comply with IHL.

At the end of the day, the decision over who lives or dies should not be delegated to sensor data and software programming. Three types of limits should therefore form the basis of a new international treaty: first, the use of autonomous weapons should be restricted to military targets, such as incoming missiles, and to situations where civilians are not present; second, even when a machine learning algorithm supplies the target, targeting should be limited in duration and geographical scale, enough to allow a human to oversee the operation; third, human control and supervision should be required to allow for timely intervention if needed.10

Zooming out from this close examination of autonomous weapons regulation, it is important to note the obstacles that make the feasibility of new international regulations against the threats posed by AI and the malicious use of cyber technologies hotly contested. It is well understood that certain states in the international community have tremendous incentives to weaponize AI and to continue engaging malevolently in cyberspace; they will take advantage of continued technological developments to promote their military strength.11 These states and others, including Russia and China, also support “patriotic hackers” informally, even as they deny such behavior formally.12 Not only do countries have varying incentives and potential benefits in developing their domestic cyber infrastructure, but they also hold contrasting ideas of sovereignty and differ in how far they are willing to regulate an infrastructure that is evolving largely in the private sector. As the United States showed in its refusal to sign the 2012 treaty on internet regulation created by the International Telecommunication Union (ITU), some states might outright refuse to support a treaty that gives an international governmental body regulatory power over the private sector.13 These are just a few of the issues that might make a concrete international agreement on AI and cyber regulation difficult to reach. Nonetheless, an international treaty remains the best solution.14

In light of these obstacles to regulation, and zooming in again on the regulation of autonomous weapons as a central aspect of the regulation of emerging technologies in the area of international security, key questions remain: which UN member states are on board, and what are the prospects of a new international treaty regulating all aspects of the development and deployment of autonomous weapons? The answer to the first part of the question is promising: most member states, along with AI scientists and civil society, would like to see an all-inclusive international law on human-machine interaction that combines prohibitions and regulations. Such a treaty would be innovative and would not fit the mold of existing disarmament and arms control regulations. It would address how humans remain present in overseeing the deployment of new technologies in existing systems, and it would have to be future-proof, remaining relevant vis-à-vis new technological innovations. However, the CCW includes the leading producers of autonomous technologies (Australia, India, Israel, Japan, the Republic of Korea, Turkey, Russia, the United Kingdom, and the United States), and these states continue to act as stumbling blocks to any progress. In the last round of negotiations, in December 2021, India, Israel, Russia, and the United States blocked progress. They justified their positions by arguing that existing IHL suffices to confront the challenges and that testing weapons before deployment would ensure compliance with international law. These justifications have been rejected by most observers, including by the ICRC, whose position calls for clarifying existing international law and establishing new prohibitions.

However, the deliberations at the UN have come a long way, and it seems that these countries are now a minority. Continuing the logic of militarizing yet another technology, AI, is seen by most as an utter betrayal of the efforts to address all the other pressing challenges faced by humanity, as highlighted by the UN Secretary-General. AI has the potential to help tackle diseases and solve the climate crisis; it should not be weaponized as nuclear technology was. All in all, the UN has a central role to play in all aspects of emerging technologies, and in serving as the all-inclusive forum that can deny a few countries their march towards amplifying the dark side of technology.15

DOI: https://doi.org/10.1126/scidip.ade6799

Endnotes

  1. “Arria-Formula Meeting on the Impact of Emerging Technologies on International Peace and Security,” Security Council Report, May 14, 2021, www.securitycouncilreport.org/whatsinblue/2021/05/arria-formula-meeting-on-the-impact-of-emerging-technologies-on-international-peace-and-security.php.
  2. United Nations Security Council, SC/14563, June 29, 2021, “‘Explosive’ Growth of Digital Technologies Creating New Potential for Conflict, Disarmament Chief Tells Security Council in First-Ever Debate on Cyberthreats,” www.un.org/press/en/2021/sc14563.doc.htm.
  3. “Machines Capable of Taking Lives without Human Involvement Are Unacceptable, Secretary-General Tells Experts on Autonomous Weapons Systems,” SG/SM/19512-DC/3797, March 25, 2019, www.un.org/press/en/2019/sgsm19512.doc.htm.
  4. Denise Garcia, “Future Arms, Technologies, and International Law: Preventive Security Governance,” European Journal of International Security 1, no. 1 (2016): 94–111.
  5. “UN Secretary-General’s Strategy on New Technologies,” September 2018, www.un.org/en/newtechnologies/images/pdf/SGs-Strategy-on-New-Technologie....
  6. Eugenio V. Garcia, “Multilateralism and Artificial Intelligence: What Role for the United Nations?,” December 12, 2020, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3779866.
  7. Eugenio V. Garcia, “AI & Global Governance: When Autonomous Weapons Meet Diplomacy,” United Nations University, Centre for Policy Research, August 21, 2019, https://cpr.unu.edu/publications/articles/ai-global-governance-when-autonomous-weapons-meet-diplomacy.html.
  8. United Nations General Assembly, “Follow-up to the Report of the Secretary-General Entitled ‘Our Common Agenda,’” A/RES/76/6, November 15, 2021, https://undocs.org/en/A/RES/76/6.
  9. ICRC, “Position on Autonomous Weapons Systems,” May 12, 2021, www.icrc.org/en/document/icrc-position-autonomous-weapon-systems.
  10. Daniele Amoroso and Guglielmo Tamburrini, “Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues,” Current Robotics Reports (2020): 1–8.
  11. Michael Kolton, “Interpreting China’s Pursuit of Cyber Sovereignty and Its Views on Cyber Deterrence,” The Cyber Defense Review 2, no. 1 (2017): 119–54, www.jstor.org/stable/26267405.
  12. Robert Litwak and Meg King, “Cybersecurity Treaties May Be Nice, But It’s Every Country For Itself,” Wilson Center, November 11, 2015, www.wilsoncenter.org/article/cybersecurity-treaties-may-be-nice-its-really-every-country-for-itself.
  13. John Mello Jr., “Opponents Say ITU Treaty Threatens Internet Freedom,” PCWorld, December 14, 2012, www.pcworld.com/article/456055/opponents-say-itu-treaty-threatens-internet-freedom.html.
  14. Oona A. Hathaway, Rebecca Crootof, Philip Levitz, Haley Nix, Aileen Nowlan, William Perdue, and Julia Spiegel, “The Law of Cyber-Attack,” California Law Review 100, no. 4 (2012): 817–85.
  15. Denise Garcia, “Stop the Emerging AI Cold War,” Nature 593, no. 7858 (2021): 169.

