About the Authors

Amanda M. Buch is a neuroscientist and postdoctoral fellow at Weill Cornell Medicine who received her Ph.D. from Weill Cornell Medicine of Cornell University and has studied science diplomacy at Rockefeller University as part of the Hurford Science Diplomacy Initiative. Her areas of research include neuroimaging, machine learning, behavioral neuromodulation, ultrasonics, and bioinformatics. Her research and science communication have been featured by the Dana Foundation and Story Collider, she has published in top academic journals including Neuropsychopharmacology and Nature, and she has founded multiple seminar series including the Frontiers in Neuropsychiatry seminars (FINS) at Weill Cornell Medicine. Beyond her science endeavors, she is a visual artist and studies drawing and painting in the classical art tradition at Grand Central Atelier in NYC. She received her B.A. in Biophysics from Columbia University, U.S.

David M. Eagleman is a neuroscientist at Stanford University and a Guggenheim Fellow. His areas of research include sensory substitution, time perception, vision, and synesthesia. He also studies the intersection of neuroscience with the legal system, and in that capacity directs the nonprofit Center for Science and Law. He has authored over 120 academic publications, including in Science and Nature, and is an internationally bestselling author of books about the brain and literary fiction, with his works translated into thirty-three languages and adapted into two operas. He has launched several neuroscience companies from his research, including Neosensory and BrainCheck, and is the creator and host of PBS’s Emmy-nominated series “The Brain with David Eagleman.” He received his Ph.D. in Neuroscience from Baylor College of Medicine, U.S.

Logan Grosenick is an assistant professor of neuroscience in psychiatry at Weill Cornell Medicine of Cornell University and Chief Science Officer at Responsive AI. His areas of research include artificial intelligence in medicine, biological and artificial neural networks, neuroimaging, computational microscopy, and behavioral neuroscience. He has published in top academic journals including Cell, Nature, and Science and is an inventor on multiple US patents. He received his M.Sc. in Statistics and Ph.D. in Neuroscience from Stanford University, U.S.


Engineering Diplomacy: How AI and Human Augmentation Could Remake the Art of Foreign Relations

Over the last two decades, the pace of innovation in technologies that alter the human experience has accelerated rapidly, even as our world has grown increasingly interconnected. Smartphones, virtual conferencing, wearable technology, virtual and augmented reality, neurotechnology, and artificial intelligence (AI) are emerging as widely available technologies that—if used appropriately—could provide significant advantages for the practice of diplomacy. By responsibly deploying and adopting these technologies, international negotiators and policy makers will be able to outsource more sensory and cognitive tasks to automation, leverage big data smoothly, and make decisions more efficiently and effectively. Through collaborations among diplomats, scientists, and engineers, along with organizational adaptation that incorporates training in, integration of, and distribution of resources for AI,1 we envision AI paired with emerging human augmentation technologies significantly improving the bandwidth, speed, and optimality of diplomacy.2

Decades of scientific research have revealed how the limitations of human senses and human information processing bandwidth3 affect decision making4 and problem solving,5 lead to cognitive biases,6 and undercut so-called “expertise,”7 all of which hampers interpersonal negotiation in diplomacy.8 During the COVID-19 pandemic, for example, these human limitations biased global health diplomacy and policy making, contributing to inequitable vaccine distribution9 and counterproductive international border restrictions.10 This is precisely where AI could enhance decision making in diplomacy: by managing vast amounts of data and fast-changing dynamics, and by sidestepping the cultural biases that lead to such suboptimal decisions. Because AI can digest more data than any individual human, it can systematically gather and filter information, game out (i.e., simulate or forecast) possible outcomes, and make recommendations. By synthesizing and summarizing vast amounts of data and possible outcomes, AI can thus enhance decision and policy making.

While cognitive tasks can be outsourced to AI to reduce data and decision complexity, new hardware technologies that enhance human sensory systems can increase the rate of information transfer between AI and humans. Sensory augmentation and embodied computing devices are being developed that extend our perceptual experience by feeding information to humans through new or underutilized input channels, such as vibrotactile feedback and augmented reality delivered via wearables or neural implants.11 Wearable human-in-the-loop AI12 could increase sensory bandwidth by delivering filtered, relevant data through augmented reality, along with real-time notification of events such as shifts in sentiment. This live exchange between AI and diplomats via wearables could enhance decision-making capacity during negotiations, much as driving apps now provide real-time traffic and accident updates and reroute along better optimized paths. Finally, AI deployed for diplomacy could synthesize the viewpoints of many actors into multi-objective goals and, by formalizing these goals, recommend decisions that yield certifiably optimal (or bounded suboptimal) outcomes. In multilateral diplomacy involving multiple goals, for example, it could optimize solutions to several problems simultaneously (e.g., negotiations to address concurrent impasses on energy, immigration, and trade across multiple borders, akin to what DeepMind is currently simulating with AI agents playing the board game Diplomacy, which requires mixed-motive, many-agent interactions13). Integrating human augmentation devices with AI could therefore create a high-bandwidth link between rich streaming data, augmented decision making, and human diplomatic exchange.
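To make the multi-objective idea concrete, here is a minimal, purely illustrative Python sketch. Every party, issue, option label, and utility number below is hypothetical; the point is only the formal structure: enumerate package deals across linked negotiations and keep the Pareto-optimal bundles, i.e., those no other bundle improves for every party at once.

```python
from itertools import product

# Hypothetical utilities (0-10) that two delegations assign to options
# in three linked negotiations. All names and numbers are invented.
ISSUES = {
    "energy":      {"grid-link":  (6, 8), "status-quo": (4, 4)},
    "immigration": {"visa-pact":  (7, 5), "quota":      (3, 7)},
    "trade":       {"tariff-cut": (8, 6), "subsidy":    (5, 9)},
}

def bundles():
    """Enumerate every package deal (one option per issue) together
    with its total utility for each delegation."""
    names = list(ISSUES)
    option_sets = [list(ISSUES[n].items()) for n in names]
    for choice in product(*option_sets):
        picked = {n: label for n, (label, _) in zip(names, choice)}
        u1 = sum(u[0] for _, u in choice)
        u2 = sum(u[1] for _, u in choice)
        yield picked, (u1, u2)

def pareto_front(deals):
    """Keep only deals that no other deal weakly beats on both utilities."""
    deals = list(deals)
    utils = [u for _, u in deals]
    front = []
    for picked, (u1, u2) in deals:
        dominated = any(v1 >= u1 and v2 >= u2 and (v1, v2) != (u1, u2)
                        for (v1, v2) in utils)
        if not dominated:
            front.append((picked, (u1, u2)))
    return front

if __name__ == "__main__":
    for picked, utils in pareto_front(bundles()):
        print(picked, utils)
```

A real system would handle many parties, learned rather than hand-assigned utilities, and continuous options, but the dominance check at the core would look much the same.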

While human augmentation technology for AI mostly lives in the lab for now, its arrival in real-world diplomacy may not be far off. Indeed, AI is increasingly being developed to mediate intra- and inter-state conflicts and is already influencing domestic and foreign policies. At the metropolitan level, smart city initiatives around the globe (e.g., in NYC, Amsterdam, Dubai, and Singapore) are leveraging data, sensors, and analytics to implement policy changes related to traffic patterns, energy delivery, and public-government communication that can increase connectedness, efficiency, and residents’ quality of life.14 As one example, in 2019 IBM’s artificially intelligent Project Debater used natural language processing to capture and summarize the collective narrative of more than 60,000 citizens of Lugano, Switzerland on the topic of developing autonomous vehicles, and these summaries were then used to inform Swiss policymakers.15

On a global scale, diplomats can use AI as a tool for collecting and analyzing data. AI can be used for near-real-time tracking of objects in high-resolution satellite imaging, for instance, automatically detecting when troops or refugees are amassing at borders.16 Indeed, AI has already served as an early warning system to avert international humanitarian crises17 and to resolve military conflicts such as the 2020 China-India skirmish in the Galwan Valley.18 Similar applications to public health have demonstrated the potential for detection, modeling, and prediction of infectious disease outbreaks, capabilities necessary to intercept future pandemics.19 For example, UN Global Pulse is simulating how the spread of COVID-19 impacts refugee and IDP settlements to aid global health policy making20 and, in coordination with the World Health Organization, is modeling the spread and impact of misinformation through social media during the pandemic.21 These and many other applications in diplomacy and foreign relations, in fields such as health and (tele)medicine, transportation, energy and environment, public safety, and cybersecurity, have been examined in greater detail by the Commission on the Geopolitical Impacts of New Technologies and Data.22 At the same time, because such surveillance capabilities cut both ways, it will be important for the international community to collaborate on AI policies and tools that mitigate the risks of AI deployed for techno-authoritarianism.
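The detection step behind such satellite monitoring can be sketched at its simplest as change detection between two co-registered imaging passes. The grids and thresholds below are made up for illustration; operational systems use learned detectors over real imagery, not hand-set pixel thresholds.

```python
# Toy change detection between two imaging passes over the same area.
# Grid values are invented pixel intensities (0-255); real pipelines
# would use georegistered imagery and trained object detectors.

def changed_cells(before, after, threshold=60):
    """Return (row, col) cells whose intensity shifted by >= threshold."""
    return [(r, c)
            for r, row in enumerate(before)
            for c, value in enumerate(row)
            if abs(after[r][c] - value) >= threshold]

def border_alert(before, after, min_cells=3):
    """Flag a pass for analyst review when enough cells changed at once."""
    changed = changed_cells(before, after)
    return len(changed) >= min_cells, changed

# Two hypothetical passes over the same border strip.
pass1 = [[30, 32, 31, 29],
         [28, 30, 33, 31],
         [29, 31, 30, 32]]
pass2 = [[30, 150, 148, 29],   # bright new objects appear in pass 2
         [28, 155, 33, 31],
         [29, 31, 30, 32]]

alert, cells = border_alert(pass1, pass2)
print(alert, cells)  # -> True [(0, 1), (0, 2), (1, 1)]
```

The human-in-the-loop part matters: the alert routes imagery to an analyst; it does not act on its own.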

In fact, over 200 applications of AI are being developed by the United Nations alone23 that could influence diplomatic relations and policy making. One notable example is UN Global Pulse’s use of machine learning to study how transcription and translation inaccuracies in radio shows in Uganda lead to misconceptions and, in turn, cognitive biases that can spark social tensions.24 This case study has implications for diplomatic communications at the global level: more than 6,000 languages are in use worldwide, and translation errors have frequently impeded diplomatic communication. During the 2021 Anchorage meetings aimed at stabilizing US-China foreign relations, for example, inaccuracies introduced by human translators increased the aggressiveness of exchanges between US and Chinese state leaders.25 Diplomats could adopt real-time AI-assisted translation and transliteration (transmitted through wearable technology) to supplement human translation in international exchanges such as this, as well as to summarize unreadably large volumes of foreign-language text.26 By leveraging recent major advances in natural language processing,27 diplomats could mitigate the risk of mistranslations and miscommunications that disadvantage them and risk obstructing their negotiations.
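One lightweight safeguard for AI-assisted translation is a round-trip consistency check: translate the source into the target language and back, then flag passages that drifted for a human interpreter to re-check. The sketch below is purely illustrative; Python's difflib string similarity stands in for a real machine-translation scoring model, and all phrases and the threshold are invented.

```python
import difflib

def round_trip_agreement(source, back_translated):
    """Crude similarity in [0, 1] between a source phrase and its
    round-trip (source -> foreign -> source) machine translation.
    A real system would score semantic, not string, similarity."""
    return difflib.SequenceMatcher(None, source.lower(),
                                   back_translated.lower()).ratio()

def flag_for_review(pairs, threshold=0.8):
    """Return source phrases whose round trip drifted too far --
    candidates for a human interpreter to double-check live."""
    return [src for src, back in pairs
            if round_trip_agreement(src, back) < threshold]

# Invented (source, machine round-trip) pairs.
pairs = [
    ("we seek common ground",
     "we seek common ground"),
    ("we firmly reject that characterization",
     "we violently refuse that description"),  # drifted, tone sharpened
]
print(flag_for_review(pairs))
```

Only the drifted second phrase is flagged; the faithful round trip passes untouched, so the interpreter's attention goes where mistranslation risk is highest.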

In all these cases, AI can aid policy makers in surfacing meaning from a deluge of data, allowing them to coordinate shared objectives and prevent crises. However, these technologies also present significant ethical and practical challenges that may require negotiating international agreements for AI diplomacy standards. Data/model quality and security, technological asymmetries between nations, privacy concerns, and the potential for espionage must all be considered. As an example of data quality, recent research has found that significant undesirable bias (e.g., along racial, gender, or religious lines) is introduced into AI algorithms when unfiltered data corpora are used.28 This has led to calls for improved data treatment standards (see, e.g., Andrew Ng’s data-centric AI movement29) and protocols for the evaluation of models that should be considered when developing AI tools for diplomacy and policy making.

More generally, a growing body of research on the ethics of AI points to at least four recommendations for the adoption of emerging AI and human augmentation technologies: (1) create a secure framework for the global coordination, curation, and exchange of data and models with secure multi-party computation that protects sensitive data (see, e.g., “secure oblivious” algorithms that enable joint computation in distributed systems between different parties without directly sharing the input data30), (2) form an international committee with scientists, engineers, and diplomats to jointly coordinate AI standards and establish fairness benchmarks, (3) initiate formal international collaborations that designate international R&D funding and establish material transfer agreements, and (4) use human augmentation technologies to enhance diplomats’ engagement with, understanding of, and adaptation of AI technology.
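The secure multi-party computation in recommendation (1) can be illustrated with its simplest building block, additive secret sharing: several parties jointly learn an aggregate (here, a sum) without any party revealing its own input. This is a toy sketch of the arithmetic only, not a hardened protocol; real deployments add authenticated channels and malicious-party protections.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is modulo this large prime

def share(secret, n_parties):
    """Split an integer into n random shares that sum to it mod PRIME.
    Any n-1 shares together look uniformly random, revealing nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def secure_sum(private_inputs):
    """Each party shares its input with the others; only share totals
    are ever pooled, never the raw inputs themselves."""
    n = len(private_inputs)
    all_shares = [share(x, n) for x in private_inputs]
    # Party i sums the i-th share it received from every participant...
    partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
    # ...and publishing only these partials reconstructs the aggregate.
    return sum(partial_sums) % PRIME

# Hypothetical confidential figures from three governments:
print(secure_sum([120, 75, 300]))  # -> 495
```

Each government learns the total (495) without exposing its own figure, which is exactly the property a data-exchange framework among technologically asymmetric nations would need.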

With the tremendous volume of information generated and captured by emerging technologies, AI and human augmentation will be increasingly important in overcoming the natural limits of our capabilities. Outsourcing information tasks we struggle with to AI and enhancing our sensory systems will enable us to make better and faster decisions and predictions. Diplomats often see big data and AI as a “nice to have” rather than a “must have”; this is likely to change, however, as AI is adopted in adjacent professions—and as AI creates diplomatic crises of its own.31 If well deployed, these technologies could decrease response times, reduce decision-making errors, and optimize multi-state interactions to enhance coordination, decrease conflict and disease, and maintain global stability. As the world faces increasingly global challenges, including viral pandemics32 and climate change,33 AI-augmented diplomacy is likely to play an important role in avoiding future catastrophes. These efforts will also require confidence building in a community steeped in old-school traditions of personal relationship-building, whose members may be reluctant to embrace such new methods (organizational adaptation is always required for emerging technologies to change international relations34). Scientists and engineers should therefore work shoulder-to-shoulder with diplomats to integrate these technological tools into policymaking and modernize the art of diplomacy.



  1. Ryan Dukeman, “Winning the AI Revolution for American Diplomacy,” November 25, 2020,; Eric Schmidt et al., “National Security Commission on Artificial Intelligence: 2021 Final Report,” 2021,
  2.  As experts in neuroengineering and AI, we highlight the cognitive motivation and positive potential of such adoption in this short article. However, it is clear that increasing use of such technologies in diplomacy will not be strictly out of beneficence or technological fascination, but will likely, in part, be a necessary response to ongoing realities of geopolitical competition such as the aggressive adoption of AI for warfare, AI-generated misinformation, and growing AI-enabled techno-authoritarianism.
  3. Bastien Blain, Guillaume Hollard, and Mathias Pessiglione, “Neural Mechanisms Underlying the Impact of Daylong Cognitive Work on Economic Decisions,” Proceedings of the National Academy of Sciences of the United States of America 113, no. 25 (June 21, 2016): 6967–72,; Weizhen Xie, Stephen Campbell, and Weiwei Zhang, “Working Memory Capacity Predicts Individual Differences in Social-Distancing Compliance During the COVID-19 Pandemic in the United States,” Proceedings of the National Academy of Sciences of the United States of America 117, no. 30 (July 28, 2020): 17667–74,
  4. Shai Danziger, Jonathan Levav, and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions,” Proceedings of the National Academy of Sciences of the United States of America 108, no. 17 (April 26, 2011): 6889–92,; Kathleen D. Vohs et al., “Making Choices Impairs Subsequent Self-Control: A Limited-Resource Account of Decision Making, Self-Regulation, and Active Initiative.” Journal of Personality and Social Psychology 94, no. 5 (2008): 883–898,; Grant A. Pignatiello, Richard J. Martin, and Ronald L. Hickman Jr., “Decision Fatigue: A Conceptual Analysis,” Journal of Health Psychology 25, no. 1 (January 2020): 123–35,
  5. Tom Meyvis and Heeyoung Yoon, “Adding Is Favoured Over Subtracting in Problem Solving,” Nature 592 no. 7853 (2021): 189–190,; Gabrielle S. Adams et al., “People Systematically Overlook Subtractive Changes,” Nature 592, no. 7853 (April 2021): 258–61,
  6. Emily Pronin, Daniel Y. Lin, and Lee Ross, “The Bias Blind Spot: Perceptions of Bias in Self Versus Others,” Personality & Social Psychology Bulletin 28, no. 3 (March 1, 2002): 369–81,; Eric Luis Uhlmann and Geoffrey L. Cohen, “‘I Think It, Therefore It’s True’: Effects of Self-Perceived Objectivity on Hiring Discrimination,” Organizational Behavior and Human Decision Processes 104, no. 2 (November 1, 2007): 207–23,; Irene Scopelliti et al., “Bias Blind Spot: Structure, Measurement, and Consequences,” Management Science 61, no. 10 (October 1, 2015): 2468–86,; Helen Brown, Michael J. Proulx, and Danaë Stanton Fraser, “Hunger Bias or Gut Instinct? Responses to Judgments of Harm Depending on Visceral State Versus Intuitive Decision-Making,” Frontiers in Psychology 11 (September 18, 2020): 2261,
  7. K. A. Ericsson and A. C. Lehmann, “Expert and Exceptional Performance: Evidence of Maximal Adaptation to Task Constraints,” Annual Review of Psychology 47 (1996): 273–305,; Otto Lappi, “The Racer’s Brain - How Domain Expertise Is Reflected in the Neural Substrates of Driving,” Frontiers in Human Neuroscience 9 (November 24, 2015): 635,; Merim Bilalić, The Neuroscience of Expertise (Cambridge, UK: Cambridge University Press, 2017),
  8. Mauro Galluccio and Aaron Tim Beck, “Scientists Meet Diplomats: A Cognitive Insight on Interpersonal Negotiation,” in Science and Diplomacy: Negotiating Essential Alliances, ed. Mauro Galluccio (Cham: Springer International Publishing, 2021), 177–88,
  9. Santiago Zabala, “The Hidden Ideological Obstacles to Vaccination,” Al Jazeera, March 24, 2021,
  10. Ida Takanori, “Economics Knowledge for COVID-19 Measures: Applying Cognitive Bias to Policymaking,” Discuss Japan - Japan Foreign Policy Forum, September 2, 2021,; Kent E. Calder, “Opinion: COVID-19 and New ‘Broken Dialogue’ with Japan,” December 16, 2021,; Mireya Solís, “In Vying for Economic Preeminence in Asia, Openness Is Essential,” Brookings, January 14, 2022,
  11. Pranay Agrawal and John Mark Dolan, Sensory Augmentation for Increased Awareness of Driving Environment, Technologies for Safe and Efficient Transportation (T-SET) UTC, The Robotics Institute, Carnegie Mellon University, 2014),; Peter B. Shull and Dana D. Damian, “Haptic Wearables as Sensory Replacement, Sensory Augmentation and Trainer - A Review,” Journal of Neuroengineering and Rehabilitation 12, no. 1 (July 20, 2015): 59,; Sabine U. König et al., “Learning New Sensorimotor Contingencies: Effects of Long-Term Use of Sensory Augmentation on the Brain and Conscious Perception,” PloS One 11, no. 12 (December 13, 2016): e0166647,; Michael V. Perrotta, Thorhildur Asgeirsdottir, and David M. Eagleman, “Deciphering Sounds Through Patterns of Vibration on the Skin,” Neuroscience 458 (March 15, 2021): 77–86,
  12. Robert (Munro) Monarch, Human-in-the-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI (Shelter Island, NY: Manning, 2021),
  13. Thomas Anthony et al., “Learning to Play No-Press Diplomacy with Best Response Policy Iteration,” arXiv [cs.LG] (June 8, 2020),
  14. Zaheer Allam, “Big Data, Artificial Intelligence and the Rise of Autonomous Smart Cities,” in The Rise of Autonomous Smart Cities: Technology, Economic Performance and Climate Resilience, ed. Zaheer Allam (Cham: Springer International Publishing, 2021), 7–30,
  15. Noam Slonim et al., “An Autonomous Debating System,” Nature 591, no. 7850 (March 2021): 379–84,
  16. Frank D. W. Witmer, “Remote Sensing of Violent Conflict: Eyes From Above,” International Journal of Remote Sensing 36, no. 9 (May 3, 2015): 2326–52,
  17. Ben Wang et al., “Problems from Hell, Solution in the Heavens?: Identifying Obstacles and Opportunities for Employing Geospatial Technologies to Document and Mitigate Mass Atrocities,” Stability: International Journal of Security and Development 2, no. 3 (2013),; John A. Quinn et al., “Humanitarian Applications of Machine Learning with Remote-Sensing Data: Review and Case Study in Refugee Settlement Mapping,” Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 376, no. 2128 (September 13, 2018),
  18. Carrey A. Chin, “A Study on the Commercialization of Space-Based Remote Sensing in the Twenty-First Century and Its Implications to United States National Security” (Naval Postgraduate School, Monterey, CA, 2011),; Toqeer Ahmed et al., “Face-Off Between India and China in Galwan Valley: An Analysis of Chinese Incursions and Interests” (June 28, 2020),; Ram Avtar et al., “Remote Sensing for International Peace and Security: Its Role and Implications,” Remote Sensing 13, no. 3 (January 27, 2021): 439,
  19. Nistara Randhawa et al., “Fine Scale Infectious Disease Modeling Using Satellite-Derived Data,” Scientific Reports 11, no. 1 (March 25, 2021): 6946,; Daniel Zeng, Zhidong Cao, and Daniel B. Neill, “Chapter 22 – Artificial Intelligence–enabled Public Health Surveillance—from Local Detection to Global Epidemic Monitoring and Control,” in Artificial Intelligence in Medicine, eds. Lei Xing, Maryellen L. Giger, and James K. Min (Cambridge, MA: Academic Press, 2021), 437–53,
  20. Joseph Aylett-Bullock et al., “Operational Response Simulation Tool for Epidemics within Refugee and IDP Settlements,” bioRxiv (medRxiv, January 29, 2021),
  21. Pulse Lab New York, “Understanding the COVID-19 Pandemic in Real-Time,” accessed February 4, 2022,
  22. David Bray, Report of the Commission on the Geopolitical Impacts of New Technologies and Data (Atlantic Council, 2021),
  23. International Telecommunication Union, “United Nations Activities on Artificial Intelligence (AI) Report,” 2021,
  24. René Clausen Nielsen, “Scaling Radio Analysis with Data Science for Infodemic Monitoring,” UN Global Pulse, January 12, 2022,
  25. Viola Zhou, “What the American Interpreter Got Wrong at Tense US-China Talks in Alaska,” Vice World News, March 22, 2021,
  26. Jeff Wu et al., “Recursively Summarizing Books with Human Feedback,” arXiv [cs.CL] (September 22, 2021),
  27. Vineet Raina and Srinath Krishnamurthy, “Natural Language Processing,” in Building an Effective Data Science Practice: A Framework to Bootstrap and Manage a Successful Data Science Practice, eds. Vineet Raina and Srinath Krishnamurthy (Berkeley, CA: Apress, 2022), 63–73,; Ivano Lauriola, Alberto Lavelli, and Fabio Aiolli, “An Introduction to Deep Learning in Natural Language Processing: Models, Techniques, and Tools,” Neurocomputing 470 (January 22, 2022): 443–56,
  28. Aylin Caliskan, “Detecting and Mitigating Bias in Natural Language Processing,” AI and Bias (The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative), May 10, 2021,; Matthew Hutson, “Robo-Writers: The Rise and Risks of Language-Generating AI,” Nature 591, no. 7848 (March 2021): 22–25,; Eliza Strickland, “OpenAI’s GPT-3 Speaks! (Kindly Disregard Toxic Language),” IEEE Spectrum, February 1, 2021,
  29. Andrew Ng, “Unbiggen AI: The AI Pioneer Says It’s Time for Smart-Sized, ‘Data-Centric’ Solutions to Big Issues,” interview by Eliza Strickland, IEEE Spectrum, February 9, 2022,
  30. Marina Blanton, Aaron Steele, and Mehrdad Alisagari, “Data-Oblivious Graph Algorithms for Secure Computation and Outsourcing,” in Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security, ASIA CCS ’13 (New York, NY, USA: Association for Computing Machinery, 2013), 207–18,; Dan Bogdanov, Sven Laur, and Riivo Talviste, “A Practical Analysis of Oblivious Sorting Algorithms for Secure Multi-Party Computation,” in Secure IT Systems, eds. Karin Bernsmed and Simone Fischer-Hübner, Lecture Notes in Computer Science (Cham: Springer International Publishing, 2014), 59–74,
  31. Dukeman, “Winning the AI Revolution for American Diplomacy.”
  32. Marga Gual Soler, Mandë Holford, and Tolullah Oni, “Twelve Months of COVID-19: Shaping the Next Era of Science Diplomacy,” Science & Diplomacy, January 22, 2021,
  33. “Damage from Climate Change Will Be Widespread and Sometimes Surprising,” The Economist, May 14, 2020,
  34. Michael C. Horowitz, The Diffusion of Military Power (Princeton, NJ: Princeton University Press, 2010),; Michael C. Horowitz, “Do Emerging Military Technologies Matter for International Politics?,” Annual Review of Political Science 23, no. 1 (May 11, 2020): 385–400,