AI and the future of mediation

Revolutionary technological progress in artificial intelligence (AI) is reshaping the dynamics of human-machine interaction and the role of technology in contemporary societies. Among its profound implications for global security, AI is certain to influence peace mediation, a field characterised by person-to-person communication. The precise nature of this evolution remains uncertain, given the rapid pace of technological advancement and the reservations of conflict resolution practitioners about digital modes of engagement. This article therefore considers how AI might transform the realm of peace mediation, along with the associated challenges.

AI and its potential

Recent advancements in AI, including publicly available tools like OpenAI's ChatGPT and Google Bard, have captivated public attention with their remarkable capabilities and transformative potential. As AI reshapes industries and influences decision-making processes, it also holds immense potential as a tool for mediation. Although online dispute resolution has existed for decades, the advent of next-generation AI-powered mediation could mark a substantial leap forward, enabling negotiations facilitated by digital third parties that closely mimic human interactions.

AI can serve as a tool for mediators but will not replace them. At best, AI can help process and analyse vast amounts of data, including historical conflict data, socio-political dynamics and cultural nuances, giving mediators a deeper understanding of complex conflict dynamics and supporting them in formulating more effective strategies. AI-powered algorithms can also be drawn on to support foresight, helping to simulate different scenarios and predict outcomes so that mediators and conflict parties can make better-informed decisions. Generative AI-based trend foresight tools for market intelligence are booming in the private sector and are beginning to be applied in the public sector as well.
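
For illustration only, the sketch below shows the simplest possible version of such trend foresight: fitting a linear trend to a year of entirely synthetic monthly incident counts and extrapolating a few months ahead. Real foresight tools rely on far richer data and models; the figures here are invented.

```python
# Minimal sketch of data-driven trend foresight on conflict-event counts.
# The event data is synthetic; real analyses would draw on curated datasets
# used by mediation support teams, not a hand-typed list.
import numpy as np
import pandas as pd

# Hypothetical monthly counts of reported incidents in a conflict area.
events = pd.Series(
    [42, 38, 51, 47, 60, 55, 63, 70, 66, 74, 80, 78],
    index=pd.period_range("2023-01", periods=12, freq="M"),
)

# Fit a simple linear trend and extrapolate three months ahead.
x = np.arange(len(events))
slope, intercept = np.polyfit(x, events.values, deg=1)
future_x = np.arange(len(events), len(events) + 3)
forecast = slope * future_x + intercept

print("Estimated monthly increase:", round(slope, 1), "incidents")
print("Three-month outlook:", [round(v) for v in forecast])
```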

In heavily text-based diplomatic negotiations or processes such as national dialogues, AI could give mediators the capability to extract positions and identify common ground from within the extensive texts of reports, verbatim records, and other data sources. By automatically identifying key themes, stances, and areas of convergence, AI could not only accelerate information processing but also assist mediators in reframing perspectives and establishing mutual understanding. Among others, the Innovation Cell of the UN Department of Political and Peacebuilding Affairs has been experimenting with such novel approaches, following the rollout of AI-assisted consultations in Yemen, Libya, and elsewhere, as described by Julie Hawke in this volume (see Digital inclusion in peacemaking: Practice, promise and perils).
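
As a minimal, hypothetical sketch of this kind of text analysis, the snippet below uses simple TF-IDF weighting to surface terms that appear in the statements of two invented parties, a crude stand-in for the far more capable language models such systems would actually use.

```python
# Minimal sketch of surfacing shared themes across two parties' statements.
# The statements are invented placeholders; a real workflow would ingest
# verbatim records or reports and apply much richer language models.
from sklearn.feature_extraction.text import TfidfVectorizer

party_a = "We demand a ceasefire, humanitarian corridors and a phased withdrawal of forces."
party_b = "A monitored ceasefire and humanitarian access are preconditions for any political talks."

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([party_a, party_b])
terms = vectorizer.get_feature_names_out()

# Terms weighted in both documents point to potential common ground.
weights_a, weights_b = tfidf.toarray()
shared = [t for t, a, b in zip(terms, weights_a, weights_b) if a > 0 and b > 0]
print("Possible shared themes:", shared)
```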

AI can also sift through social media conversations to detect trending topics of public discussion and help analyse public opinion at scale. AI-powered transcription systems (such as the speech-to-text function in YouTube, among other available applications) can transcribe discussions on radio or television call-in shows, giving mediators a sense of public discourse. Soon, applications of this kind will be more intuitive and user-friendly for non-technology experts in mediation support teams.
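
As one illustrative option for such transcription (distinct from the YouTube feature mentioned above), the open-source Whisper model can be run locally on a recorded show; the audio file name below is a placeholder.

```python
# Minimal sketch of transcribing a recorded call-in show with the open-source
# Whisper speech-to-text model. Assumes `pip install openai-whisper` and a
# local audio file; "callin_show.mp3" is a hypothetical recording.
import whisper

model = whisper.load_model("base")            # small multilingual model
result = model.transcribe("callin_show.mp3")  # hypothetical recording
print(result["text"])
```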

An illustration rethinking peace negotiations, as part of the UN Department of Political and Peacebuilding Affairs' Futuring Peace initiative. © 2022 DPPA Innovation

AI-driven multilingual translation capabilities could help overcome communication barriers between conflict parties through more accurate interpretation services, especially for uncommon dialects and colloquial speech not yet covered by mainstream translation providers. Although most Large Language Models have been trained primarily on English, efforts are underway to extend them to less-resourced languages and avoid the risk that they become cultural assimilators. Meta's 'No Language Left Behind' model is trained on over 200 languages, and Google has launched its '1,000 Languages Initiative' to make AI more linguistically inclusive.
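
A minimal sketch of such low-resource translation, assuming the Hugging Face transformers library and a distilled NLLB checkpoint (the model name and language codes below should be checked against the NLLB documentation), might look like this:

```python
# Minimal sketch of low-resource translation with a distilled NLLB model via
# Hugging Face transformers. Model name and FLORES-200 language codes are
# examples, not a vetted configuration.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="hau_Latn",  # Hausa, one of the lower-resourced languages covered
)

print(translator("The parties agreed to meet again next week.")[0]["translation_text"])
```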

Faster and more precise AI-powered translation software now includes features such as emotion detection and body language analysis. Research by the Universities of Manchester and Helsinki published in May 2023 indicated that ChatGPT can outperform humans in emotional awareness tasks. Although state-of-the-art applications are still not free of error, more sophisticated systems are just around the corner; mediators should keep an eye on them and consider how they could improve their support efforts.

In the not-so-distant future, one could even imagine personalised conflict resolution chatbots or virtual agents serving a mediation function themselves, providing a safe and neutral space for parties to express their concerns and interests. It is imaginable that these digital assistants, acting as more objective counterparts than humans (who bring emotional baggage), could independently lead discussions, ask relevant questions, offer potential solutions, or serve as a virtual sparring partner. Scholars from Glasgow, Tel Aviv and Yale Universities have for several years been experimenting with social robots to support post-traumatic stress disorder diagnosis and treatment. Georgia Tech scholars have been working on a robot mediator to repair delicate patient-caregiver relationships. Various companies have built AI bots that listen carefully to customer complaints. Such digital helpers, sent out to collect views from conflict parties or the public through interactive conversations, could soon be a reality.
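
Purely as a thought experiment, the sketch below outlines how such a consultation chatbot loop might be wired up using a general-purpose LLM API; the system prompt, model name and overall design are assumptions rather than a tested mediation tool.

```python
# Minimal sketch of a consultation chatbot that gathers views through an
# interactive conversation. Uses the OpenAI Python SDK as one illustrative
# backend; the model name and facilitation prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
history = [{
    "role": "system",
    "content": "You are a neutral facilitator. Ask one open question at a "
               "time about the participant's concerns, and never take sides.",
}]

while True:
    user = input("Participant: ")
    if user.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Facilitator:", answer)
```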

Challenges of ‘cybertopia’

AI might lead to new challenges for mediation. For tech enthusiasts, 'cybertopia' offers hope of unprecedented opportunities to better collect information, make sense of data and broaden mutual understanding through digital means. For cyber-doubters and cynics, however, any attempt to introduce digital means of engagement risks reducing the role of human responsibility and planting the seeds of new human-made problems. And indeed, despite the myriad technological revolutions we have witnessed, history has shown that, however enhanced by digital tools, the future will probably remain humanly ordinary and imperfect, and inter-human connections won't be easily displaced by machines.

Institutional cultures and unwillingness to use modern technology in social decision-making processes will impede the more widespread use of AI in peace processes. The technology to support mediation efforts already exists, but integrating AI into peace processes is not without challenges and risks. A calibrated approach is therefore necessary, one that fosters innovation while also involving careful evaluation, rigorous safeguards, and a readiness to integrate technology incrementally in a way that is sensitive to the nuances of conflict resolution.

Cost is another factor that will keep machines away from the negotiation table. Even as digital applications have become more affordable, AI-powered systems remain enormously compute-hungry, and hence costly. Dedicated budgets will be needed: to resource internal capacities in mediation support entities to scale technology and overcome resistance; to build tailored applications that serve a purpose, rather than picking a commercial application not made for peace mediation; and to maintain digital applications whose innovation glamour wears off quickly after the first, second and third use but which need further fine-tuning to improve. Moreover, measures will be needed to counter the digital divide, by which certain populations lack access to technology or digital literacy, exacerbating existing inequalities and hindering inclusivity in peace processes. Technology development transfers, such as those supported by the UN Technology Bank for the Least Developed Countries, will be needed to level the playing field in the arena of peace mediation.

Finally, AI governance issues, ethical considerations and gender dimensions need to be addressed to ensure transparency, accountability and fairness in tech-informed mediation processes. The potential for AI to promote peace is clear, yet its deployment must be handled with utmost care to avoid exacerbating biases and inequalities. Confidentiality, especially concerning the sensitive training data AI models require, presents a complex challenge. Security is also critical, as systems must be robust enough to withstand potential breaches and manipulations that could undermine the integrity of a mediation process. Generative AI models, which can produce content or solutions autonomously, run the risk of internalising and magnifying biases present in their training datasets. The discourse on AI implementation in peace processes must address the safeguards and quality controls necessary to ensure AI supports efforts towards more just and peaceful societies without inadvertently causing harm.

Acting Special Representative of the UN Secretary-General for Libya Stephanie Williams participates in an AI-powered chat to engage in a digital dialogue with 1,000 Libyans on the political, security, and economic situation in the country, 17 January 2021. © UNSMIL

What lies beyond

In conclusion, as we navigate the evolving landscape of AI in peace mediation and beyond, it is essential to consider the interplay between its advantages and disadvantages. Bridging the gap between AI enthusiasts and sceptics presents an opportunity for collective growth and innovation. Addressing the challenges and risks associated with AI implementation in peace mediation will require a balanced approach that values human judgment and ethics while leveraging AI’s capabilities for efficiency and precision.

The future of the relationship between humans and AI should be characterised by synergy and cooperation, where AI serves as a valuable tool to enhance human decision-making rather than replace it entirely. As we move forward, fostering harmonious collaboration between human expertise and AI-driven insights will be the key to harnessing the full potential of this groundbreaking technology while safeguarding against its pitfalls. In taking this path, we can envision a future where AI augments our capacity for peace mediation and empowers us to address global conflicts with greater wisdom and effectiveness.

In the more distant future, some decades from now, conflicts will persist. Alongside these conflicts, there will be a cadre of mediators and their teams who will step in to facilitate dialogue and help shape peace settlements between conflicting parties. Quantum computers supercharged by collective wisdom and best practice in human conflict resolution might take over as assistants. But while machines will be useful helpers, the longing for human-to-human connection will remain.