Artificial Intelligence (AI) is rapidly transforming how peacebuilding actors understand, anticipate, and engage with complex conflict dynamics, from early warning systems to dialogue facilitation and from inclusion strategies to impact analysis. Yet its promise comes with risks, ethical dilemmas, and an urgent need for responsible frameworks that center inclusion, accountability, and conflict sensitivity.
This interactive workshop offers a space for reflection, co-creation, and exchange on the safe and effective use of AI in peace work. Rather than promoting a one-size-fits-all approach, the session invites practitioners, researchers, and policymakers to critically examine:
● What opportunities can AI realistically offer for peacebuilding?
● Where are the boundaries—and how do we avoid overreliance or misuse?
● How can we ensure AI tools are inclusive, ethically grounded, and context-sensitive?
● What risks—such as bias, exclusion, or surveillance—must be actively mitigated?
● What safeguards, norms, or participatory practices have proven useful in real-world applications?
Building on concrete examples, from AI-supported digital consultations with youth to localized early warning systems, the session will surface diverse perspectives and generate shared insights. Participants are encouraged to bring their own experiences, challenges, and aspirations to the table.
Outcomes from the workshop will feed into the development of a practical guide or code of conduct for AI in peace work, co-developed with peacebuilders globally. Participants at all levels of familiarity with AI, from curiosity to active implementation, are welcome.
Participants will leave with new perspectives, a deeper understanding of risks and possibilities, and a shared digital map of promising practices, dilemmas, and lessons learned.