Ben Harack
Studying international AI governance
Published papers I played a key role in creating:
- Robert Trager, Ben Harack, Anka Reuel, et al. International Governance of Civilian AI: A Jurisdictional Certification Approach. Oxford Martin AI Governance Initiative, 2023.
See also this blog post, which summarizes the key points.
Other published papers I’ve helped with:
- In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?, Oxford Martin AI Governance Initiative, 2025.
Summary: Even rival states may want to collaborate on specific challenges posed by AI. This paper reviews the incentives for such collaboration and some of the areas most conducive to cooperation within technical AI safety.
Authors: Ben Bucknall, Saad Siddiqui, Lara Thurnherr, Conor McGurk, Ben Harack, Anka Reuel, Patricia Paskov, Casey Mahoney, Sören Mindermann, Scott Singer, Vinay Hiremath, Charbel-Raphaël Segerie, Oscar Delaney, Alessandro Abate, Fazl Barez, Michael K. Cohen, Philip Torr, Ferenc Huszár, Anisoara Calinescu, Gabriel Davis Jones, Yoshua Bengio, Robert Trager.
- Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities, Oxford Martin AI Governance Initiative, 2025.
Excerpt from Abstract: As artificial intelligence (AI) systems become more powerful and integrated into daily life and global infrastructure, ensuring their safe development and deployment has emerged as one of the most pressing governance challenges of our time. While current narrow AI systems already have significant impacts in specific domains, advanced AI systems could fundamentally transform life through their potential for recursive self-improvement and general problem-solving capabilities, making their development and governance a uniquely critical challenge for humanity’s future. Drawing on lessons from climate change, nuclear safety, and global health governance, this analysis examines whether and how applying the framework of a “public good” could help us better understand and address the challenges posed by advanced AI systems.
Authors: Kayla Blomquist, Elisabeth Siegel, Ben Harack, Kwan Yee Ng, Tom David, Brian Tse, Charles Martinet, Matt Sheehan, Scott Singer, Imane Bello, Zakariyau Yusuf, Robert Trager, Fadi Salem, Seán Ó hÉigeartaigh, Jing Zhao, Kai Jia.
- The Future of the AI Summit Series, Oxford Martin AI Governance Initiative, 2025.
Summary of abstract: The AI Summit series – initiated at Bletchley Park in 2023 and continuing through Seoul in 2024 and Paris in 2025 – has become a distinct forum for international collaboration on AI governance. Its early achievements, including the Bletchley Declaration, the Frontier AI Safety Commitments, and the International Scientific Report on the Safety of Advanced AI, are a result of its unique format, regular schedule, and ability to secure concrete commitments from governments and industry.
To ensure its continuing impact, the Summit series must now transition from an improvised sequence of summits towards a more formalized structure. For this evolution to succeed, organizers must carefully examine past successes and realistically assess future challenges. This report examines both, with particular attention to a set of core summit design elements: hosting arrangement, secretariat format, participant selection, agenda setting, and summit frequency. Based on this analysis, we present six recommendations to strengthen the summit series’ impact.
The paper draws on existing international governance models to offer recommendations for each design element, addressing challenges such as a crowded summit landscape, geopolitical shifts, and rapid technological change.
Authors: Lucia Velasco, Charles Martinet, Henry de Zoete, Robert Trager, Duncan Snidal, Ben Garfinkel, Kwan Yee Ng, Haydn Belfield, Don Wallace, Yoshua Bengio, Benjamin Prud’homme, Brian Tse, Roxana Radu, Ranjit Lall, Ben Harack, Julia Morse, Nicolas Miailhe, Scott Singer, Matt Sheehan, Max Stauffer, Yi Zeng, Joslyn Barnhart, Imane Bello, Xue Lan, Oliver Guest, Duncan Cass-Beggs, Lu Chuanying, Sumaya Nur Adan, Markus Anderljung, Claire Dennis
Working papers:
- Verification for International AI Governance
Examining the potential international agreements that could be made over AI and some of the ways they could be verified. Verification here means proving that the other side is adhering to its commitments.
- The mobility of power: How growing technological superiority can allow war to be triggered by predicted arms transfers
This paper examines the potentially destabilizing effects of increased power mobility: the ability of leading technological states to transfer meaningful amounts of military power to another state in a short amount of time. In particular, it explores the potential for increased power mobility to trigger a set of interacting commitment problems between authoritarian states and their neighbors that can lead to war. This theoretical framework may help clarify why Vladimir Putin chose to make increasingly extreme demands and then launch his full-scale invasion of Ukraine in 2021-2022. The similar strategic situation between Taiwan and China does not appear to be susceptible to the same instabilities at present, but near-term technological change may exacerbate this problem, potentially opening a path to war. Lastly, the paper discusses international arrangements with the potential to mitigate some of the most important future problems of this kind.
- Perceptions of existential risk contributed to the end of the Cold War nuclear arms race
Examining whether the advent of truly “existential” concerns during the Cold War (due to the idea of nuclear winter, etc.) led to a shift in rhetoric, behavior, and policy for the superpowers that differed substantially from their earlier behavior under mutually assured destruction.
- Existential risk and cooperation in indefinitely iterated social dilemmas
Understanding how “social dilemmas” in game theory (such as the prisoner’s dilemma) differ under existential risk from how they behave under other kinds of risk. This work models races toward transformative technologies as an indefinitely iterated social dilemma in which defection by either player leads to existential risk: a permanent loss of a portion of all future payoffs for all players. Modeled in this way, the problem of existential risk also carries the seeds of its own solution. Stable cooperation becomes possible not only in the indefinitely iterated Prisoner’s Dilemma, but also in indefinitely iterated Deadlock.
- Guns, butter, and interdependence
Synthesizing and challenging a set of realist and liberal claims about the interplay between economic interdependence and war. This work provides a theoretical and empirical basis for seeing international politics as an evolving system driven toward new equilibria by technological change, with its details shaped by both war and enduring policies.
Authors: Ben Harack, Samuel Seitz, and Claas Mertens