Our research

Our research explores the technical, political, and institutional challenges of building safer AI systems. We aim to inform policy, support practitioners, and contribute to a growing body of knowledge that bridges theory and real-world impact.

Featured
  • 26.05.2025
  • Paper
G7 Hiroshima AI Process Code of Conduct and EU AI Act GPAI – Commonality Analysis
James Gealy, Daniel Kossack
This report contains an analysis of the commonalities and differences between the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems and the EU AI Act text on general-purpose AI models. There is substantial commonality between the texts, though each h...
  • 07.03.2025
  • Paper
Mapping AI Benchmark Data to Quantitative Risk Estimates Through Expert Elicitation
Malcolm Murray, Henry Papadatos, Otter Quarks, Pierre-François Gimenez, Siméon Campos
The literature and multiple experts point to many potential risks from large language models (LLMs), but there are still very few direct measurements of the actual harms posed. AI risk assessment has so far focused on measuring the models' capabilities, but the capabilities of models are only indica...
  • 11.02.2025
A Frontier AI Risk Management Framework
Siméon Campos, Henry Papadatos, Fabien Roger, Chloé Touzet, Otter Quarks, Malcolm Murray
  • 01.09.2024
A Framework to Rate AI Developers’ Risk Management Maturity
Siméon Campos, Henry Papadatos, Fabien Roger, Chloé Touzet, Malcolm Murray
  • 01.09.2023
How Can Nuclear Safety Inform AI Safety?
Siméon Campos, James Gealy