Striving to ensure that AI technologies are safe.

Our Mission

Incentivising the development of safe AI systems through better risk management.

SaferAI is a governance and research non-profit focused on AI risk management. Our organisation, based in France, works to incentivise responsible AI practices through policy recommendations, research, and innovative risk assessment tools.

Our focus areas
standards & governance

With a focus on large language models and general-purpose AI systems, we work to ensure the EU AI Act covers all the important risks arising from these systems. We are drafting AI risk management standards at JTC 21, the body in charge of the technical standards implementing the EU AI Act. Additionally, we participate in all four working groups of the Code of Practice for general-purpose AI models.

We also do comparable work at NIST's US AI Safety Institute Consortium (AISIC) and in the OECD's G7 Hiroshima AI Process taskforce.

ratings

We rate frontier AI companies' risk management practices.

Our objective is to enhance the accountability of the private actors shaping AI as they develop and deploy their systems.

The complete results are available on our ratings website.

At a high level, an AI company's risk management practices should be assessed across four key dimensions.

We are looking for people to join our efforts to reduce societal-scale risks from AI.

Featured research
  • 07.03.2025 · Paper
Mapping AI Benchmark Data to Quantitative Risk Estimates Through Expert Elicitation
Malcolm Murray, Henry Papadatos, Otter Quarks, Pierre-François Gimenez, Siméon Campos
  • 11.02.2025 · Paper
A Frontier AI Risk Management Framework
Siméon Campos, Henry Papadatos, Fabien Roger, Chloé Touzet, Otter Quarks, Malcolm Murray
Support us

We rely on donations to help us incentivise responsible AI practices through policy recommendations, research, and innovative risk assessment tools.

Key partnerships

Featured in