About us

SaferAI is a non-profit organisation aiming to incentivize the development and deployment of safer AI systems through better risk management.

The organisation focuses on research to advance the state of the art in AI risk assessment and on developing methodologies and standards for the risk management of AI systems. Among other efforts, we actively contribute to the relevant working groups of the OECD, the EU (Code of Practice and JTC 21) and the US NIST AI Safety Institute Consortium.

If you want to learn more about our activities, read our publications, follow us on LinkedIn, or get in touch with us. We're deeply thankful to our funding supporters, whose financial backing enables our work on this critical challenge.

As a registered French non-profit organisation, our operations are fully transparent and our registration is public. If you share our commitment to reducing AI risks, please consider donating to support our mission.

Our focus areas
Standards & governance

With a focus on large language models and general-purpose AI systems, we want to make sure the EU AI Act covers all the important risks arising from those systems. We are drafting AI risk management standards at JTC 21, the body in charge of writing the technical standards that implement the EU AI Act. Additionally, we are participating in all four working groups of the Code of Practice for general-purpose AI models.

We are also doing comparable work at the US NIST AI Safety Institute Consortium and in the OECD G7 Hiroshima Process taskforce.

Ratings

We rate frontier AI companies' risk management practices.

Our objective is to enhance the accountability of the private actors shaping AI as they develop and deploy their systems.

You can find our website with the complete results here.

Research

We conduct research to advance the state of the art in AI risk assessment, with a focus on quantitative risk assessment of large language models on risks such as cybersecurity and biosecurity.

Employees
Simeon Campos
Founder, Executive Director
Siméon is the founder and Executive Director of SaferAI, working on key aspects of the organisation, ranging from standardization and risk management research to fundraising and external partnerships. With experience co-founding EffiSciences, an organization providing training in responsible AI, Siméon has had the oppo...
Henry Papadatos
Managing Director
Henry is Managing Director, working on key aspects of SaferAI, ranging from ratings, AI & benchmark research, and risk management to management and operations. With experience conducting research on AI sycophancy at UC Berkeley, Henry brings to the team know-how in LLM research which enables to...
Chloe Touzet
Policy Lead
Chloé is Policy Lead, leading our engagement with external stakeholders (spanning civil society organisations, policymakers and companies) and producing research and policy pieces on adequate AI governance. A researcher on labor, AI and inequalities, Chloé spent 5 years as Policy Researcher at the O...
Malcolm Murray
Research Lead
Malcolm is Research Lead, leading our work on quantitative risk assessment of large language models on risks like cybersecurity and biosecurity. With twenty years of experience in risk and strategy, research, consulting and industry, he has a long track record of running research projects as a Chief...
James Gealy
Standardization Lead
James is Standardization Lead, contributing to the OECD G7 Hiroshima AI Process reporting framework and co-editing the AI risk management standard at CEN-CENELEC JTC21. He has fifteen years of experience as an electrical engineer in spacecraft testing and operations at Northrop Grumman and Airbus, w...
Gabor Szorad
Product Lead
Gábor is Product Advisor, working to leverage SaferAI's expertise in ways that increase our financial independence and deliver benefits to companies interested in responsible AI. With twenty years of experience in management, he has a long track record of scaling products and companies, from 0 to 8200 empl...
Otter Quarks
Chief of Staff
Otter Quarks is Chief of Staff, strengthening our operations, supporting our leadership, and ensuring strong coordination across our organisation so that our objectives and outputs stay high-quality and focused. With experience as an engineer in fire safety, cybersecurity, and aerospace, he brings an appre...
Lauren Fried
Operations (external)
Steve Barret
Senior Researcher
Steve is a Senior Researcher at SaferAI working on AI risk management. He has experience assessing confidence in frontier AI safety cases, alongside work in cybersecurity and safety assurance in the automotive sector. He brings a strong track record in innovation and has spent 20+ years in ...
Alejandro Tlaie Boria
Talos Fellow | Technical Researcher
Alejandro is a Talos Fellow, developing better quantification methods for risk estimation of large language models on cybersecurity. He has almost ten years of research experience: he holds a PhD in Complex Systems and has held two postdoctoral positions (Information Theory and Computational Neuroscience)...
Daniel Kossack
Consultant - EU standards and CoP
Advisors
Cornelia Kutterer
Senior Advisor
Cornelia is a Senior Advisor, advising the team on institutional engagement and governance research. With twenty years of experience in research, tech and AI policy, she has a long track record of institutional engagement, law, and research management as Senior Director of EU Government Affairs at M...
Fabien Roger
Technical Advisor
Fabien is a Technical Advisor, providing crucial inputs to our technical projects such as our ratings and our standardization work in international institutions. Member of Technical Staff at Anthropic and formerly at Redwood Research, he has significant technical expertise in AI research, especially...
Duncan Cass-Beggs
Senior Advisor
Duncan Cass-Beggs is a Senior Advisor, providing guidance on AI governance and strategic foresight. With over 25 years of experience in public policy, including as OECD's head of strategic foresight and executive director of CIGI's Global AI Risks Initiative, he brings deep expertise in anticipating...
Donate

The seniority of the SaferAI team means that additional funding yields high returns.

On the research end, we intend to grow Malcolm Murray's team to accelerate our research on quantitative risk assessment of the cyberoffence and accidental risks of LLMs. As former Managing VP and Chief of Research at Gartner, Malcolm is used to delivering complex projects and managing large teams.

On the governance side, we have promising plans to build upon the unique expertise of our senior advisor Cornelia Kutterer.

As partners of institutions such as the US NIST AI Safety Institute Consortium, the UK AI Safety Institute and the OECD, we are confident we can make good use of marginal funding. You can donate straight away below or reach out to us at simeon[at]safer-ai.org.

Your donation helps us incentivize responsible AI practices through policy recommendations, research, and innovative risk assessment tools.

You can read more about us on our homepage.