G7 Hiroshima AI Process Code of Conduct and EU AI Act GPAI – Commonality Analysis

Publication date: May 26, 2025
Authors: James Gealy, Daniel Kossack
Tags:
  • Analysis
  • Research
Abstract

This report contains an analysis of the commonalities and differences between the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems and the EU AI Act text on general-purpose AI models. There is substantial commonality between the texts, though each contains requirements or recommendations not found in the other. Their relationship can be pictured as a Venn diagram: roughly 30% of the points of comparison show high or complete commonality, 50% show moderate commonality, and 20% do not overlap, meaning a requirement or recommendation in one text has no counterpart in the other.


Introduction

This report contains an analysis of the commonalities and differences between the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems—herein referred to as the Hiroshima AI Process Code of Conduct (CoC)—and the EU AI Act (“Act” or “AIA”) text on general-purpose AI (GPAI) models. SaferAI has been heavily involved in both the Code of Conduct Reporting Framework and the drafting of the AI Act’s Code of Practice for GPAI models. We leveraged this experience with both the CoC and Act to produce the following report. 

There is substantial commonality between the texts, though each contains requirements or recommendations not found in the other. Their relationship can be pictured as a Venn diagram: roughly 30% of the points of comparison show high or complete commonality, 50% show moderate commonality, and 20% do not overlap, meaning a requirement or recommendation in one text has no counterpart in the other.

For example, on copyright and intellectual property, the Act specifically focuses on providers complying with EU copyright law. With regard to public disclosure and reporting to regulators, the CoC intends for public reporting (e.g., Action 3), while the Act intends for documentation to be provided to the AI Office upon request, as well as to organisations downstream in the value chain. That said, many points are the same or very similar, such as risk assessment, risk mitigation, and cybersecurity.

The CoC Actions tend to be more detailed than the requirements in the Act’s Articles, giving specific examples and expectations. If the Act’s Recitals are included, however, the level of detail is more comparable to the CoC. The Act is more detailed in certain respects, such as the documentation and transparency requirements in its Annexes. Moreover, many of the more detailed CoC requirements can be reasonably inferred from the Act’s text (e.g., Action 1’s secure testing environments requirement can be reasonably inferred from the Act’s cybersecurity and evaluation requirements).

Our analysis is based on three assumptions. First, we include the AI Act’s Recitals and Article 56 requirements, given their additional detail (e.g., Article 56(2)(d)). Second, all CoC “shoulds” are treated as mandatory, in the sense that they are all assumed to be fulfilled. Third, “advanced AI systems” in the CoC and GPAI “models with systemic risk” in the Act are assumed to be equivalent.


Summary Table

The following table shows the number of points of comparison between the Code of Conduct and the EU AI Act, per Code of Conduct Action:

  • 86 points of comparison in total
  • 31% of the comparisons have high or complete commonality
  • Just over 80% have at least some commonality
Subject of the Code of Conduct Action                                          | Action                   | High or complete commonality | Some commonality | Little or no commonality | Total
-------------------------------------------------------------------------------|--------------------------|------------------------------|------------------|--------------------------|------
General and Introduction                                                       | General and Introduction |  –                           |  1               |  2                       |  3
Risk management and evaluations                                                | Action 1                 |  7                           |  5               |  5                       | 17
Identify and mitigate vulnerabilities                                          | Action 2                 |  2                           |  7               |  2                       | 11
Transparency and documentation                                                 | Action 3                 |  1                           |  7               |  –                       |  8
Incident reporting and information sharing                                     | Action 4                 |  3                           |  7               |  2                       | 12
Risk management framework                                                      | Action 5                 |  1                           |  3               |  2                       |  6
Cybersecurity                                                                  | Action 6                 |  6                           |  4               |  –                       | 10
Content Authentication and Provenance Mechanisms                               | Action 7                 |  3                           |  1               |  1                       |  5
Investments in Research and Mitigation Measures                                | Action 8                 |  –                           |  3               |  –                       |  3
Developing AI for the Benefit of the Public                                    | Action 9                 |  1                           |  3               |  1                       |  5
Development and Adoption of Technical Standards                                | Action 10                |  2                           |  –               |  –                       |  2
Data input measures & protections for personal data and intellectual property  | Action 11                |  1                           |  2               |  1                       |  4
Total                                                                          |                          | 27                           | 43               | 16                       | 86
Share of total                                                                 |                          | 31.4%                        | 50.0%            | 18.6%                    |
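The headline figures above are simple shares of the 86 points of comparison. As a sanity check, the following minimal Python sketch (using only the per-column totals from the table) recomputes them:

```python
# Column totals taken from the summary table above.
high, some, little = 27, 43, 16
total = high + some + little  # 86 points of comparison

def pct(n: int) -> float:
    """Share of all points of comparison, as a percentage rounded to 0.1."""
    return round(100 * n / total, 1)

print(total)             # 86
print(pct(high))         # 31.4 -> "31% high or complete commonality"
print(pct(some))         # 50.0
print(pct(little))       # 18.6
print(pct(high + some))  # 81.4 -> "just over 80% have at least some commonality"
```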



