
Enhancing Global AI Governance through Compute Resource Management

24/05/2024

In October 2023, the G7 countries published the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, in which they suggest implementing measures to “identify, evaluate, and mitigate risks across the AI lifecycle”.[1] As artificial intelligence (AI) technology becomes more powerful and more deeply integrated into society, measures and protocols that ensure these systems function safely and predictably become increasingly important. Yet many current AI governance efforts, such as the UK’s AI Safety Institute,[2] chiefly address the deployment and post-deployment stages of the AI lifecycle through model evaluations, leaving out the pre-development and development phases. Nor are these safeguards sufficient on their own: current AI safety evaluation regimes lack rigorous methodologies to predict and mitigate risks, because persisting gaps in experts’ understanding of the inner workings of AI models make it difficult to generalise experimental results.[3]

Meanwhile, ever more powerful advanced AI systems are released every few months, and AI experts and developers are sounding the alarm about very advanced AI systems posing extreme risks on a global scale. Their concerns range from the large-scale deployment of lethal autonomous weapons and malicious actors destabilising governments through advanced AI-driven misinformation campaigns to smarter-than-human, out-of-control AI one day causing human extinction.[4] Considering these high stakes, we need transparent, robust governance mechanisms that address the pre-development stage of the AI lifecycle to safeguard against the development of high-risk advanced AI systems.

Developing advanced AI systems requires three key components: algorithms, massive high-quality datasets and access to compute resources (powerful microchips). Of the three, managing access to and use of compute resources is the most practical lever for pre-development AI governance. Empirical data suggest that the larger and more capable an AI model is, the more compute was used to train it,[5] with the most powerful models requiring tens of millions of dollars’ worth of cutting-edge microchips to train.[6] Conveniently, compute is also detectable and quantifiable, is produced via a highly concentrated supply chain, and access to it can be granted or restricted physically.[7]
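
To illustrate why compute is quantifiable, a commonly used rule of thumb approximates the compute used to train a model as roughly six floating-point operations per model parameter per training token. The sketch below applies this approximation; the model sizes and token counts are hypothetical examples for illustration, not figures reported by any developer.

```python
# Illustrative sketch: estimating training compute with the common
# approximation FLOPs ~ 6 x parameters x training tokens.
# The model sizes and token counts below are hypothetical examples.

def training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate in floating-point operations."""
    return 6 * parameters * tokens

if __name__ == "__main__":
    hypothetical_models = {
        "small research model": (1e9, 2e10),    # 1B parameters, 20B tokens (assumed)
        "large frontier model": (1e12, 1e13),   # 1T parameters, 10T tokens (assumed)
    }
    for name, (params, tokens) in hypothetical_models.items():
        print(f"{name}: ~{training_flops(params, tokens):.1e} FLOPs")
```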

Therefore, we propose that the G7 countries back an international institution tasked with designing international standards for the responsible management of compute resources. Such standards would help expand AI governance to cover the entire AI lifecycle, as appropriate for such a high-risk technology and as proposed in the 2023 Hiroshima Guiding Principles.

The International Compute Governance Consortium (ICGC)

We suggest that the G7 countries support the establishment of an International Compute Governance Consortium (ICGC) tasked with developing standards for the responsible use and distribution of compute resources in AI research and development.

To design well-informed standards, this new institution would initially focus on gathering information on current compute ownership and use by the public and private sectors within its member states’ jurisdictions, tracking compute use and assessing its impact. By collecting such data, the ICGC would create transparency about who controls compute and who has access to it, fostering accountability and informed policymaking. This process would also lay the groundwork for a potential future multilateral organisation that manages access to compute, ensuring that malicious actors, or those following unsafe practices, cannot obtain enough compute to cause significant damage.

While the G7 countries would support and aid the founding of the ICGC, it would be an international institution open to all countries and co-governed by its member states. The internationalisation of compute governance is necessary because advanced AI systems pose extreme risks on a global scale, making AI safety a global challenge. Internationally cohesive action will therefore be impossible without the participation of major non-G7 stakeholders such as China.

However, national AI governance interests and priorities vary. France, for example, plans to invest massively in domestic AI innovation to unlock economic growth and become a global leader in AI.[8] Other countries, such as the US and China,[9] are introducing laws and policies requiring developers to disclose information about the training of their advanced AI models in order to mitigate risks. The challenge in building an international compute governance framework will be finding a solution that respects national interests while remaining effective.

Creating transparency: The Global Compute Registry

To fulfil its mission, the ICGC would create a Global Compute Registry to track the ownership and use of compute resources. Any entity possessing large-scale computing clusters located or operating in the member states would be required to report such holdings, including their location and compute capacity. Changes of possession should also be reported, especially if part of a cluster is transferred to a non-member state. This idea is not without precedent: the 2023 US Executive Order on AI already introduced some reporting requirements on location and total capacity for owners of large compute clusters in the US.[10]

Furthermore, owners would be required to report any provision of access to these clusters to domestic or foreign entities, including the type of use (for instance, training specific or general AI models, foreseen use cases and risks, etc.) and verification of the user’s identity. This approach mirrors Know Your Customer policies in the financial sector, which require companies to verify the identity of their clients to prevent illegal activities. The Global Compute Registry should also publish an annual report presenting data on the amount of compute resources available globally and projecting its growth.
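
For illustration only, the sketch below shows what a single registry record might contain under the reporting requirements described above. All field names and example values are hypothetical assumptions; the ICGC and its member states would define the actual reporting schema.

```python
# Illustrative sketch of a single Global Compute Registry record.
# All field names and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ComputeClusterRecord:
    owner: str                     # verified legal entity possessing the cluster
    location: str                  # member state and site where the cluster operates
    capacity_flop_per_s: float     # total computing capacity of the cluster
    access_grants: list[dict] = field(default_factory=list)  # who may use it, and for what

record = ComputeClusterRecord(
    owner="Example Cloud Provider Ltd.",
    location="Member State A",
    capacity_flop_per_s=1e18,
    access_grants=[{
        "customer": "Example AI Lab",                         # identity verified (Know Your Customer)
        "intended_use": "training a general-purpose AI model",
        "declared_risks": "none identified",
    }],
)
print(record.owner, record.location, record.capacity_flop_per_s)
```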

This would allow the ICGC to record, and make transparent, large concentrations of computing power and to gather information about their use, informing the development of safety-first compute use standards. The vast majority of AI models and their applications would remain untouched by such standards, since they neither require high concentrations of compute for training[11] nor pose extreme risks.

Evaluating the impact: The Compute Resource Impact Assessment

A second important function of the ICGC would be assessing the impacts of compute use by establishing a Compute Resource Impact Assessment protocol. This protocol would evaluate the economic, environmental and societal impacts and risks of compute resource allocation, providing crucial context for the ICGC’s development of compute use standards.

The protocol would define compute thresholds above which training a powerful AI system would be considered high-risk, as the EU did in its AI Act,[12] and re-adjust them regularly. Skewed compute distribution can limit beneficial, low-risk AI innovation and research in under-resourced regions, exacerbating disparities in economic growth, education and employment. The protocol would therefore also assess the societal impact of compute distribution, access and use by analysing the allocation of compute resources across sectors, populations and geographical regions. Finally, under this protocol, the ICGC would examine environmental effects, such as the carbon footprint of compute clusters and chip manufacturing.
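
As a minimal sketch of how such a threshold could operate in practice, the example below flags a planned training run whose estimated compute exceeds a threshold. The figure of 10^25 floating-point operations mirrors the EU AI Act’s presumption of systemic risk; the idea that the ICGC would adopt and regularly re-adjust a comparable threshold is an assumption made here for illustration.

```python
# Illustrative sketch of a threshold check under a Compute Resource
# Impact Assessment protocol. The 1e25 FLOP figure mirrors the EU AI Act's
# systemic-risk presumption; the separate ICGC threshold is hypothetical.
EU_AI_ACT_SYSTEMIC_RISK_FLOP = 1e25   # training-compute threshold in the EU AI Act
ICGC_HIGH_RISK_FLOP = 1e25            # hypothetical value, to be re-adjusted regularly

def is_high_risk_training_run(estimated_flops: float,
                              threshold: float = ICGC_HIGH_RISK_FLOP) -> bool:
    """Flag a planned training run whose estimated compute exceeds the threshold."""
    return estimated_flops >= threshold

# A hypothetical run estimated at ~2.3e25 FLOPs would be flagged as high-risk.
print(is_high_risk_training_run(2.3e25))   # True
```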

Integrating existing compute governance frameworks

The proposed International Compute Governance Consortium would not only help expand AI governance to cover the entire AI lifecycle, as proposed in the 2023 Hiroshima Guiding Principles. It would also build upon and integrate other existing efforts to establish information disclosure regimes for high-risk AI systems, such as those in the US and the EU, as well as international cooperation efforts like the Bletchley Declaration.[13] The 2023 US Executive Order on the safe development and use of AI requires the reporting of ownership of large-scale computing clusters and of provisions of access to foreign entities. Similarly, the EU AI Act considers models trained on compute resources above a given threshold to potentially pose systemic risk; developers of such models must notify the Commission and comply with several safety precautions. Furthermore, the ICGC would complement the OECD’s AI Principles on Robustness, Security and Safety,[14] promoting international cooperation on AI governance.

To increase participation in the ICGC, the G7 countries could cooperate with other international forums, such as the AI Safety Summit series, started by the UK in 2023, and the G20. The G20 could be a fitting partner: its AI Principles state that “AI systems should be robust, secure and safe throughout their entire lifecycle”,[15] echoing the G7’s Hiroshima Principles, and its membership includes major countries not represented in the G7, such as China and India.

By supporting ongoing international cooperation on AI governance, the ICGC would enhance existing national AI governance frameworks by standardising compute data collection and impact assessments. In doing so, it would increase the transparency of compute use and lay the groundwork for international compute resource management standards that ensure the development of safe, beneficial AI.


Eva Behrens and David Janků are Advanced AI Researchers at the International Center for Future Generations (ICFG). Bengüsu Özcan is an Advanced AI Research Assistant at ICFG. Max Reddel is the Advanced AI Program Lead at ICFG.

[1] G7, Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, 30 October 2023, https://www.mofa.go.jp/files/100573473.pdf.

[2] UK Government, Introducing the AI Safety Institute, updated on 17 January 2024, https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute.

[3] Will Henshall, “Nobody Knows How to Safety-Test AI”, in Time, 21 March 2024, https://time.com/6958868.

[4] Center for AI Safety, Statement on AI Risk, May 2023, https://www.safe.ai/work/statement-on-ai-risk.

[5] Jared Kaplan et al., “Scaling Laws for Neural Language Models”, in arXiv, 23 January 2020, https://doi.org/10.48550/arXiv.2001.08361.

[6] Will Knight, “OpenAI’s CEO Says the Age of Giant AI Models Is Already Over”, in Wired, 17 April 2023, https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over.

[7] Girish Sastry et al., “Computing Power and the Governance of Artificial Intelligence”, in arXiv, 13 February 2024, https://doi.org/10.48550/arXiv.2402.08797.

[8] France Artificial Intelligence Commission, Our AI: Our Ambition for France, March 2024, https://www.info.gouv.fr/actualite/25-recommandations-pour-lia-en-france.

[9] Matt Sheehan, “China’s AI Regulations and How They Get Made”, Carnegie Endowment for International Peace Working Paper, July 2023, https://carnegieendowment.org/publi90117.

[10] See Section 4.2(b) of the Executive Order No. 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 30 October 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.

[11] Nicolas Moës and Frank Ryan, Heavy is the Head that Wears the Crown. A Risk-based Tiered Approach to Governing General-Purpose AI, The Future Society, September 2023, https://thefuturesociety.org/heavy-is-the-head-that-wears-the-crown.

[12] Council of the EU, Artificial Intelligence (AI) Act: Council Gives Final Green Light to the First Worldwide Rules on AI, 21 May 2024, https://europa.eu/!3Gf3QN.

[13] UK Government, The Bletchley Declaration by Countries Attending the AI Safety Summit, 1 November 2023, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration.

[14] OECD.AI Policy Observatory, OECD AI Principles: Principle on Robustness, Security and Safety (Principle 1.4), https://oecd.ai/en/dashboards/ai-principles/P8.

[15] G20, G20 Ministerial Statement on Trade and Digital Economy, 9 June 2019, https://www.mofa.go.jp/files/000486596.pdf.