Summary

Computing power – compute for short – is a key driver of AI progress. Over the past thirteen years, the amount of compute used to train leading AI systems has increased by a factor of 350 million. This has enabled the major AI advances that have recently gained global attention.
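To put that figure in perspective (a back-of-the-envelope calculation based only on the numbers quoted above, not on the original source): a 350-million-fold increase over thirteen years works out to roughly a 4.5x increase per year, or a doubling time of about six months. The short sketch below works through the arithmetic.

```python
import math

# Back-of-the-envelope check of the growth figure quoted above:
# a 350-million-fold increase in training compute over 13 years.
total_factor = 350e6
years = 13

# Implied average year-over-year growth factor.
annual_factor = total_factor ** (1 / years)

# Implied doubling time, in months.
doubling_months = 12 * math.log(2) / math.log(annual_factor)

print(f"Implied annual growth: ~{annual_factor:.1f}x per year")  # ~4.5x
print(f"Implied doubling time: ~{doubling_months:.1f} months")   # ~5.5 months
```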

Governments have taken notice. They are increasingly engaged in compute governance: using compute as a lever to pursue AI policy goals, such as limiting misuse risks, supporting domestic industries, or engaging in geopolitical competition.

There are at least three ways compute can be used to govern AI. Governments can:

- Track or monitor compute to gain visibility into AI development and use
- Subsidize or restrict access to compute to shape the allocation of resources across AI projects
- Monitor and restrict access to compute to enforce rules against irresponsible or malicious AI development and use

Compute governance is feasible because compute is detectable, excludable, and quantifiable: training advanced AI systems requires tens of thousands of specialized chips, which are physical goods whose production, sale, and use can be tracked, restricted, and measured. Compute’s detectability and excludability are further enhanced by the highly concentrated structure of the AI supply chain: very few companies are capable of producing the tools needed to design advanced chips, the machines needed to make them, or the data centers that house them.

However, just because compute can be used as a tool to govern AI doesn’t mean that it should be used in all cases. Compute governance is a double-edged sword, with both potential benefits and the risk of negative consequences: it can support widely shared goals like safety, but it can also be used to infringe on civil liberties, perpetuate existing power structures, and entrench authoritarian regimes. Indeed, some things are better ungoverned.

Compute plays a crucial role in AI

Across large language models, Go, protein folding, and autonomous vehicles, the greatest breakthroughs have come from developers leveraging huge amounts of computing power to train models on vast datasets, allowing the models to learn how to solve a problem on their own rather than having that knowledge hard-coded.

Compute governance is feasible

Compute is easier to govern than the other key inputs to AI, such as data and algorithms. As such, compute can be used as a tool for AI governance.

Four features contribute to compute’s governability: