Summary and Introduction

People who want to improve the trajectory of AI often assume their options for object-level work are limited to technical safety research or non-technical governance work.

But there is a whole other category of options: technical work in AI governance. This is technical work that mainly boosts AI governance interventions, such as norms, regulations, laws, and international agreements that promote positive outcomes from AI.

What I mean by “technical work in AI governance”

I’m talking about work that:

  1. Is technical (e.g. hardware/ML engineering) or draws heavily on technical expertise.
  2. Contributes to AI’s trajectory mainly by improving the chances that AI governance interventions succeed (as opposed to by making progress on technical safety problems or building up the communities concerned with these problems).

Types of technical work in AI governance

Engineering technical levers to make AI coordination/regulation enforceable

To help ensure AI goes well, we may need good coordination and/or regulation. For coordination or regulation on AI to come about and stick, we need politically acceptable methods of enforcing it (i.e. catching and penalizing or stopping violators). And designing politically acceptable enforcement methods requires various kinds of engineers, as discussed in the next several sections.

Hardware engineering for enabling AI coordination/regulation

To help enforce AI coordination/regulation, it might be possible to build certain mechanisms into AI-specialized chips, or to install other devices at data centers. As a non-exhaustive list of speculative examples:
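To make the general idea of on-chip enforcement mechanisms concrete, here is a toy sketch (my own illustration, not a proposal from this article) of one speculative primitive: a chip that emits cryptographically signed usage reports, which a verifier holding the shared key could check for tampering. The function names and the simple HMAC-based scheme are assumptions for illustration; any real design would need hardware security and key management far beyond this.

```python
import hashlib
import hmac
import json

# Hypothetical symmetric key provisioned into the chip at fabrication.
# A real scheme would more likely use asymmetric keys and secure hardware.
SECRET_KEY = b"burned-in-device-key"


def sign_usage_report(chip_id: str, hours_active: float, timestamp: int) -> dict:
    """Produce a tamper-evident usage attestation (illustrative only)."""
    # Canonical serialization so signer and verifier hash identical bytes.
    payload = json.dumps(
        {"chip_id": chip_id, "hours_active": hours_active, "ts": timestamp},
        sort_keys=True,
    ).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}


def verify_usage_report(report: dict) -> bool:
    """A verifier holding the key can check the report wasn't altered."""
    expected = hmac.new(
        SECRET_KEY, report["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, report["tag"])
```

For example, a regulator receiving `sign_usage_report("chip-001", 12.5, 1700000000)` could confirm it with `verify_usage_report`, while any edit to the reported hours would make verification fail.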

Software/ML engineering for enabling AI coordination/regulation