What is AI Governance?
The rules, policies, and frameworks that determine who gets to build AI, how it's used, and what happens when it goes wrong.
7 min read
In March 2023, over a thousand tech leaders signed an open letter calling for a 6-month pause on training AI systems more powerful than GPT-4. Nobody paused.
That letter captured a fundamental problem: AI is moving faster than our ability to govern it. And the question of who governs AI, and how, might be the most important policy question of the decade.
AI governance is the collection of rules, policies, and frameworks that guide the development and use of AI systems.
Why governance matters now
For most of AI's history, governance wasn't urgent. AI systems were narrow: chess engines, spam filters, product recommenders. If they broke, the damage was limited.
That changed. Modern AI systems can:
- Write convincing disinformation at scale
- Generate photorealistic fake images of real people
- Make decisions about loans, hiring, and criminal sentencing
- Write code that finds security vulnerabilities
- Impersonate anyone's voice from a few seconds of audio
The stakes went from "your spam filter missed an email" to "democracy might be undermined." Governance became urgent.
The three levels of AI governance
- Level 1: Company self-governance. Internal policies, safety teams, responsible AI boards. "We'll govern ourselves."
- Level 2: Industry standards. Voluntary commitments, best practices, certifications. "We'll agree on shared standards."
- Level 3: Government regulation. Laws, enforcement, mandatory requirements. "You'll follow these rules or face consequences."
Most of what exists today is Level 1 and 2. Level 3 is catching up fast.
What's actually happening around the world
The EU AI Act (2024)
The most comprehensive AI law in the world. It categorizes AI systems by risk:
Unacceptable risk (banned):
- Social scoring by governments
- Real-time facial recognition in public spaces (with exceptions)
- AI that manipulates people's behavior to cause harm
High risk (heavy regulation):
- AI used in hiring decisions
- AI in medical devices
- AI in credit scoring
- AI in law enforcement
Limited risk (transparency requirements):
- Chatbots must disclose they're AI
- Deepfakes must be labeled
Minimal risk (no regulation):
- Spam filters, video game AI, etc.
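The four tiers above amount to a lookup from use case to risk level. Here is a minimal sketch of that categorization in Python; the tier names follow the Act, but the use-case labels and mapping are simplified illustrations, not a legal determination.

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# Tier names follow the Act; the use-case mapping is a simplified
# example for explanation, not legal advice.

RISK_TIERS = {
    "unacceptable": ["social_scoring", "behavioral_manipulation"],
    "high": ["hiring", "medical_device", "credit_scoring", "law_enforcement"],
    "limited": ["chatbot", "deepfake_generator"],
    "minimal": ["spam_filter", "video_game_ai"],
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to 'minimal'."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"

print(classify("hiring"))       # high
print(classify("spam_filter"))  # minimal
```

The asymmetry in the tiers is the point of the design: a handful of banned and high-risk uses carry almost all the regulatory weight, while everything else defaults to minimal obligations.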
What the EU AI Act means in practice: If you build an AI hiring tool and sell it in Europe, you need to:
- Conduct a risk assessment before deployment
- Ensure the training data is representative and unbiased
- Provide transparency to applicants about how decisions are made
- Allow human oversight of automated decisions
- Register in an EU database of high-risk AI systems
- Submit to regular audits
Violation? Fines up to €35 million or 7% of global revenue, whichever is higher.
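The "whichever is higher" rule can be made concrete with a one-line calculation. This is a sketch of the cap for the most serious violations as described above, not a complete model of how regulators set actual fines.

```python
def max_fine(global_revenue_eur: float) -> float:
    """EU AI Act cap for the most serious violations:
    EUR 35 million or 7% of global annual revenue, whichever is higher."""
    return max(35_000_000, 0.07 * global_revenue_eur)

# A company with EUR 1 billion in revenue: 7% = EUR 70M, above the floor.
print(max_fine(1_000_000_000))  # 70000000.0
# A small firm with EUR 10M in revenue still faces the EUR 35M floor.
print(max_fine(10_000_000))     # 35000000
```

Note the effect of the floor: for any company with revenue below €500 million, the €35 million figure dominates, which is why small vendors cannot treat fines as a proportional cost of doing business.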
US Executive Order on AI (2023)
Less prescriptive than the EU, more focused on:
- Requiring safety testing for powerful AI models
- Establishing standards through NIST
- Addressing AI's impact on workers
- Promoting responsible government use of AI
The US approach is more industry-friendly, relying more on voluntary commitments than hard regulation.
China's AI regulations
China has been surprisingly active:
- Deepfake regulations (2023): Synthetic content must be labeled
- Generative AI rules (2023): AI-generated content must reflect "core socialist values"
- Algorithmic recommendation rules (2022): Users can opt out of recommendation algorithms
China's approach: regulate quickly, enforce selectively, maintain state control.
UK's "pro-innovation" approach
The UK deliberately chose not to create a single AI regulator. Instead, existing regulators (financial, health, competition) apply AI governance within their domains. The goal: don't slow innovation.
The hard questions
AI governance has to answer questions that don't have easy answers:
Who's liable when AI causes harm? If a self-driving car kills someone, is it the manufacturer? The software company? The person who was "supervising"? The training data provider?
How do you regulate something you don't understand? Most legislators can't explain how a neural network works. How do they write laws about it?
How do you balance innovation and safety? Too much regulation kills innovation. Too little enables harm. Every country is gambling on where to draw the line.
How do you govern something global? AI doesn't respect borders. A model trained in the US can be deployed in Europe via an API in Singapore. Whose rules apply?
How do you govern open source? If Meta releases Llama as open source, anyone can download it. You can't un-release a model. How do you govern that?
Corporate governance: What companies actually do
The major AI companies all have some form of internal governance:
Safety teams: Dedicated teams that test models for dangerous capabilities before release. Red-teaming, where people try to break the model.
Usage policies: Rules about what you can and can't do with the API. No generating CSAM, no mass surveillance, no weapons design.
Access controls: Powerful models are sometimes released in stages, first to researchers, then to developers, then to the public.
Model cards: Documentation about what a model can do, its limitations, and its known biases.
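A model card is just structured documentation. Here is a minimal sketch; the field names reflect common practice (intended use, limitations, known biases), but no single mandated schema exists, and the model name and values are hypothetical.

```python
# Minimal model-card sketch. Field names follow common practice;
# the model and its values are hypothetical examples.
model_card = {
    "name": "example-llm-7b",
    "intended_use": "General-purpose text assistance",
    "out_of_scope": ["medical advice", "legal advice"],
    "training_data": "Public web text (a summary, not a full inventory)",
    "limitations": ["May state false information confidently",
                    "Performance degrades outside English"],
    "known_biases": ["Underrepresents low-resource languages"],
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```

The value of the format is less in any one field than in the discipline: a team that has to fill in "limitations" and "known_biases" before release has to go looking for them.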
The problem: corporate self-governance is voluntary. When safety conflicts with competitive pressure, safety doesn't always win.
The OpenAI board crisis (November 2023): OpenAI's board fired CEO Sam Altman, partly over safety disagreements. Within days, employee pressure and investor threats forced the board to reverse course and bring him back. The incident revealed a tension: OpenAI's governance structure was supposed to prioritize safety over profit, but when tested, commercial interests won.
What good governance looks like
Based on emerging consensus:
Transparency. Disclose what your AI can do, how it was trained, and what its limitations are.
Accountability. Someone is responsible when things go wrong. Clear chains of liability.
Fairness. AI systems should not discriminate based on race, gender, age, or other protected characteristics.
Privacy. Training data should be collected ethically. Users should know when AI is making decisions about them.
Safety. Test for dangerous capabilities before deployment. Monitor for misuse after deployment.
Human oversight. Humans should remain in the loop for high-stakes decisions.
The governance gap
Here's the uncomfortable truth: governance is losing the race.
GPT-4 was released in March 2023. The EU AI Act was finalized in March 2024. That's a year gap. And GPT-4 was already old news by then.
AI labs are shipping new capabilities every few months. Regulatory cycles take years. By the time a law is passed, the technology it was designed to govern has already been superseded.
This is why many experts argue for:
- Adaptive regulation: Laws that define principles, not specific technical requirements
- Regulatory sandboxes: Safe spaces where companies can test AI under government supervision
- International coordination: Because AI doesn't care about borders
The future
AI governance is evolving fast:
- International AI safety treaty: Multiple countries are discussing a global framework, similar to nuclear non-proliferation
- Compute governance: Controlling access to the massive GPU clusters needed to train frontier models
- Model evaluation standards: Standardized tests that determine what safety level a model has
- AI auditing industry: Third-party firms that audit AI systems for compliance, bias, and safety, an emerging professional category
The bottom line: AI governance isn't about stopping AI. It's about ensuring AI develops in ways that benefit humanity while minimizing harm. We haven't figured it out yet, not even close. But the conversation has moved from "should we govern AI?" to "how do we govern AI?" And that shift matters.
Governance sets the rules. But how do you actually build safe AI? What is AI Alignment? explores the technical challenge of making AI do what we want.