AI Governance Framework Analysis
Overview
This project maps the regulatory landscape for AI systems across the EU, US, UK, and China — identifying structural gaps, comparing approaches, and developing a framework for evaluating governance effectiveness.
The goal is not to advocate for a specific regime, but to build a comparative analytical tool that can be used to assess proposals as they emerge.
Problem
AI governance is developing in fragments. Individual jurisdictions are producing rules in isolation, with minimal coordination. The result is a patchwork that multinational technology companies can navigate strategically — choosing favourable jurisdictions, exploiting gaps between regimes, and shaping rules before civil society has the capacity to respond.
There is no agreed taxonomy of AI risk, no shared audit standard, and no mechanism for cross-border enforcement. These are not details. They are the load-bearing elements of any serious governance system.
Approach
The project proceeds in three phases:
Phase 1 — Mapping (complete)
- Systematic review of primary regulatory documents: EU AI Act, US Executive Orders on AI, UK AI Safety Institute framework, China’s Interim Measures for the Management of Generative AI Services
- Coding of provisions against a standardised taxonomy: scope, definitions, prohibited uses, high-risk classifications, audit requirements, enforcement mechanisms, extraterritorial reach
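To make the coding step concrete, the sketch below shows one way a single provision could be recorded against the taxonomy. The field names, enum values, and the example record are illustrative placeholders, not the project's actual codebook.

```python
# Minimal sketch of a coding schema for Phase 1 (illustrative, not the project's codebook).
from dataclasses import dataclass, field
from enum import Enum


class Dimension(Enum):
    SCOPE = "scope"
    DEFINITIONS = "definitions"
    PROHIBITED_USES = "prohibited_uses"
    HIGH_RISK = "high_risk_classifications"
    AUDIT = "audit_requirements"
    ENFORCEMENT = "enforcement_mechanisms"
    EXTRATERRITORIAL = "extraterritorial_reach"


@dataclass
class CodedProvision:
    """One provision from a primary regulatory document, coded against the taxonomy."""
    jurisdiction: str          # e.g. "EU", "US", "UK", "CN"
    instrument: str            # e.g. "EU AI Act"
    citation: str              # article/section reference
    dimension: Dimension       # taxonomy dimension the provision addresses
    summary: str               # analyst's one-line paraphrase
    notes: list[str] = field(default_factory=list)


# Example record (the content is a paraphrase for illustration only)
example = CodedProvision(
    jurisdiction="EU",
    instrument="EU AI Act",
    citation="Art. 6",
    dimension=Dimension.HIGH_RISK,
    summary="Classification rules for high-risk AI systems",
)
```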
Phase 2 — Gap Analysis (current)
- Identifying classes of AI system not adequately covered by any existing framework
- Mapping enforcement gaps: where jurisdictional claims conflict or fail to cover cross-border harms
- Comparing definitions: how “high-risk AI” is defined across regimes and what falls through the cracks
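One way to operationalise the definitional comparison is to treat each regime's "high-risk" definition as the set of system classes it captures, then look for classes that no regime, or only some regimes, covers. The sketch below illustrates the idea; the class names and the coverage mapping are invented placeholders, not project findings.

```python
# Illustrative Phase 2 coverage comparison: which classes of AI system fall
# outside every regime, and which are covered only partially.
# The coverage mapping below is a placeholder assumption, not a finding.

coverage = {
    "EU": {"biometric_identification", "employment_screening", "credit_scoring"},
    "US": {"critical_infrastructure", "biometric_identification"},
    "UK": {"frontier_models"},
    "CN": {"recommendation_algorithms", "generative_services"},
}

# Universe of system classes under comparison (one deliberately uncovered class added)
all_classes = set().union(*coverage.values()) | {"general_purpose_fine_tuning"}

uncovered = {c for c in all_classes
             if not any(c in covered for covered in coverage.values())}
partially_covered = {c for c in all_classes
                     if 0 < sum(c in covered for covered in coverage.values()) < len(coverage)}

print("Covered by no regime:", sorted(uncovered))
print("Covered by only some regimes:", sorted(partially_covered))
```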
Phase 3 — Framework Development (planned)
- Developing evaluation criteria for governance proposals
- Producing a comparative scorecard across five dimensions: scope, enforceability, adaptability, international coordination, public accountability
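As a rough illustration of how the scorecard might work in practice, the sketch below scores a hypothetical proposal 0-5 on each of the five dimensions and reports an unweighted mean. The function name, the scale, and the example numbers are assumptions, not part of the planned methodology.

```python
# Sketch of the planned Phase 3 scorecard (scale and example scores are placeholders).

DIMENSIONS = ["scope", "enforceability", "adaptability",
              "international_coordination", "public_accountability"]


def score_proposal(name: str, scores: dict[str, int]) -> dict:
    """Validate per-dimension scores for one proposal and compute an unweighted mean."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"{name}: missing dimensions {missing}")
    if any(not 0 <= s <= 5 for s in scores.values()):
        raise ValueError(f"{name}: scores must be in the range 0-5")
    return {"proposal": name, **scores,
            "mean": sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)}


# Hypothetical usage with placeholder numbers
print(score_proposal("Proposal A", {
    "scope": 3, "enforceability": 2, "adaptability": 4,
    "international_coordination": 1, "public_accountability": 3,
}))
```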
Output
Deliverables to date:
- Regulatory mapping document (internal, 40pp)
- Comparative definitions table across four jurisdictions
- Preliminary gap taxonomy (12 identified gap classes)
Planned:
- Public summary paper (3,000 words)
- Annotated bibliography
- Framework scorecard with documentation
Insights
Regulation is structurally reactive, but the rate at which AI capability is outpacing governance capacity is qualitatively different from previous technological transitions.
The most significant finding so far: existing frameworks are better at regulating AI products than AI capabilities. When a general-purpose model can be fine-tuned into almost any application, product-level regulation creates perverse incentives toward abstraction: developers keep capability at the general-purpose layer, where product rules do not bite, so regulatory risk moves upstream while downstream deployers are left in under-governed territory.
The EU AI Act’s general-purpose AI model provisions are an attempt to address this, but enforcement mechanisms remain underspecified.
Next Steps
- Complete Phase 2 gap analysis by end of Q2 2026
- Begin drafting public summary paper
- Seek feedback from practitioners in regulatory policy
- Explore whether the framework could be extended to cover non-state governance actors (industry bodies, technical standards organisations)