AI regulations are rapidly evolving globally. China, Europe, and the United States are adopting different approaches, raising a fundamental question: is it more effective to regulate AI models or their use?
Why model-centric regulation fails
Attempting to control AI models themselves, for example through licensing requirements or restrictions on open weights, is largely ineffective. Once released, model weights replicate at near-zero cost, making their spread practically impossible to prevent. Such rules risk penalizing compliant companies while less scrupulous actors simply ignore them.
A practical alternative: Regulate use, proportionate to risk
A use-based regime classifies deployments by risk and scales obligations accordingly. Here is a workable template (a minimal code sketch follows the list):
- General-purpose consumer interaction: Transparency, acceptable use policies, mechanisms for flagging problematic outputs.
- Low-risk assistance: Simple disclosure, baseline data hygiene.
- Moderate-risk decision support: Documented risk assessment, meaningful human oversight, an "AI bill of materials" (model lineage, key evaluations, mitigations).
- High-impact uses in safety-critical contexts: Rigorous pre-deployment testing, continuous monitoring, incident reporting, authorization based on validated performance.
- Hazardous dual-use functions: Confine to licensed facilities and verified operators; prohibit capabilities whose primary purpose is unlawful.
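As a rough illustration of how such a tiered regime could be encoded in a compliance tool, here is a minimal Python sketch mapping each tier above to its obligations, with a hypothetical record type for the "AI bill of materials". All names (RiskTier, AIBillOfMaterials, obligations_for) are illustrative assumptions, not references to any real statute or library.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class RiskTier(Enum):
    """Risk tiers mirroring the template above (names are illustrative)."""
    GENERAL_PURPOSE = auto()
    LOW_RISK = auto()
    MODERATE_RISK = auto()
    HIGH_IMPACT = auto()
    HAZARDOUS_DUAL_USE = auto()

# Obligations per tier, paraphrased from the template above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.GENERAL_PURPOSE: [
        "transparency", "acceptable-use policy", "output-flagging mechanism",
    ],
    RiskTier.LOW_RISK: ["simple disclosure", "baseline data hygiene"],
    RiskTier.MODERATE_RISK: [
        "documented risk assessment", "meaningful human oversight",
        "AI bill of materials",
    ],
    RiskTier.HIGH_IMPACT: [
        "pre-deployment testing", "continuous monitoring",
        "incident reporting", "performance-based authorization",
    ],
    RiskTier.HAZARDOUS_DUAL_USE: [
        "licensed facility", "verified operators",
        "prohibition of primarily unlawful capabilities",
    ],
}

@dataclass
class AIBillOfMaterials:
    """Hypothetical record backing the 'AI bill of materials' obligation."""
    model_lineage: list[str]           # e.g. base model -> fine-tuned variants
    key_evaluations: dict[str, float]  # evaluation name -> score
    mitigations: list[str] = field(default_factory=list)

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the obligations a deployment at this tier must satisfy."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for(RiskTier.MODERATE_RISK))
```

The point of the sketch is structural: obligations attach to the deployment's risk tier, not to the underlying model.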
Close the loop at real-world chokepoints
AI-enabled systems become real when they're connected to users, money, infrastructure, and institutions. That's where regulators should focus enforcement: app stores, cloud platforms, payment systems, and insurers. For high-risk uses, we need to require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and post-incident review.
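To make those three controls concrete, here is a minimal Python sketch, purely illustrative and under assumed names: a verified-operator check (identity binding), a tier check (capability gating), and a hash-chained append-only log (tamper evidence), where editing any past entry breaks verification.

```python
import hashlib
import json
import time

VERIFIED_OPERATORS = {"operator-123"}  # hypothetical identity registry

def gate(operator_id: str, requested_tier: str, authorized_tiers: set[str]) -> None:
    """Identity binding plus capability gating: refuse unverified operators
    and capabilities outside the operator's authorized risk tiers."""
    if operator_id not in VERIFIED_OPERATORS:
        raise PermissionError("operator identity not verified")
    if requested_tier not in authorized_tiers:
        raise PermissionError(f"tier {requested_tier!r} not authorized")

class TamperEvidentLog:
    """Append-only log where each entry commits to the previous entry's hash."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, operator_id: str, action: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "ts": time.time(),
            "operator": operator_id,  # who acted (identity binding)
            "action": action,         # which capability was invoked
            "prev": prev_hash,        # link to the previous entry
        }
        # Hash a canonical serialization so verification is deterministic.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks a hash link."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

if __name__ == "__main__":
    gate("operator-123", "high_impact", authorized_tiers={"high_impact"})
    log = TamperEvidentLog()
    log.append("operator-123", "high_impact_inference")
    log.append("operator-123", "model_update")
    assert log.verify()
    log.entries[0]["action"] = "benign_query"  # simulated tampering
    assert not log.verify()
```

A hash chain is the simplest tamper-evident design; a production system would typically add signed entries or an external transparency log, but the auditability property regulators need is the same.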
Alignment with the EU and China
This approach aligns with the EU AI Act, which centers obligations on risk at the point of impact. It differs, however, in accommodating U.S. constitutional constraints: rather than restricting models themselves, it regulates what AI operators may do in sensitive settings. Practical ideas can also be borrowed from China, such as verifiable provenance for synthetic media and the registration of methods and risk controls for high-risk services.