Document Type
Article
Publication Date
Fall 2025
Publication Title
Dickinson Law Review
Abstract
Artificial intelligence (AI) is advancing at an unprecedented pace, with generative systems exerting growing influence over social, economic, and political life. While AI offers opportunities for innovation and efficiency, it also poses risks ranging from misinformation and job displacement to existential threats if highly autonomous systems evade human control. Across industry, government, and civil society, there is broad consensus that AI requires oversight.
Yet traditional U.S. regulatory approaches face six significant barriers: (1) technology outpacing legislation, (2) limited AI expertise among policymakers, (3) regulatory capture, (4) political gridlock, (5) outdated governance structures, and (6) the inherent complexity of AI. Combined with AI's global reach, these obstacles make effective U.S. regulation unlikely in the near term, underscoring the need for alternatives to government action.
Soft law (industry standards, voluntary compliance, and multi-stakeholder initiatives) has successfully guided governance in other rapidly evolving fields. It offers agility, cross-border cooperation, and the ability to adapt quickly without legislative delays. Soft law can encourage innovation while minimizing jurisdictional conflicts. Its drawbacks include non-binding authority and the risk of industry dominance, but when carefully designed, it can provide meaningful oversight in areas where governments cannot act swiftly or effectively.
Given AI's complexity and international scope, I propose establishing an International Council on AI Risk (ICAR), a multi-stakeholder body focused on AI safety and global standard-setting. An ICAR would unite governments and industry to establish safeguards, create compliance mechanisms, and develop adaptive frameworks. Unlike rigid government structures, it could evolve alongside AI while providing enforceable soft law mechanisms to foster responsible development.
Challenges, such as defining scope, securing industry compliance, and avoiding bureaucracy, would require careful governance design and broad participation. Nonetheless, the stakes of AI development demand proactive solutions. With government regulation lagging and risks escalating, an ICAR represents the most pragmatic and scalable model for AI governance. By leveraging international cooperation, industry incentives, and adaptive oversight, it could address AI's global risks far more effectively than traditional regulatory approaches.
Volume
130
Issue
1
First Page
157
Last Page
220
Recommended Citation
Tracy Hresko Pearl, Governance in the Absence of Government, 130 Dickinson L. Rev. 157 (2025).
Included in
Artificial Intelligence and Robotics Commons, Science and Technology Law Commons, Transportation Law Commons