TL;DR

AI Regulation Needs a Light Touch

Background: Artificial intelligence—or “AI”—is everywhere these days. It powers our smartphones, cars, homes, and entertainment. It helps us diagnose diseases, teach children, and create art. It promises to revolutionize every aspect of our lives, for better or worse. 

But … How should public policy respond to this powerful and rapidly evolving force? How should we ensure that AI serves our interests and values, rather than undermining or subverting them?

Some observers and policymakers fear that AI could pose existential threats to humanity, such as unleashing rogue superintelligences, triggering mass job losses, or sparking global wars. They argue that governments should take a prescriptive approach to AI regulation to preempt these speculative threats.

Some argue that we need to impose strict and specific rules on AI development and deployment before it is too late. In a recent U.S. Senate Judiciary Committee hearing, OpenAI CEO Sam Altman suggested that the United States needs a central regulator for AI.

However … This approach is likely to be both misguided and counterproductive. Overregulation could stifle innovation and competition, depriving us of the benefits and opportunities that AI offers. It could put some countries at a disadvantage relative to those that pursue AI openly and aggressively. And it could hamper our ability to learn from AI as it evolves and to develop better AI in response.

ADOPT AN ADAPTIVE APPROACH

A more sensible and effective approach to oversight is to pursue an adaptive framework that relies on existing laws and institutions, rather than creating new regulations, agencies, and enforcement mechanisms.

There are already laws, policies, agencies, and courts in place to address actual harms and risks, rather than hypothetical or speculative ones. This is what we've done with earlier transformative technologies like biotech, nanotech, and the internet. Each has been regulated by applying existing bodies of law and legal principles, such as antitrust, tort, contract, and consumer protection.

In addition, an adaptive approach would foster international dialogue and cooperation, which have been essential for establishing norms and standards for emerging technologies.

AN ADAPTIVE APPROACH DOES NOT MEAN COMPLACENCY

Pursuing an adaptive approach does not mean that we should be complacent or naive about AI. Where the technology is misused or causes harm, there should be actionable legal consequences. For example, if a real-estate developer intentionally used AI tools to screen out individuals from purchasing homes on the basis of protected characteristics, that should be actionable. If a criminal found a novel way to use ChatGPT to commit fraud, that should be actionable. If generative AI is used to create “deep fakes” that amount to libel, that should be actionable. But in each of these cases, the relevant unit of legal analysis is not the AI itself, but the actions of the wrongdoers and the harms they cause.

Ultimately, it would be fruitless to try to build a regulatory framework that would make it impossible for bad actors to misuse AI. Bad actors will always find ways to misuse tools, and heavy-handed regulatory requirements would chill the development of the very AI tools that could combat misuse.

DON’T NEGLECT THE BENEFITS

If history is any guide, it is likely that AI tools will allow firms and individuals to do more with less, expanding their productivity and improving their incomes.

By freeing capital from easily automated tasks, existing firms and new entrepreneurs could better focus on their core business missions. For example, investments in marketing or HR could be redeployed to R&D. At this point, we have little idea how AI will be used by people and firms. And more importantly, neither do politicians, policymakers, or regulators.

OVERREGULATION WOULD INCREASE MARKET POWER

Overly burdensome AI regulation would likely hinder the entry and growth of new AI firms. It should be no surprise, for example, that the CEO of OpenAI, an established player in the AI market, would favor a strong central regulator that can impose entry barriers on newcomers. It is well-known in both law and economics that incumbent firms can profit from raising their rivals’ regulatory costs.

This dynamic can create strong strategic incentives for industry incumbents to promote regulation and can lead to a cozy relationship between agencies and incumbent firms in a process known as “regulatory capture.”

CONCLUSION

The key challenge confronting policymakers is to balance mitigating the actual risks that AI poses against fostering the substantial benefits it offers.

To be sure, AI will bring about disruption and may provide a conduit for bad actors, just as technologies like the printing press and the internet have done in the past. This does not, however, merit taking an overly cautious stance that would suppress many of the potential benefits of AI.

Policymakers must eschew dystopian science-fiction narratives and instead base policy on realistic scenarios. Moreover, they should recognize that existing laws, policies, and agencies already confer enormous authority to find and punish those who misuse AI.

For more on this issue, see the response from the International Center for Law & Economics (ICLE) to the National Telecommunications and Information Administration’s AI Accountability Policy inquiry, as well as ICLE’s response to a similar inquiry from the White House Office of Science and Technology Policy.