ICLE Comments to NTIA on Dual-Use Foundation AI Models with Widely Available Model Weights

I. Introduction

We thank the National Telecommunications and Information Administration (NTIA) for the opportunity to contribute to this request for comments (RFC) in the “Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights” proceeding. In these comments, we endeavor to offer recommendations to foster the innovative and responsible production of artificial intelligence (AI), encompassing both open-source and proprietary models. Our comments are guided by a belief in the transformative potential of AI, while recognizing NTIA’s critical role in guiding the development of regulations that not only protect consumers but also enable this dynamic field to flourish. The agency should seek to champion a balanced and forward-looking approach toward AI technologies that allows them to evolve in ways that maximize their social benefits, while navigating the complexities and challenges inherent in their deployment.

NTIA’s question “How should [the] potentially competing interests of innovation, competition, and security be addressed or balanced?”[1] gets to the heart of ongoing debates about AI regulation. There is no panacea to be discovered, as all regulatory choices require balancing tradeoffs. It is crucial to bear this in mind when evaluating, e.g., regulatory proposals that implicitly treat AI as inherently dangerous and regard it as obvious that stringent regulation is the only effective strategy to mitigate such risks.[2] Such presumptions discount AI’s unknown but potentially enormous capacity to produce innovation, and inadequately account for other tradeoffs inherent to imposing a risk-based framework (e.g., requiring disclosure of trade secrets or particular kinds of transparency that could yield new cybersecurity attack vectors). An overly cautious stance not only risks stifling AI’s evolution, but may also preclude a full exploration of its potential to foster social, economic, and technological advancement. A more restrictive regulatory environment may also render AI technologies more homogenous and smother development of the kinds of diverse AI applications needed to foster robust competition and innovation.

We observe this problematic framing in the executive order (EO) that serves as the provenance of this RFC.[3] The EO repeatedly proclaims the importance of “[t]he responsible development and use of AI” in order to “mitigat[e] its substantial risks.”[4] Specifically, the order highlights concerns over “dual-use foundation models”—i.e., AI systems that, while beneficial, could pose serious risks to national security, national economic security, national public health, or public safety.[5] Concerningly, one of the categories the EO flags as illicit “dual use” is systems “permitting the evasion of human control or oversight through means of deception or obfuscation.”[6] This open-ended category could be interpreted so broadly that essentially any general-purpose generative-AI system would qualify.

The EO also repeatedly distinguishes “open” versus “closed” approaches to AI development, while calling for “responsible” innovation and competition.[7] On our reading, the emphasis the EO places on this distinction raises alarm bells about the administration’s inclination to stifle innovation through overly prescriptive regulatory frameworks, diminishment of the intellectual property rights that offer incentives for innovation, and regulatory capture that favors incumbents over new entrants. In favoring one model of AI development over another, the EO’s prescriptions could inadvertently hamper the dynamic competitive processes that are crucial both for technological progress and for the discovery of solutions to the challenges that AI technology poses.

Given the inchoate nature of AI technology—much less the uncertain markets in which that technology will ultimately be deployed and commercialized—NTIA has an important role to play in elucidating for policymakers the nuances that might lead innovators to choose an open or closed development model, without presuming that one model is inherently better than the other—or that either is necessarily “dangerous.” Ultimately, the preponderance of AI risks will almost certainly emerge idiosyncratically. It will be incumbent on policymakers to address such risks in an iterative fashion as they become apparent. For now, it is critical to resist the urge to enshrine crude and blunt categories for the heterogeneous suite of technologies currently gathered under the broad banner of “AI.”

Section II of these comments highlights the importance of grounding AI regulation in actual harms, rather than speculative risks, while outlining the diversity of existing AI technologies and the need for tailored approaches. Section III starts with discussion of some of the benefits and challenges posed by both open and closed approaches to AI development, while cautioning against overly prescriptive definitions of “openness” and advocating flexibility in regulatory frameworks. It proceeds to examine the EO’s prescription to regulate so-called “dual-use” foundation models, underscoring some potential unintended consequences for open-source AI development and international collaboration. Section IV offers some principles to craft an effective regulatory model for AI, including distinguishing between low-risk and high-risk applications, avoiding static regulatory approaches, and adopting adaptive mechanisms like regulatory sandboxes and iterative rulemaking. Section V concludes.

II. Risk Versus Harm in AI Regulation

In many of the debates surrounding AI regulation, disproportionate focus is placed on the need to mitigate risks, without sufficient consideration of the immense benefits that AI technologies could yield. Moreover, because these putative risks remain largely hypothetical, proposals to regulate AI descend quickly into an exercise in shadowboxing.

Indeed, there is no single coherent definition of what even constitutes “AI.” The term encompasses a wide array of technologies, methodologies, and applications, each with distinct characteristics, capabilities, and implications for society. From foundational models that can generate human-like text, to algorithms capable of diagnosing diseases with greater accuracy than human doctors, to “simple” algorithms that facilitate a more tailored online experience, AI applications and their underlying technologies are as varied as they are transformative.

This diversity has profound implications for the regulation and development of AI. Very different regulatory considerations are relevant to AI systems designed for autonomous vehicles than for those used in financial algorithms or creative-content generation. Each application domain comes with its own set of risks, benefits, ethical dilemmas, and potential social impacts, necessitating tailored approaches to each use case. And none of these properties of AI map clearly onto the “open” and “closed” designations highlighted by the EO and this RFC. This counsels for focus on specific domains and specific harms, rather than how such technologies are developed.[8]

As in prior episodes of fast-evolving technologies, what is considered cutting-edge AI today may be obsolete tomorrow. This rapid pace of innovation further complicates the task of crafting policies and regulations that will be both effective and enduring. Policymakers and regulators must navigate this terrain with a nuanced understanding of AI’s multifaceted nature, including by embracing flexible and adaptive regulatory frameworks that can accommodate AI’s continuing evolution.[9] A one-size-fits-all approach could inadvertently stifle innovation or entrench the dominance of a few large players by imposing barriers that disproportionately affect smaller entities or emerging technologies.

Experts in law and economics have long scrutinized both market conduct and regulatory rent seeking that serve to enhance or consolidate market power by disadvantaging competitors, particularly through increasing the costs incurred by rivals.[10] Various tactics may be employed to undermine competitors or exclude them from the market that do not involve direct price competition. It is widely recognized that “engaging with legislative bodies or regulatory authorities to enact regulations that negatively impact competitors” produces analogous outcomes.[11] It is therefore critical that the emerging markets for AI technologies not engender opportunities for firms to acquire regulatory leverage over rivals. Instead, recognizing the plurality of AI technologies and encouraging a multitude of approaches to AI development could help to cultivate a more vibrant and competitive ecosystem, driving technological progress forward and maximizing AI’s potential social benefits.

This overarching approach counsels skepticism about risk-based regulatory frameworks that fail to acknowledge how the theoretical harms of one type of AI system may be entirely different from those of another. Obviously, the regulation of autonomous drones is a very different sort of problem than the regulation of predictive policing or automated homework tutors. Even within a single circumscribed domain of generative AI—such as “smart chatbots” like ChatGPT or Claude—different applications may present entirely different kinds of challenges. A highly purpose-built version of such a system might be employed by government researchers to develop new materiel for the U.S. Armed Forces, while a general-purpose commercial chatbot would employ layers of protection to ensure that ordinary users couldn’t learn how to make advanced weaponry. Rather than treating “chatbots” as possible vectors for weapons development, a more appropriate focus would target high-capability systems designed to assist in developing such weaponry. Were a general-purpose chatbot inadvertently to reveal some information on building weapons, all incentives would lead that AI’s creators to treat it as a bug to fix, not a feature to expand.

Take, for example, the recent public response to the much less problematic AI-system malfunctions that accompanied Google’s release of its Gemini program.[12] Gemini was found to generate historically inaccurate images, such as ethnically diverse U.S. senators from the 1800s, including women.[13] Google quickly acknowledged that it did not intend for Gemini to create inaccurate historical images and turned off the image-generation feature to allow time for the company to work on significant improvements before re-enabling it.[14] While Google blundered in its initial release, it had every incentive to discover and remedy the problem. The market response provided further incentive for Google to get it right in the future.[15] Placing the development of such systems under regulatory scrutiny because some users might be able to jailbreak a model and generate some undesirable material would create disincentives to the production of AI systems more generally, with little gained in terms of public safety.

Rather than focus on the speculative risks of AI, it is essential to ground regulation in the need to address tangible harms that stem from the observed impacts of AI technologies on society. Moreover, focusing on realistic harms would facilitate a more dynamic and responsive regulatory approach. As AI technologies evolve and new applications emerge, so too will the potential harms. A regulatory framework that prioritizes actual harms can adapt more readily to these changes, enabling regulators to update or modify policies in response to new evidence or social impacts. This flexibility is particularly important for a field like AI, where technological advancements could quickly outpace regulation, creating gaps in oversight that may leave individuals and communities vulnerable to harm.

Furthermore, like any other body of regulatory law, AI regulation must be grounded in empirical evidence and data-driven decision making. Demanding a solid evidentiary basis as a threshold for intervention would help policymakers to avoid the pitfalls of reacting to sensationalized or unfounded AI fears. This would not only enhance regulators’ credibility with stakeholders, but would also ensure that resources are dedicated to addressing the most pressing and substantial issues arising from the development of AI.

III. The Regulation of Foundation Models

NTIA is right to highlight the tremendous promise that attends the open development of AI technologies:

Dual use foundation models with widely available weights (referred to here as open foundation models) could play a key role in fostering growth among less resourced actors, helping to widely share access to AI’s benefits…. Open foundation models can be readily adapted and fine-tuned to specific tasks and possibly make it easier for system developers to scrutinize the role foundation models play in larger AI systems, which is important for rights- and safety-impacting AI systems (e.g. healthcare, education, housing, criminal justice, online platforms etc.)

…Historically, widely available programming libraries have given researchers the ability to simultaneously run and understand algorithms created by other programmers. Researchers and journals have supported the movement towards open science, which includes sharing research artifacts like the data and code required to reproduce results.[16]

The RFC proceeds to seek input on how to define “open” and “widely available.”[17] These, however, are the wrong questions. NTIA should instead proceed from the assumption that there are no harms inherent to either “open” or “closed” development models; it should be seeking input on anything that might give rise to discrete harms in either open or closed systems.

NTIA can play a valuable role by recommending useful alterations to existing law where gaps currently exist, regardless of the business or distribution model employed by the AI developer. In short, there is nothing necessarily more or less harmful about adopting an “open” or a “closed” approach to software systems. The decision to pursue one path over the other will be made based on the relevant tradeoffs that particular firms face. Embedding such distinctions in regulation is arbitrary, at best, and counterproductive to the fruitful development of AI, at worst.

A. ‘Open’ or ‘Widely Available’ Model Weights

To the extent that NTIA is committed to drawing distinctions between “open” and “closed” approaches to developing foundation models, it should avoid overly prescriptive definitions of what constitutes “open” or “widely available” model weights that could significantly hamper the progress and utility of AI technologies.

Imposing narrow definitions risks creating artificial boundaries that fail to accurately reflect AI’s technical and operational realities. They could also inadvertently exclude or marginalize innovative AI models that fall outside those rigid parameters, despite their potential to contribute positively to technological advancement and social well-being. For instance, a definition of “open” that requires complete public accessibility without any form of control or restriction might discourage organizations from sharing their models, fearing misuse or loss of intellectual property.

Moreover, prescriptive definitions could stifle the organic growth and evolution of AI technologies. The AI field is characterized by its rapid pace of change, where today’s cutting-edge models may become tomorrow’s basic tools. Prescribing fixed criteria for what constitutes “openness” or “widely available” risks anchoring the regulatory landscape to this specific moment in time, leaving the regulatory framework less able to adapt to future developments and innovations.

Given AI developers’ vast array of applications, methodologies, and goals, it is imperative that any definitions of “open” or “widely available” model weights embrace flexibility. A flexible approach would acknowledge how the various stakeholders within the AI ecosystem have differing needs, resources, and objectives, from individual developers and academic researchers to startups and large enterprises. A one-size-fits-all definition of “openness” would fail to accommodate this diversity, potentially privileging certain forms of innovation over others and skewing the development of AI technologies in ways that may not align with broader social needs.

Moreover, flexibility in defining “open” and “widely available” must allow for nuanced understandings of accessibility and control. There can, for example, be legitimate reasons to limit openness, such as protecting sensitive data, ensuring security, and respecting intellectual-property rights, while still promoting a culture of collaboration and knowledge sharing. A flexible regulatory approach would seek a balanced ecosystem where the benefits of open AI models are maximized, and potential risks are managed effectively.

B. The Benefits of ‘Open’ vs ‘Closed’ Business Models

NTIA asks:

What benefits do open model weights offer for competition and innovation, both in the AI marketplace and in other areas of the economy? In what ways can open dual-use foundation models enable or enhance scientific research, as well as education/training in computer science and related fields?[18]

An open approach to AI development has obvious benefits, as NTIA has itself acknowledged in other contexts.[19] Open-foundation AI models represent a transformative force, characterized by their accessibility, adaptability, and potential for widespread application across various sectors. The openness of these models may serve to foster an environment conducive to innovation, wherein developers, researchers, and entrepreneurs can build on existing technologies to create novel solutions tailored to diverse needs and challenges.

The inherent flexibility of open-foundation models can also catalyze a competitive market, encouraging a healthy ecosystem where entities ranging from startups to established corporations may all participate on roughly equal footing. By lowering some entry barriers related to access to basic AI technologies, this competitive environment can further drive technological advancements and price efficiencies, ultimately benefiting consumers and society at large.

But more “closed” approaches can also prove very valuable. As NTIA notes in this RFC, it is rarely the case that a firm pursues a purely open or closed approach. These terms exist along a continuum, and firms blend models as necessary.[20] And just as firms readily mix elements of open and closed business models, a regulator should be agnostic about the precise mix that firms employ, which ultimately must align with the realities of market dynamics and consumer preferences.

Both open and closed approaches offer distinct benefits and potential challenges. For instance, open approaches might excel in fostering a broad and diverse ecosystem of applications, thereby appealing to users and developers who value customization and variety. They can also facilitate a more rapid dissemination of innovation, as they typically impose fewer restrictions on the development and distribution of new applications. Conversely, closed approaches, with their curated ecosystems, often provide enhanced security, privacy, and a more streamlined user experience. This can be particularly attractive to users less inclined to navigate the complexities of open systems. Under the right conditions, closed systems can likewise foster a healthy ecosystem of complementary products.

The experience of modern digital platforms demonstrates that there is no universally optimal approach to structuring business activities, thus illustrating the tradeoffs inherent in choosing among open and closed business models. The optimal choice depends on the specific needs and preferences of the relevant market participants. As Jonathan M. Barnett has noted:

Open systems may yield no net social gain over closed systems, can pose a net social loss under certain circumstances, and . . . can impose a net social gain under yet other circumstances.[21]

Similar considerations apply in the realm of AI development. Closed or semi-closed ecosystems can offer such advantages as enhanced security and curated offerings, which may appeal to certain users and developers. These benefits, however, may come at the cost of potentially limited innovation, as a firm must rely on its own internal processes for research and development. Open models, on the other hand, while fostering greater collaboration and creativity, may also introduce risks related to quality control, intellectual-property protection, and a host of other concerns that may be better controlled in a closed business model. Even along innovation dimensions, closed platforms can in many cases outperform open models.

With respect to digital platforms like the App Store and Google Play Store, there is a “fundamental welfare tradeoff between two-sided proprietary…platforms and two-sided platforms which allow ‘free entry’ on both sides of the market.”[22] Consequently, “it is by no means obvious which type of platform will create higher product variety, consumer adoption and total social welfare.”[23]

To take another example, consider the persistently low adoption rates for consumer versions of the open-source Linux operating system, versus more popular alternatives like Windows or MacOS.[24] A closed model like Apple’s MacOS is able to outcompete open solutions by better leveraging network effects and developing a close relationship with end users.[25] Even in this example, adoption of open versus closed models varies across user types, with, e.g., developers showing a strong preference for Linux over Mac, and only a slight preference for Windows over Linux.[26] This underscores the point that the suitability of an open or closed model varies not only by firm and product, nor even solely by user, but by the unique fit of a particular model for a particular user in a particular context. Many of those Linux-using developers will likely not use it on their home computing device, for example, even if they prefer it for work.

The dynamics among consumers and developers further complicate prevailing preferences for open or closed models. For some users, the security and quality assurance provided by closed ecosystems outweigh the benefits of open systems’ flexibility. On the developer side, the lower barriers to entry in more controlled ecosystems that smooth the transaction costs associated with developing and marketing applications can democratize application development, potentially leading to greater innovation within those ecosystems. Moreover, distinctions between open and closed models can play a critical role in shaping inter-brand competition. A regulator placing its thumb on the business-model scale would push the relevant markets toward less choice and lower overall welfare.[27]

By differentiating themselves through a focus on ease-of-use, quality, security, and user experience, closed systems contribute to a vibrant competitive landscape where consumers have clear choices between differing “brands” of AI. Forcing an AI developer to adopt practices that align with a regulator’s preconceptions about the relative value of “open” and “closed” risks homogenizing the market and diminishing the very competition that spurs innovation and consumer choice.

Consider some of the practical benefits sought by deployers when choosing between open and closed models. For example, it’s not straightforward to say closed is inherently better than open when considering issues of data sharing or security; even here, there are tradeoffs. Open innovation in AI—characterized by the sharing of data, algorithms, and methodologies within the research community and beyond—can mitigate many of the risks associated with model development. This openness fosters a culture of transparency and accountability, where AI models and their applications are subject to scrutiny by a broad community of experts, practitioners, and the general public. This collective oversight can help to identify and address potential safety and security concerns early in the development process, thus enhancing AI technologies’ overall trustworthiness.

By contrast, a closed system may implement and enforce standardized security protocols more quickly. A closed system may have a sharper, more centralized focus on providing data security to users, which may perform better along some dimensions. And while the availability of code may provide security in some contexts, in other circumstances, closed systems perform better.[28]

In considering ethical AI development, different types of firms should be free to experiment with different approaches, even blending them where appropriate. For example, Anthropic’s approach to “Collective Constitutional AI” for its Claude models adopts what is arguably a “semi-open” model, blending proprietary elements with certain aspects of openness to foster innovation, while also maintaining a level of control.[29] This model might strike an appropriate balance, in that it ensures some degree of proprietary innovation and competitive advantage while still benefiting from community feedback and collaboration.

On the other hand, fully open-source development could lead to a different, potentially superior result that meets a broader set of needs through community-driven evolution and iteration. There is no way to determine, ex ante, that either an open or a closed approach to AI development will inherently provide superior results for developing “ethical” AI. Each has its place, and, most likely, the optimal solutions will involve elements of both approaches.

In essence, codifying a regulatory preference for one business model over the other would oversimplify the intricate balance of tradeoffs inherent to platform ecosystems. Economic theory and empirical evidence suggest that both open and closed platforms can drive innovation, serve consumer interests, and stimulate healthy competition, with all of these considerations depending heavily on context. Regulators should therefore aim for flexible policies that support coexistence of diverse business models, fostering an environment where innovation can thrive across the continuum of openness.

C. Dual-Use Foundation Models and Transparency Requirements

The EO and the RFC both focus extensively on so-called “dual-use” foundation models:

Foundation models are typically defined as, “powerful models that can be fine-tuned and used for multiple purposes.” Under the Executive Order, a “dual-use foundation model” is “an AI model that is trained on broad data; generally uses self-supervision, contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters….”[30]

But this framing will likely do more harm than good. As noted above, the terms “AI” or “AI model” are frequently invoked to refer to very different types of systems. Further defining these models as “dual use” is also unhelpful, as virtually any tool in existence can be “dual use” in this sense. Indeed, from a certain perspective, all software—particularly highly automated software—can pose a serious risk to “national security” or “safety.” Encryption and other privacy-protecting tools certainly fit this definition.[31] While it is crucial to mitigate harms associated with the misuse of AI technologies, the blanket treatment of all foundation models under this category is overly simplistic.

The EO identifies certain clear risks, such as the possibility that models could aid in the creation of chemical, biological, or nuclear weaponry. These categories are obvious subjects for regulatory control, but the EO then appears to open a giant definitional loophole that threatens to subsume virtually any useful AI system. It employs expansive terminology to describe a more generalized threat—specifically, that dual-use models could “[permit] the evasion of human control or oversight through means of deception or obfuscation.”[32] Such language could encompass a wide array of general-purpose AI models. Furthermore, by labeling systems capable of bypassing human decision making as “dual use,” the order implicitly suggests that all AI could pose such risk as warrants national-security levels of scrutiny.

Given the EO’s broad definition of AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” numerous software systems not typically even considered AI might be categorized as “dual-use” models.[33] Essentially, any sufficiently sophisticated statistical-analysis tool could qualify under this definition.
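
To see how capacious that definition is, consider a brief illustration of our own (not drawn from the EO or the RFC): a few lines of ordinary least-squares regression arguably satisfy the quoted text, since they amount to a machine-based system that makes predictions in service of a human-defined objective. The data and variable names below are hypothetical.

```python
# Illustrative only: a trivial "machine-based system" that makes predictions
# for a human-defined objective (forecasting next-period sales), and that
# therefore arguably falls within the EO's general definition of AI.
import numpy as np

# Hypothetical historical data: advertising spend (x) and sales (y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, deg=1)  # fit a line by least squares
prediction = slope * 6.0 + intercept        # predicted sales at a spend of 6.0

print(f"Predicted sales at a spend of 6.0: {prediction:.2f}")
```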

A significant repercussion of the EO’s very broad reporting mandates for dual-use systems, and one directly relevant to the RFC’s interest in promoting openness, is that these might chill open-source AI development.[34] Firms dabbling in AI technologies—many of which might not consider their projects to be dual use—might keep their initiatives secret until they are significantly advanced. Faced with the financial burden of adhering to the EO’s reporting obligations, companies that lack a sufficiently robust revenue model to cover both development costs and legal compliance might be motivated to dodge regulatory scrutiny in the initial phases, consequently dampening the prospects for transparency.

It is hard to imagine how open-source AI projects could survive in such an environment. Open-source AI code libraries like TensorFlow[35] and PyTorch[36] foster remarkable innovation by allowing developers to create new applications that use cutting-edge models. How could a paradigmatic startup developer working out of a garage genuinely commit to open-source development if tools like these fall under the EO’s jurisdiction? Restricting access to the weights that models use—let alone avoiding open-source development entirely—may hinder independent researchers’ ability to advance the forefront of AI technology.
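
To make this concrete, the sketch below (our own, and purely illustrative; the architecture, weight-file name, and data are assumptions rather than any particular released model) shows the kind of workflow that widely available model weights make possible with a library like PyTorch: a developer loads published weights into a model and fine-tunes it on a small, task-specific dataset.

```python
# A minimal, hypothetical sketch of what openly published model weights enable:
# load someone else's trained weights into a PyTorch model, then fine-tune on
# new data. The architecture and file name are illustrative assumptions.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_dim=128, hidden=256, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()

# Load openly published weights (hypothetical file); fall back to random
# initialization if none are available.
try:
    state = torch.load("open_model_weights.pt", map_location="cpu")
    model.load_state_dict(state)
except FileNotFoundError:
    pass  # no published weights found; train from scratch instead

# Fine-tune on a small, synthetic stand-in for a task-specific dataset.
X = torch.randn(64, 128)
y = torch.randint(0, 2, (64,))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):  # a few fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```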

Moreover, scientific endeavors typically benefit from the contributions of researchers worldwide, as collaborative efforts on a global scale are known to fast-track innovation. The pressure the EO applies to open-source development of AI tools could curtail international cooperation, thereby distancing American researchers from crucial insights and collaborations. For example, AI’s capacity to propel progress in numerous scientific areas is potentially vast—e.g., utilizing MRI images and deep learning for brain-tumor diagnoses[37] or employing machine learning to push the boundaries of materials science.[38] Such research does not benefit from stringent secrecy, but thrives on collaborative development. Enabling a broader community to contribute to and expand upon AI advancements supports this process.

Individuals respond to incentives. Just as well-intentioned seatbelt laws paradoxically led to an uptick in risky driving behaviors,[39] ill-considered obligations placed on open-source AI developers could unintentionally stifle the exchange of innovative concepts crucial to maintaining the United States’ leadership in AI innovation.

IV. Regulatory Models that Support Innovation While Managing Risks Effectively

In the rapidly evolving landscape of artificial intelligence (AI), it is paramount to establish governance and regulatory frameworks that both encourage innovation and ensure safety and ethical integrity. An effective regulatory model for AI should be adaptive, principles-based, and foster a collaborative environment among regulators, developers, researchers, and the broader community. A number of principles can help in developing this regime.

A. Low-Risk vs High-Risk AI

First, a clear distinction should be made between low-risk AI applications that enhance operational efficiency or consumer experience and high-risk applications that could have significant safety implications. Low-risk applications like search algorithms and chatbots should be governed by a set of baseline ethical guidelines and best practices that encourage innovation, while ensuring basic standards are met. On the other hand, high-risk applications—such as those used by law enforcement or the military—would require more stringent review processes, including impact assessments, ethical reviews, and ongoing monitoring to mitigate potentially adverse effects.

Contrast this with the recently enacted AI Act in the European Union, and its decision to create presumptions of risk for general-purpose AI (GPAI) systems, such as large language models (LLMs), that present what the EU terms “systemic risk.”[40] Article 3(65) of the AI Act defines systemic risk as “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”[41]

This definition bears similarities to the “Hand formula” in U.S. tort law, which balances the burden of precautions against the probability and severity of potential harm to determine negligence.[42] The AI Act’s notion of systemic risk, however, is applied more broadly to entire categories of AI systems based on their theoretical potential for widespread harm, rather than on a case-by-case basis.
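
For reference, the Hand formula is conventionally stated as a simple comparison, under which the failure to take a precaution is negligent when the burden of the precaution is less than the expected harm it would avert:

$$ B < P \times L $$

where B is the cost of the precaution, P is the probability that the harm occurs, and L is the magnitude of the resulting loss, with each term assessed on the facts of a particular case.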

The designation of LLMs as posing “systemic risk” is problematic for several reasons. It creates a presumption of risk merely based on a GPAI system’s scale of operations, without any consideration of the actual likelihood or severity of harm in specific use cases. This could lead to unwarranted regulatory intervention and unintended consequences that hinder the development and deployment of beneficial AI technologies. And this broad definition of systemic risk gives regulators significant leeway to intervene in how firms develop and release their AI products, potentially blocking access to cutting-edge tools for European citizens, even in the absence of tangible harms.

While it is important to address potential risks associated with AI systems, the AI Act’s approach risks stifling innovation and hindering the development of beneficial AI technologies within the EU.

B. Avoid Static Regulatory Approaches

AI regulators are charged with overseeing a dynamic and rapidly developing market, and should therefore avoid erecting a rigid framework that forces new innovations into ill-fitting categories. The “regulatory sandbox” may provide a better model to balance innovation with risk management. By allowing developers to test and refine AI technologies in a controlled environment under regulatory oversight, sandboxes can be used to help identify and address potential issues before wider deployment, all while facilitating dialogue between innovators and regulators. This approach not only accelerates the development of safe and ethical AI solutions, but also builds mutual understanding and trust. Where possible, NTIA should facilitate policy experimentation with regulatory sandboxes in the AI context.

Meta’s Open Loop program is an example of this kind of experimentation.[43] This program is a policy prototyping research project focused on evaluating the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0.[44] The goal is to assess whether the framework is understandable, applicable, and effective in assisting companies to identify and manage risks associated with generative AI. It also provides companies an opportunity to familiarize themselves with the NIST AI RMF and its application in risk-management processes for generative AI systems. Additionally, it aims to collect data on existing practices and offer feedback to NIST, potentially influencing future RMF updates.

1. Regulation as a discovery process

Another key principle is to ensure that regulatory mechanisms are adaptive. Some examples of adaptive mechanisms are iterative rulemaking and feedback loops that allow regulations to be updated continuously in response to new developments and insights. Such mechanisms enable policymakers to respond swiftly to technological breakthroughs, ensuring that regulations remain relevant and effective, without stifling innovation.

Geoffrey Manne & Gus Hurwitz have recently proposed a framework for “regulation as a discovery process” that could be adapted to AI.[45] They argue for a view of regulation not merely as a mechanism for enforcing rules, but as a process for discovering information that can inform and improve regulatory approaches over time. This perspective is particularly pertinent to AI, where the pace of innovation and the complexity of technologies often outstrip regulators’ understanding and ability to predict future developments. This framework:

in its simplest formulation, asks regulators to consider that they might be wrong. That they might be asking the wrong questions, collecting the wrong information, analyzing it the wrong way—or even that Congress has given them the wrong authority or misunderstood the problem that Congress has tasked them to address.[46]

That is to say, an adaptive approach to regulation requires epistemic humility, with the understanding that, particularly for complex, dynamic industries:

there is no amount of information collection or analysis that is guaranteed to be “enough.” As Coase said, the problem of social cost isn’t calculating what those costs are so that we can eliminate them, but ascertaining how much of those social costs society is willing to bear.[47]

In this sense, modern regulators’ core challenge is to develop processes that allow for iterative development of knowledge, which is always in short supply. This requires a shift in how an agency conceptualizes its mission, from one of writing regulations to one of assisting lawmakers to assemble, filter, and focus on the most relevant and pressing information needed to understand a regulatory subject’s changing dynamics.[48]

As Hurwitz & Manne note, existing efforts to position some agencies as information-gathering clearinghouses suffer from a number of shortcomings—most notably, that they tend to operate on an ad hoc basis, reporting to Congress in response to particular exigencies.[49] The key to developing a “discovery process” for AI regulation would instead require setting up ongoing mechanisms to gather and report on data, as well as directing the process toward “specifications for how information should be used, or what the regulator anticipated to find in the information, prior to its collection.”[50]

Embracing regulation as a discovery process means acknowledging the limits of our collective knowledge about AI’s potential risks and benefits. This underscores why regulators should prioritize generating and utilizing new information through regulatory experiments, iterative rulemaking, and feedback loops. A more adaptive regulatory framework could respond to new developments and insights in AI technologies, thereby ensuring that regulations remain relevant and effective, without stifling innovation.

Moreover, Hurwitz & Manne highlight the importance of considering regulation as an information-producing activity.[51] In AI regulation, this could involve setting up mechanisms that allow regulators, innovators, and the public to contribute to and benefit from a shared pool of knowledge about AI’s impacts. This could include public databases of AI incidents, standardized reporting of AI-system performance, or platforms for sharing best practices in AI safety and ethics.

Static regulatory approaches may fail to capture the evolving landscape of AI applications and their societal implications. Instead, a dynamic, information-centric regulatory strategy that embraces the market as a discovery process could better facilitate beneficial innovations, while identifying and mitigating harms.

V. Conclusion

As the NTIA navigates the complex landscape of AI regulation, it is imperative to adopt a nuanced, forward-looking approach that balances the need to foster innovation with the imperatives of ensuring public safety and ethical integrity. The rapid evolution of AI technologies necessitates a regulatory framework that is both adaptive and principles-based, eschewing static snapshots of the current state of the art in favor of flexible mechanisms that could accommodate the dynamic nature of this field.

Central to this approach is recognizing that the field of AI encompasses a diverse array of technologies, methodologies, and applications, each with its distinct characteristics, capabilities, and implications for society. A one-size-fits-all regulatory model would not only be ill-suited to the task at hand, but would also risk stifling innovation and hindering the United States’ ability to maintain its leadership in the global AI industry. NTIA should focus instead on developing tailored approaches that distinguish between low-risk and high-risk applications, ensuring that regulatory interventions are commensurate with the potential identifiable harms and benefits associated with specific AI use cases.

Moreover, the NTIA must resist the temptation to rely on overly prescriptive definitions of “openness” or to favor particular business models over others. The coexistence of open and closed approaches to AI development is essential to foster a vibrant, competitive ecosystem that drives technological progress and maximizes social benefits. By embracing a flexible regulatory framework that allows for experimentation and iteration, the NTIA can create an environment conducive to innovation while still ensuring that appropriate safeguards are in place to mitigate potential risks.

Ultimately, the success of the U.S. AI industry will depend on the ability of regulators, developers, researchers, and the broader community to collaborate in developing governance frameworks that are both effective and adaptable. By recognizing the importance of open development and diverse business models, the NTIA can play a crucial role in shaping the future of AI in ways that promote innovation, protect public interests, and solidify the United States’ position as a global leader in this transformative field.

[1] Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights, Docket No. 240216-0052, 89 FR 14059, National Telecommunications and Information Administration (Mar. 27, 2024) at 14063, question 8(a) [hereinafter “RFC”].

[2] See, e.g., Kristian Stout, Systemic Risk and Copyright in the EU AI Act, Truth on the Market (Mar. 19, 2024), https://truthonthemarket.com/2024/03/19/systemic-risk-and-copyright-in-the-eu-ai-act.

[3] Exec. Order No. 14110, 88 F.R. 75191 (2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence?_fsi=C0CdBzzA [hereinafter “EO”].

[4] See, e.g., EO at §§ 1, 2(c), 5.2(e)(ii), 8(c).

[5] Id. at § 3(k).

[6] Id. at § 3(k)(iii).

[7] Id. at § 4.6. As NTIA notes, the administration refers to “widely available model weights,” which is equivalent to “open foundation models” in this proceeding. RFC at 14060.

[8] For more on the “open” vs “closed” distinction and its poor fit as a regulatory lens, see, infra, at nn. 19-41 and accompanying text.

[9] Adaptive regulatory frameworks are discussed, infra, at nn. 42-53 and accompanying text.

[10] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[11] See Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[12] Cindy Gordon, Google Pauses Gemini AI Model After Latest Debacle, Forbes (Feb. 29, 2024), https://www.forbes.com/sites/cindygordon/2024/02/29/google-latest-debacle-has-paused-gemini-ai-model/?sh=3114d093536c.

[13] Id.

[14] Id.

[15] Breck Dumas, Google Loses $96B in Value on Gemini Fallout as CEO Does Damage Control, Yahoo Finance (Feb. 28, 2024), https://finance.yahoo.com/news/google-loses-96b-value-gemini-233110640.html.

[16] RFC at 14060.

[17] RFC at 14062, question 1.

[18] RFC at 14062, question 3(a).

[19] Department of Commerce, Competition in the Mobile Application Ecosystem (2023), https://www.ntia.gov/report/2023/competition-mobile-app-ecosystem (“While retaining appropriate latitude for legitimate privacy, security, and safety measures, Congress should enact laws and relevant agencies should consider measures (such as rulemaking) designed to open up distribution of lawful apps, by prohibiting… barriers to the direct downloading of applications.”).

[20] RFC at 14061 (“‘openness’ or ‘wide availability’ of model weights are also terms without clear definition or consensus. There are gradients of ‘openness,’ ranging from fully ‘closed’ to fully ‘open’”).

[21] See Jonathan M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861, 1927 (2011).

[22] Id. at 2.

[23] Id. at 3.

[24]  Desktop Operating System Market Share Worldwide Feb 2023 – Feb 2024, statcounter, https://gs.statcounter.com/os-market-share/desktop/worldwide (last visited Mar. 27, 2024).

[25]  Andrei Hagiu, Proprietary vs. Open Two-Sided Platforms and Social Efficiency (Harv. Bus. Sch. Strategy Unit, Working Paper No. 09-113, 2006).

[26] Joey Sneddon, More Developers Use Linux than Mac, Report Shows, Omg Linux (Dec. 28, 2022), https://www.omglinux.com/devs-prefer-linux-to-mac-stackoverflow-survey.

[27] See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persp. 93, 110 (1994) (“[T]he primary cost of standardization is loss of variety: consumers have fewer differentiated products to pick from, especially if standardization prevents the development of promising but unique and incompatible new systems”).

[28] See, e.g., Nokia, Threat Intelligence Report 2020 (2020), https://www.nokia.com/networks/portfolio/cyber-security/threat-intelligence-report-2020; Randal C. Picker, Security Competition and App Stores, Network Law Review (Aug. 23, 2021), https://www.networklawreview.org/picker-app-stores.

[29] Collective Constitutional AI: Aligning a Language Model with Public Input, Anthropic (Oct. 17, 2023), https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input.

[30] RFC at 14061.

[31] Encryption and the “Going Dark” Debate, Congressional Research Service (2017), https://crsreports.congress.gov/product/pdf/R/R44481.

[32] EO at § 3(k)(iii).

[33] EO at § 3(b).

[34] EO at § 4.2 (requiring companies developing dual-use foundation models to provide ongoing reports to the federal government on their activities, security measures, model weights, and red-team testing results).

[35] An End-to-End Platform for Machine Learning, TensorFlow, https://www.tensorflow.org (last visited Mar. 27, 2024).

[36] Learn the Basics, PyTorch, https://pytorch.org/tutorials/beginner/basics/intro.html (last visited Mar. 27, 2024).

[37] Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov, & Taeg Keun Whangbo, Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging, 15(16) Cancers (Basel) 4172 (2023), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453020.

[38] Keith T. Butler, et al., Machine Learning for Molecular and Materials Science, 559 Nature 547 (2018), available at https://www.nature.com/articles/s41586-018-0337-2.

[39] The Peltzman Effect, The Decision Lab, https://thedecisionlab.com/reference-guide/psychology/the-peltzman-effect (last visited Mar. 27, 2024).

[40] European Parliament, European Parliament legislative Resolution of 13 March 2024 on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206, available at https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html [hereinafter “EU AI Act”].

[41] Id. at Art. 3(65).

[42] See Stephen G. Gilles, On Determining Negligence: Hand Formula Balancing, the Reasonable Person Standard, and the Jury, 54 Vanderbilt L. Rev. 813, 842-49 (2001).

[43] See Open Loop’s First Policy Prototyping Program in the United States, Meta, https://www.usprogram.openloop.org (last visited Mar. 27, 2024).

[44] Id.

[45] Justin (Gus) Hurwitz & Geoffrey A. Manne, Pigou’s Plumber: Regulation as a Discovery Process, SSRN (2024), available at https://laweconcenter.org/resources/pigous-plumber.

[46] Id. at 32.

[47] Id. at 33.

[48] See id. at 28-29.

[49] Id. at 37.

[50] Id. at 37-38.

[51] Id.

Section 214: Title II’s Trojan Horse

The Federal Communications Commission (FCC) has proposed classifying broadband internet-access service as a common carrier “telecommunications service” under Title II of the Communications Act. One major consequence of this reclassification would be subjecting broadband providers to Section 214 regulations that govern the provision, acquisition, and discontinuation of communication “lines.”

In the Trojan War, the Greeks conquered Troy by hiding their soldiers inside a giant wooden horse left as a gift to the besieged Trojans. Section 214 hides a potential takeover of the broadband industry inside the putative gift of improving national security.


ICLE Comments to European Commission on Competition in Virtual Worlds

Executive Summary

We welcome the opportunity to comment on the European Commission’s call for contributions on competition in “Virtual Worlds”.[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

The metaverse is an exciting and rapidly evolving set of virtual worlds. As with any new technology, concerns about the potential risks and negative consequences that the metaverse may bring have moved policymakers to explore how best to regulate this new space.

From the outset, it is important to recognize that simply because the metaverse is new does not mean that competition in this space is unregulated or somehow ineffective. Existing regulations may not explicitly or exclusively target metaverse ecosystems, but a vast regulatory apparatus already covers most aspects of business in virtual worlds. This includes European competition law, the Digital Markets Act (“DMA”), the General Data Protection Regulation (“GDPR”), the Digital Services Act (“DSA”), and many more. Before it intervenes in this space, the commission should carefully consider whether there are any metaverse-specific problems not already addressed by these legal provisions.

This sense that competition intervention would be premature is reinforced by three important factors.

The first is that competition appears particularly intense in this space (Section I). There are currently multiple firms vying to offer compelling virtual worlds. At the time of writing, however, none appears close to dominating the market. In turn, this intense competition will encourage platforms to design services that meet consumers’ demands, notably in terms of safety and privacy. Nor does the market appear likely to fall into the hands of one of the big tech firms that command a sizeable share of more traditional internet services. Meta notoriously has poured more than $3.99 billion into its metaverse offerings during the first quarter of 2023, in addition to $13.72 billion the previous calendar year.[2] Despite these vast investments and a strategic focus on metaverse services, the company has, thus far, struggled to achieve meaningful traction in the space.[3]

Second, the commission’s primary concern appears to be that metaverses will become insufficiently “open and interoperable”.[4] But to the extent that these ecosystems do, indeed, become closed and proprietary, there is no reason to believe this to be a problem. Closed and proprietary ecosystems have several features that may be attractive to consumers and developers (Section II). These include improved product safety, performance, and ease of development. This is certainly not to say that closed ecosystems are always better than more open ones, but rather that it would be wrong to assume that one model or the other is optimal. Instead, the proper balance depends on tradeoffs that markets are better placed to decide.

Finally, timing is of the essence (Section III). Intervening so early in a fledgling industry’s life cycle is like shooting a moving target from a mile away. New rules or competition interventions might end up being irrelevant. Worse, by signaling that metaverses will be subject to heightened regulatory scrutiny for the foreseeable future, the commission may chill investment from the very firms it purports to support. In short, the commission should resist the urge to intervene so long as the industry is not fully mature.

I. Competing for Consumer Trust

The Commission is right to assume, in its call for contributions, that the extent to which metaverse services compete with each other (and continue to do so in the future) will largely determine whether they fulfill consumers’ expectations and meet the safety and trustworthiness requirements to which the commission aspires. As even the left-leaning Lawrence Lessig put it:

Markets regulate behavior in cyberspace too. Prices structures often constrain access, and if they do not, then busy signals do. (America Online (AOL) learned this lesson when it shifted from an hourly to a flat-rate pricing plan.) Some sites on the web charge for access, as on-line services like AOL have for some time. Advertisers reward popular sites; online services drop unpopular forums. These behaviors are all a function of market constraints and market opportunity, and they all reflect the regulatory role of the market.[5]

Indeed, in a previous call for contributions, the Commission implicitly recognized the important role that competition plays, although it frames the subject primarily in terms of the problems that would arise if competition ceased to operate:

There is a risk of having a small number of big players becoming future gatekeepers of virtual worlds, creating market entry barriers and shutting out EU start-ups and SMEs from this emerging market. Such a closed ecosystem with the prevalence of proprietary systems can negatively affect the protection of personal information and data, the cybersecurity and the freedom and openness of virtual worlds at the same time.[6]

It is thus necessary to ask whether there is robust competition in the market for metaverse services. The short answer is a resounding yes.

A. Competition Without Tipping

While there is no precise definition of what constitutes a metaverse—much less a precise definition of the relevant market—available data suggests the space is highly competitive. This is evident in the fact that even a major global firm like Meta—having invested billions of dollars in its metaverse branch (and having rebranded the company accordingly)—has struggled to gain traction.[7]

Other major players in the space include the likes of Roblox, Fortnite, and Minecraft, which all have somewhere between 70 and 200 million active users.[8] The strength of these incumbents likely explains why Meta’s much-anticipated virtual world struggled to gain meaningful traction with consumers, stalling at around 300,000 active users.[9] Alongside these traditional players, there are also several decentralized platforms that are underpinned by blockchain technology. While these platforms have attracted massive investments, they remain largely peripheral in terms of active users, with numbers often only in the low thousands.[10]

There are several inferences that can be drawn from these limited datasets. For one, it is clear that the metaverse industry is not yet fully mature. There are still multiple paradigms competing for consumer attention: game-based platforms versus social-network platforms; traditional platforms versus blockchain platforms, etc. In the terminology developed by David Teece, the metaverse industry has not yet reached a “paradigmatic” stage. It is fair to assume there is still significant scope for the entry of differentiated firms.[11]

It is also worth noting that metaverse competition does not appear to exhibit the same sort of network effects and tipping that is sometimes associated with more traditional social networks.[12] Despite competing for nearly a decade, no single metaverse project appears to be running away with the market.[13] This lack of tipping might be because these projects are highly differentiated.[14] It may also be due to the ease of multi-homing among them.[15]

More broadly, it is far from clear that competition will lead to a single metaverse for all uses. Different types of metaverse services may benefit from different user interfaces, graphics, and physics engines. This cuts in favor of multiple metaverses coexisting, rather than all services coordinating within a single ecosystem. Competition therefore appears likely to lead to the emergence of multiple differentiated metaverses, rather than a single winner.

Ultimately, competition in the metaverse industry is strong, and there is little sense that these markets are about to tip toward a single firm in the near future.

B. Competing for Consumer Trust

As alluded to in the previous subsection, the world’s largest and most successful metaverse entrants to date are traditional videogaming platforms that have various marketplaces and currencies attached.[16] In other words, decentralized virtual worlds built upon blockchain technology remain marginal.

This has important policy implications. The primary legal issues raised by metaverses are the same as those encountered on other digital marketplaces. This includes issues like minor fraud, scams, and children buying content without their parents’ authorization.[17] To the extent these harms are not adequately deterred by existing laws, metaverse platforms themselves have important incentives to police them. In turn, these incentives may be compounded by strong competition among platforms.

Metaverses are generally multi-sided platforms that bring together distinct groups of users, including consumers and content creators. In order to maximize the value of their ecosystems, platforms have an incentive to balance the interests of these distinct groups.[18] In practice, this will often mean offering consumers various forms of protection against fraud and scams and actively policing platforms’ marketplaces. As David Evans puts it:

But as with any community, there are numerous opportunities for people and businesses to create negative externalities, or engage in other bad behavior, that can reduce economic efficiency and, in the extreme, lead to the tragedy of the commons. Multi-sided platforms, acting selfishly to maximize their own profits, often develop governance mechanisms to reduce harmful behavior. They also develop rules to manage many of the same kinds of problems that beset communities subject to public laws and regulations. They enforce these rules through the exercise of property rights and, most importantly, through the “Bouncer’s Right” to exclude agents from some quantum of the platform, including prohibiting some agents from the platform entirely…[19]

While there is little economic research to suggest that competition directly increases hosts’ incentive to police their platforms, it stands to reason that doing so effectively can help platforms expand the appeal of their ecosystems. This is particularly important for metaverse services, whose userbases remain just a fraction of the size they could ultimately reach. While 100 or 200 million users already constitute a vast ecosystem, such figures pale in comparison to the billions of users that “traditional” online platforms sometimes attract.

The bottom line is that the market for metaverses is growing. This likely compounds platforms’ incentives to weed out undesirable behavior, thereby complementing government efforts to achieve the same goal.

II. Opening Platforms or Opening Pandora’s Box?

In its call for contributions, the commission seems concerned that metaverse competition may lead to closed ecosystems that would be less beneficial to consumers than more open ones. But if this is indeed the commission’s fear, it is largely unfounded.

There are many benefits to closed ecosystems. Choosing the optimal degree of openness entails tradeoffs. At the very least, this suggests that policymakers should be careful not to assume that opening platforms up will systematically provide net benefits to consumers.

A. Antitrust Enforcement and Regulatory Initiatives

To understand why open (and weakly propertized) platforms are not always better for consumers, it is worth looking at past competition enforcement in the online space. Recent interventions by competition authorities have generally attempted (or are attempting) to move platforms toward more openness and less propertization. For their part, these platforms are already tremendously open (as the “platform” terminology implies) and attempt to achieve a delicate balance between centralization and decentralization.

Figure I: Directional Movement of Antitrust Intervention

The Microsoft cases and the Apple investigation both sought or seek to bring more openness and less propertization to those respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and to open its platform to rival media players and web browsers (more openness).[20] The same applies to Apple. Private antitrust plaintiffs in the United States[21] and government enforcers in Europe[22] are seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as to ensure that it cannot exclude rival mobile-payments solutions from its platform (more openness).

The various cases that were brought by EU and U.S. authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property.[23] The European Union’s Amazon investigation centers on the ways in which the company uses data from third-party sellers (and, ultimately, the distribution of revenue between those sellers and Amazon).[24] In both cases, authorities are ultimately trying to limit the extent to which firms can propertize their assets.

Finally, both of the EU’s Google cases sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals.[25] The separate Android decision sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing litigation brought by state attorneys general in the United States.[26]

Much the same can be said of the numerous regulatory initiatives pertaining to digital markets. Indeed, draft regulations being contemplated around the globe mimic the features of the antitrust/competition interventions discussed above. For instance, it is widely accepted that Europe’s Digital Markets Act (DMA) effectively transposes and streamlines the enforcement of the theories of harm described above.[27] Similarly, several scholars have argued that the proposed American Innovation and Choice Online Act (“AICOA”) in the United States largely mimics European competition policy.[28] The legislation would ultimately require firms to open up their platforms, most notably by forcing them to treat rival services as they would their own and to make their services more interoperable with those rivals.[29]

What is striking about these decisions and investigations is the extent to which authorities are pushing back against the very features that distinguish the platforms they are investigating. Closed (or relatively closed) platforms are forced to open up, and firms with highly propertized assets are made to share them (or, at the very least, monetize them less aggressively).

B. The Empty Quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be vanishingly few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems in both the mobile and desktop segments. Most have ended in failure. Ubuntu and other flavors of the Linux operating system remain fringe products. There have been attempts to create open-source search engines, but they have not met with success.[30] The picture is similar in the online retail space. Amazon appears to have beaten eBay, despite the latter being more open and less propertized. Indeed, Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the ways in which they may sell their goods.[31]

This theme is repeated in the standardization space. There have been innumerable attempts to impose open, royalty-free standards. At least in the mobile-internet industry, few (if any) of these have taken off. Instead, proprietary standards such as 5G and WiFi have been far more successful. That pattern is repeated in other highly standardized industries, like digital-video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.[32]

Figure II: Open and Shared Platforms

This is not to say that there haven’t been any successful examples of open, royalty-free standards. Internet protocols, blockchain, and Wikipedia all come to mind. Nor does it mean that we will not see more decentralized goods in the future. But by and large, firms and consumers have not yet taken to the idea of fully open and shared platforms. Or, at least, those platforms have not yet achieved widespread success in the marketplace (potentially due to supply-side considerations, such as the difficulty of managing open platforms or the potentially lower returns to innovation in weakly propertized ones).[33] And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase in the blockchain space, or Android’s use of Linux).

C. Potential Explanations

The preceding section highlighted a recurring pattern: the digital platforms that competition authorities wish to bring into existence are fundamentally different from those that emerge organically. But why have authorities’ ideal platforms, so far, failed to achieve truly meaningful success?

Three potential explanations come to mind. First, “closed” and “propertized” platforms might systematically—and perhaps anticompetitively—thwart their “open” and “shared” rivals. Second, shared platforms might fail to persist (or grow pervasive) because they are much harder to monetize, and there is thus less incentive to invest in them. This is essentially a supply-side explanation. Finally, consumers might opt for relatively closed systems precisely because they prefer these platforms to marginally more open ones—i.e., a demand-side explanation.

In evaluating the first conjecture, the key question is whether successful “closed” and “propertized” platforms overcame their rivals before or after they achieved some measure of market dominance. If success preceded dominance, then anticompetitive foreclosure alone cannot explain the proliferation of the “closed” and “propertized” model.[34]

Many of today’s dominant platforms, however, overcame their open/shared rivals well before they achieved their current size. It is thus difficult to make the case that the early success of their business models was due to anticompetitive behavior. This is not to say these business models cannot raise antitrust issues, but rather that anticompetitive behavior is not a good explanation for their emergence.

Both the second and the third conjectures essentially ask whether “closed” and “propertized” platforms might be better adapted to their environment than their “open” and “shared” rivals.

In that respect, it is not unreasonable to surmise that highly propertized platforms would generally be easier to monetize than shared ones. For example, monetizing open-source platforms often requires relying on complementarities, which tend to be vulnerable to outside competition and free-riding.[35] There is thus a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits a platform’s ability to propertize its assets may harm innovation.

Similarly, authorities should reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design. The European Commission, for example, has a long track record of seeking to open digital platforms, notably by requiring that platform owners not preinstall their own web browsers (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business model that the commission reprimanded, rather than the “pro-consumer” model it sought to impose on the industry. For example, Apple tied the Safari browser to its iPhones; Google went to some lengths to ensure that Chrome was preloaded on devices; and Samsung phones come with Samsung Internet as the default browser.[36] None of this has ostensibly steered consumers away from those platforms.

Along similar lines, a sizable share of consumers opt for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s MacOS). In other words, it is hard to claim that opening platforms is inherently good for consumers when those same consumers routinely opt for platforms with the very features that policymakers are trying to eliminate.

Finally, it is worth noting that the remedies imposed by competition authorities have been anything but successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unmitigated flop, selling a paltry 1,787 copies.[37] Likewise, the internet-browser “ballot box” imposed by the commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the commission’s decision.[38]

One potential inference is that consumers do not value competition interventions that make dominant ecosystems marginally more open and less propertized. There are also many reasons why consumers might prefer “closed” systems (at least, relative to the model favored by many policymakers), even when they must pay a premium for them.

Take the example of app stores. Maintaining some control over the apps that can access the store enables platforms to easily weed out bad actors. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. Indeed, it may be that a measure of control facilitates the very innovations that consumers demand. Therefore, “authorities and courts should not underestimate the indispensable role control plays in achieving coordination and coherence in the context of systemic efficiencies. Without it, the attempted novelties and strategies might collapse under their own complexity.”[39]

Relatively centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers.[40] This is especially true when consumers will tend to attribute dips in performance to the overall platform, rather than to a particular app.[41] At the same time, they can take advantage of positive externalities to improve the quality of the overall platform.

And it is surely the case that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Absent false information at the time of the initial platform decision, this decision will effectively incorporate expectations about subsequent constraints.[42]

Furthermore, forcing users to make too many “within-platform” choices may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different.[43] In short, contrary to what antitrust authorities appear to believe, closed platforms might give most users exactly what they desire.

All of this suggests that consumers and firms often gravitate spontaneously toward both closed and highly propertized platforms, the opposite of what the commission and other competition authorities tend to favor. The reasons for this trend remain poorly understood and largely ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. Instead, what some regard as “market failures” may in fact be features that explain the rapid emergence of the digital economy.

When considering potential policy reforms targeting the metaverse, policymakers would be wrong to assume openness (notably, in the form of interoperability) and weak propertization are always objectively superior. Instead, these platform designs entail important tradeoffs. Closed metaverse ecosystems may lead to higher consumer safety and better performance, while interoperable systems may reduce the frictions consumers face when moving from one service to another. There is little reason to believe policymakers are in a better position to weigh these tradeoffs than consumers, who vote with their virtual feet.

III. Conclusion: Competition Intervention Would Be Premature

A final important argument against intervening today is that the metaverse industry is nowhere near mature. Tomorrow’s competition-related challenges and market failures might not be the same as today’s. This makes it exceedingly difficult for policymakers to design appropriate remedies and increases the risk that intervention might harm innovation.

As of 2023, the entire metaverse industry (both hardware and software) is estimated to be worth somewhere in the vicinity of $80 billion, and projections suggest this could grow by a factor of 10 by 2030.[44] Growth projections of this sort are notoriously unreliable. But in this case, they do suggest there is some consensus that the industry is not fully fledged.
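To put that tenfold figure in perspective, a back-of-the-envelope check (assuming the roughly $80 billion 2023 base noted above and the 39.1% compound annual growth rate reported in one of the forecasts cited in note 44) yields:

\[
\$80\ \text{billion} \times (1 + 0.391)^{7} \approx \$80\ \text{billion} \times 10.1 \approx \$806\ \text{billion},
\]

which is broadly in line with the roughly $825 billion projection for 2030 cited there, though such figures should be treated as illustrative rather than precise.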

Along similar lines, it remains unclear what types of metaverse services will gain the most traction with consumers, what sorts of hardware consumers will use to access these services, and what technologies will underpin the most successful metaverse platforms. In fact, it is still an open question whether the metaverse industry will foster any services that achieve widespread consumer adoption in the foreseeable future.[45] In other words, it is not exactly clear what metaverse products and services the Commission should focus on in the first place.

Given these uncertainties, competition intervention in the metaverse appears premature. Intervening so early in the industry’s life cycle is like aiming at a moving target. Ensuing remedies might end up being irrelevant before they have any influence on the products that firms develop. More worryingly, acting now signals that the metaverse industry will be subject to heightened regulatory scrutiny for the foreseeable future. In turn, this may deter large platforms from investing in the European market. It may also funnel venture-capital investments away from the European continent.

Competition intervention in burgeoning industries is no free lunch. The best evidence concerning these potential costs comes from the European Union’s General Data Protection Regulation (GDPR). While privacy regulation is obviously not the same as competition law, the evidence concerning the GDPR suggests that heavy-handed intervention may, at least in some instances, slow down innovation and reduce competition.

The most-cited empirical evidence concerning the effects of the GDPR comes from a paper by Garrett Johnson and co-authors, who link the GDPR to widespread increases in market concentration, particularly in the short term:

We show that websites’ vendor use falls after the European Union’s (EU’s) General Data Protection Regulation (GDPR), but that market concentration also increases among technology vendors that provide support services to websites…. The week after the GDPR’s enforcement, website use of web technology vendors falls by 15% for EU residents. Websites are relatively more likely to retain top vendors, which increases the concentration of the vendor market by 17%. Increased concentration predominantly arises among vendors that use personal data, such as cookies, and from the increased relative shares of Facebook and Google-owned vendors, but not from website consent requests. Although the aggregate changes in vendor use and vendor concentration dissipate by the end of 2018, we find that the GDPR impact persists in the advertising vendor category most scrutinized by regulators.[46]

Along similar lines, an NBER working paper by Jian Jia and co-authors finds that enactment of the GDPR markedly reduced venture-capital investments in Europe:

Our findings indicate a negative differential effect on EU ventures after the rollout of GDPR relative to their US counterparts. These negative effects manifest in the overall number of financing rounds, the overall dollar amount raised across rounds, and in the dollar amount raised per individual round. Specifically, our findings suggest a $3.38 million decrease in the aggregate dollars raised by EU ventures per state per crude industry category per week, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised in an average deal following the rollout of GDPR.[47]

In another paper, Samuel Goldberg and co-authors find that the GDPR led to a roughly 12% reduction in website pageviews and e-commerce revenue in Europe.[48] Finally, Rebecca Janssen and her co-authors show that the GDPR decreased the number of apps offered on Google’s Play Store between 2016 and 2019:

Using data on 4.1 million apps at the Google Play Store from 2016 to 2019, we document that GDPR induced the exit of about a third of available apps; and in the quarters following implementation, entry of new apps fell by half.[49]

Of course, the body of evidence concerning the GDPR’s effects is not entirely unambiguous. For example, Rajkumar Venkatesan and co-authors find that the GDPR had mixed effects on the returns of different types of firms.[50] Other papers also show similarly mixed effects.[51]

Ultimately, the empirical literature concerning the effects of the GDPR shows that regulation—in this case, privacy protection—is no free lunch. Of course, this does not mean that competition intervention targeting the metaverse would necessarily have these same effects. But in the absence of a clear market failure to solve, it is unclear why policymakers should run such a risk in the first place.

In the end, competition intervention in the metaverse is unlikely to be costless. The metaverse is still in its infancy, regulation could deter essential innovation, and the commission has thus far failed to identify any serious market failures that warrant public intervention. The result is that the commission’s call for contributions appears premature or, in other words, that the commission is putting the meta-cart before the meta-horse.

 

[1] Competition in Virtual Worlds and Generative AI – Calls for contributions, European Commission (Jan. 9, 2024) https://competition-policy.ec.europa.eu/document/download/e727c66a-af77-4014-962a-7c9a36800e2f_en?filename=20240109_call-for-contributions_virtual-worlds_and_generative-AI.pdf (hereafter, “Call for Contributions”).

[2] Jonathan Vanian, Meta’s Reality Labs Records $3.99 Billion Quarterly Loss as Zuckerberg Pumps More Cash into Metaverse, CNBC (Apr. 26, 2023), https://www.cnbc.com/2023/04/26/metas-reality-labs-unit-records-3point99-billion-first-quarter-loss-.html.

[3] Alan Truly, Horizon Worlds Leak: Only 1 in 10 Users Return & Web Launch Is Coming, Mixed News (Mar. 3, 2023), https://mixed-news.com/en/horizon-worlds-leak-only-1-in-10-users-return-web-launch-coming; Kevin Hurler, Hey Fellow Kids: Meta Is Revamping Horizon Worlds to Attract More Teen Users, Gizmodo (Feb. 7, 2023), https://gizmodo.com/meta-metaverse-facebook-horizon-worlds-vr-1850082068; Emma Roth, Meta’s Horizon Worlds VR Platform Is Reportedly Struggling to Keep Users, The Verge (Oct. 15, 2022),
https://www.theverge.com/2022/10/15/23405811/meta-horizon-worlds-losing-users-report; Paul Tassi, Meta’s ‘Horizon Worlds’ Has Somehow Lost 100,000 Players in Eight Months, Forbes, (Oct. 17, 2022), https://www.forbes.com/sites/paultassi/2022/10/17/metas-horizon-worlds-has-somehow-lost-100000-players-in-eight-months/?sh=57242b862a1b.

[4] Call for Contributions, supra note 1. (“6) Do you expect the technology incorporated into Virtual World platforms, enabling technologies of Virtual Worlds and services based on Virtual Worlds to be based mostly on open standards and/or protocols agreed through standard-setting organisations, industry associations or groups of companies, or rather the use of proprietary technology?”).

[5] Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 508 (1999).

[6] Virtual Worlds (Metaverses) – A Vision for Openness, Safety and Respect, European Commission, https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13757-Virtual-worlds-metaverses-a-vision-for-openness-safety-and-respect/feedback_en?p_id=31962299H.

[7] Catherine Thorbecke, What Metaverse? Meta Says Its Single Largest Investment Is Now in ‘Advancing AI’, CNN Business (Mar. 15, 2023), https://www.cnn.com/2023/03/15/tech/meta-ai-investment-priority/index.html; Ben Marlow, Mark Zuckerberg’s Metaverse Is Shattering into a Million Pieces, The Telegraph (Apr. 23, 2023), https://www.telegraph.co.uk/business/2023/04/21/mark-zuckerbergs-metaverse-shattering-million-pieces; Will Gendron, Meta Has Reportedly Stopped Pitching Advertisers on the Metaverse, BusinessInsider (Apr. 18, 2023), https://www.businessinsider.com/meta-zuckerberg-stopped-pitching-advertisers-metaverse-focus-reels-ai-report-2023-4.

[8] Mansoor Iqbal, Fortnite Usage and Revenue Statistics, Business of Apps (Jan. 9, 2023), https://www.businessofapps.com/data/fortnite-statistics; Matija Ferjan, 76 Little-Known Metaverse Statistics & Facts (2023 Data), Headphones Addict (Feb. 13, 2023), https://headphonesaddict.com/metaverse-statistics.

[9] James Batchelor, Meta’s Flagship Metaverse Horizon Worlds Struggling to Attract and Retain Users, Games Industry (Oct. 17, 2022), https://www.gamesindustry.biz/metas-flagship-metaverse-horizon-worlds-struggling-to-attract-and-retain-users; Ferjan, id.

[10] Richard Lawler, Decentraland’s Billion-Dollar ‘Metaverse’ Reportedly Had 38 Active Users in One Day, The Verge (Oct. 13, 2022), https://www.theverge.com/2022/10/13/23402418/decentraland-metaverse-empty-38-users-dappradar-wallet-data; The Sandbox, DappRadar, https://dappradar.com/multichain/games/the-sandbox (last visited May 3, 2023); Decentraland, DappRadar, https://dappradar.com/multichain/social/decentraland (last visited May 3, 2023).

[11] David J. Teece, Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy, 15 Research Policy 285-305 (1986), https://www.sciencedirect.com/science/article/abs/pii/0048733386900272.

[12] Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1279 (2021).

[13] Roblox, Wikipedia, https://en.wikipedia.org/wiki/Roblox (last visited May 3, 2023); Minecraft, Wikipedia, https://en.wikipedia.org/wiki/Minecraft (last visited May 3, 2023); Fortnite, Wikipedia, https://en.wikipedia.org/wiki/Fortnite (last visited May 3, 2023); see Fiza Chowdhury, Minecraft vs Roblox vs Fortnite: Which Is Better?, Metagreats (Feb. 20, 2023), https://www.metagreats.com/minecraft-vs-roblox-vs-fortnite.

[14]  Marc Rysman, The Economics of Two-Sided Markets, 13 J. Econ. Perspectives 134 (2009) (“First, if standards can differentiate from each other, they may be able to successfully coexist (Chou and Shy, 1990; Church and Gandal, 1992). Arguably, Apple and Microsoft operating systems have both survived by specializing in different markets: Microsoft in business and Apple in graphics and education. Magazines are an obvious example of platforms that differentiate in many dimensions and hence coexist.”).

[15] Id. at 134 (“Second, tipping is less likely if agents can easily use multiple standards. Corts and Lederman (forthcoming) show that the fixed cost of producing a video game for one more standard have reduced over time relative to the overall fixed costs of producing a game, which has led to increased distribution of games across multiple game systems (for example, PlayStation, Nintendo, and Xbox) and a less-concentrated game system market.”).

[16] What Are Fortnite, Roblox, Minecraft and Among Us? A Parent’s Guide to the Most Popular Online Games Kids Are Playing, FTC Business (Oct. 5, 2021), https://www.ftc.net/blog/what-are-fortnite-roblox-minecraft-and-among-us-a-parents-guide-to-the-most-popular-online-games-kids-are-playing; Jay Peters, Epic Is Merging Its Digital Asset Stores into One Huge Marketplace, The Verge (Mar. 22, 2023), https://www.theverge.com/2023/3/22/23645601/epic-games-fab-asset-marketplace-state-of-unreal-2023-gdc.

[17] Luke Winkie, Inside Roblox’s Criminal Underworld, Where Kids Are Scamming Kids, IGN (Jan. 2, 2023), https://www.ign.com/articles/inside-robloxs-criminal-underworld-where-kids-are-scamming-kids; Fake Minecraft Updates Pose Threat to Users, Tribune (Sept. 11, 2022), https://tribune.com.pk/story/2376087/fake-minecraft-updates-pose-threat-to-users; Ana Diaz, Roblox and the Wild West of Teenage Scammers, Polygon (Aug. 24, 2019) https://www.polygon.com/2019/8/24/20812218/roblox-teenage-developers-controversy-scammers-prison-roleplay; Rebecca Alter, Fortnite Tries Not to Scam Children and Face $520 Million in FTC Fines Challenge, Vulture (Dec. 19, 2022), https://www.vulture.com/2022/12/fortnite-epic-games-ftc-fines-privacy.html; Leonid Grustniy, Swindle Royale: Fortnite Scammers Get Busy, Kaspersky Daily (Dec. 3, 2020), https://www.kaspersky.com/blog/top-four-fortnite-scams/37896.

[18] See, generally, David Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms (Harvard Business Review Press, 2016).

[19] David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L.J. 1201 (2012).

[20] See Case COMP/C-3/37.792, Microsoft, OJ L 32 (May 24, 2004). See also, Case COMP/39.530, Microsoft (Tying), OJ C 120 (Apr. 26, 2013).

[21] See Complaint, Epic Games, Inc. v. Apple Inc., 493 F. Supp. 3d 817 (N.D. Cal. 2020) (4:20-cv-05640-YGR).

[22] See European Commission Press Release IP/20/1073, Antitrust: Commission Opens Investigations into Apple’s App Store Rules (Jun. 16, 2020); European Commission Press Release IP/20/1075, Antitrust: Commission Opens Investigation into Apple Practices Regarding Apple Pay (Jun. 16, 2020).

[23] See European Commission Press Release IP/18/421, Antitrust: Commission Fines Qualcomm €997 Million for Abuse of Dominant Market Position (Jan. 24, 2018); Federal Trade Commission v. Qualcomm Inc., 969 F.3d 974 (9th Cir. 2020).

[24] See European Commission Press Release IP/19/4291, Antitrust: Commission Opens Investigation into Possible Anti-Competitive Conduct of Amazon (Jul. 17, 2019).

[25] See Case AT.39740, Google Search (Shopping), 2017 E.R.C. I-379. See also, Case AT.40099 (Google Android), 2018 E.R.C.

[26] See Complaint, United States v. Google, LLC, (2020), https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws; see also, Complaint, Colorado et al. v. Google, LLC, (2020), available at https://coag.gov/app/uploads/2020/12/Colorado-et-al.-v.-Google-PUBLIC-REDACTED-Complaint.pdf.

[27] See, e.g., Giorgio Monti, The Digital Markets Act: Institutional Design and Suggestions for Improvement, Tilburg L. & Econ. Ctr., Discussion Paper No. 2021-04 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3797730 (“In sum, the DMA is more than an enhanced and simplified application of Article 102 TFEU: while the obligations may be criticised as being based on existing competition concerns, they are forward-looking in trying to create a regulatory environment where gatekeeper power is contained and perhaps even reduced.”) (Emphasis added).

[28] See, e.g., Aurelien Portuese, “Please, Help Yourself”: Toward a Taxonomy of Self-Preferencing, Information Technology & Innovation Foundation (Oct. 25, 2021), available at https://itif.org/sites/default/files/2021-self-preferencing-taxonomy.pdf. (“The latest example of such weaponization of self-preferencing by antitrust populists is provided by Sens. Amy Klobuchar (D-MN) and Chuck Grassley (R-IA). They introduced legislation in October 2021 aimed at prohibiting the practice.2 However, the legislation would ban self-preferencing only for a handful of designated companies—the so-called “covered platforms,” not the thousands of brick-and-mortar sellers that daily self-preference for the benefit of consumers. Mimicking the European Commission’s Digital Markets Act prohibiting self-preferencing, Senate and the House bills would degrade consumers’ experience and undermine competition, since self-preferencing often benefits consumers and constitutes an integral part, rather than an abnormality, of the process of competition.”).

[29] Efforts to saddle platforms with “non-discrimination” constraints are tantamount to mandating openness. See Geoffrey A. Manne, Against the Vertical Discrimination Presumption, Foreword, Concurrences No. 2-2020 (2020) at 2 (“The notion that platforms should be forced to allow complementors to compete on their own terms, free of constraints or competition from platforms is a species of the idea that platforms are most socially valuable when they are most ‘open.’ But mandating openness is not without costs, most importantly in terms of the effective operation of the platform and its own incentives for innovation.”).

[30] See, e.g., Klint Finley, Your Own Private Google: The Quest for an Open Source Search Engine, Wired (Jul. 12, 2021), https://www.wired.com/2012/12/solar-elasticsearch-google.

[31] See Brian Connolly, Selling on Amazon vs. eBay in 2021: Which Is Better?, JungleScout (Jan. 12, 2021), https://www.junglescout.com/blog/amazon-vs-ebay; Crucial Differences Between Amazon and eBay, SaleHOO, https://www.salehoo.com/educate/selling-on-amazon/crucial-differences-between-amazon-and-ebay (last visited Feb. 8, 2021).

[32] See, e.g., Dolby Vision Is Winning the War Against HDR10+, It Requires a Single Standard, Tech Smart, https://voonze.com/dolby-vision-is-winning-the-war-against-hdr10-it-requires-a-single-standard (last visited June 6, 2022).

[33] On the importance of managers, see, e.g., Nicolai J Foss & Peter G Klein, Why Managers Still Matter, 56 MIT Sloan Mgmt. Rev., 73 (2014) (“In today’s knowledge-based economy, managerial authority is supposedly in decline. But there is still a strong need for someone to define and implement the organizational rules of the game.”).

[34] It is generally agreed upon that anticompetitive foreclosure is possible only when a firm enjoys some degree of market power. Frank H. Easterbrook, Limits of Antitrust, 63 Tex. L. Rev. 1, 20 (1984) (“Firms that lack power cannot injure competition no matter how hard they try. They may injure a few consumers, or a few rivals, or themselves (see (2) below) by selecting ‘anticompetitive’ tactics. When the firms lack market power, though, they cannot persist in deleterious practices. Rival firms will offer the consumers better deals. Rivals’ better offers will stamp out bad practices faster than the judicial process can. For these and other reasons many lower courts have held that proof of market power is an indispensable first step in any case under the Rule of Reason. The Supreme Court has established a market power hurdle in tying cases, despite the nominally per se character of the tying offense, on the same ground offered here: if the defendant lacks market power, other firms can offer the customer a better deal, and there is no need for judicial intervention.”).

[35] See, e.g., Josh Lerner & Jean Tirole, Some Simple Economics of Open Source, 50 J. Indus. Econ. 197 (2002).

[36] See Matthew Miller, Thanks, Samsung: Android’s Best Mobile Browser Now Available to All, ZDNet (Aug. 11, 2017), https://www.zdnet.com/article/thanks-samsung-androids-best-mobile-browser-now-available-to-all.

[37] FACT SHEET: Windows XP N Sales, RegMedia (Jun. 12, 2009), available at https://regmedia.co.uk/2009/06/12/microsoft_windows_xp_n_fact_sheet.pdf.

[38] See Case COMP/39.530, Microsoft (Tying), OJ C 120 (Apr. 26, 2013).

[39] Konstantinos Stylianou, Systemic Efficiencies in Competition Law: Evidence from the ICT Industry, 12 J. Competition L. & Econ. 557 (2016).

[40] See, e.g., Steven Sinofsky, The App Store Debate: A Story of Ecosystems, Medium (Jun. 21, 2020), https://medium.learningbyshipping.com/the-app-store-debate-a-story-of-ecosystems-938424eeef74.

[41] Id.

[42] See, e.g., Benjamin Klein, Market Power in Aftermarkets, 17 Managerial & Decision Econ. 143 (1996).

[43] See, e.g., Simon Hill, What Is Android Fragmentation, and Can Google Ever Fix It?, DigitalTrends (Oct. 31, 2018), https://www.digitaltrends.com/mobile/what-is-android-fragmentation-and-can-google-ever-fix-it.

[44] Metaverse Market Revenue Worldwide from 2022 to 2030, Statista, https://www.statista.com/statistics/1295784/metaverse-market-size (last visited May 3, 2023); Metaverse Market by Component (Hardware, Software (Extended Reality Software, Gaming Engine, 3D Mapping, Modeling & Reconstruction, Metaverse Platform, Financial Platform), and Professional Services), Vertical and Region – Global Forecast to 2027, Markets and Markets (Apr. 27, 2023), https://www.marketsandmarkets.com/Market-Reports/metaverse-market-166893905.html; see also, Press Release, Metaverse Market Size Worth $ 824.53 Billion, Globally, by 2030 at 39.1% CAGR, Verified Market Research (Jul. 13, 2022), https://www.prnewswire.com/news-releases/metaverse-market-size-worth–824-53-billion-globally-by-2030-at-39-1-cagr-verified-market-research-301585725.html.

[45] See, e.g., Megan Farokhmanesh, Will the Metaverse Live Up to the Hype? Game Developers Aren’t Impressed, Wired (Jan. 19, 2023), https://www.wired.com/story/metaverse-video-games-fortnite-zuckerberg; see also Mitch Wagner, The Metaverse Hype Bubble Has Popped. What Now?, Fierce Electronics (Feb. 24, 2023), https://www.fierceelectronics.com/embedded/metaverse-hype-bubble-has-popped-what-now.

[46] Garrett A. Johnson, et al., Privacy and Market Concentration: Intended and Unintended Consequences of the GDPR, Forthcoming Management Science 1 (2023).

[47] Jian Jia, et al., The Short-Run Effects of GDPR on Technology Venture Investment, NBER Working Paper 25248, 4 (2018), available at https://www.nber.org/system/files/working_papers/w25248/w25248.pdf.

[48] Samuel G. Goldberg, Garrett A. Johnson, & Scott K. Shriver, Regulating Privacy Online: An Economic Evaluation of GDPR (2021), available at https://www.ftc.gov/system/files/documents/public_events/1588356/johnsongoldbergshriver.pdf.

[49] Rebecca Janßen, Reinhold Kesler, Michael Kummer, & Joel Waldfogel, GDPR and the Lost Generation of Innovative Apps, NBER Working Paper 30028, 2 (2022), available at https://www.nber.org/system/files/working_papers/w30028/w30028.pdf.

[50] Rajkumar Venkatesan, S. Arunachalam & Kiran Pedada, Short Run Effects of Generalized Data Protection Act on Returns from AI Acquisitions, University of Virginia Working Paper 6 (2022), available at: https://conference.nber.org/conf_papers/f161612.pdf. (“On average, GDPR exposure reduces the ROA of firms. We also find that GDPR exposure increases the ROA of firms that make AI acquisitions for improving customer experience, and cybersecurity. Returns on AI investments in innovation and operational efficiencies are unaffected by GDPR.”)

[51] For a detailed discussion of the empirical literature concerning the GDPR, see Garrett Johnson, Economic Research on Privacy Regulation: Lessons From the GDPR And Beyond, NBER Working Paper 30705 (2022), available at https://www.nber.org/system/files/working_papers/w30705/w30705.pdf.


ICLE Amicus in RE: Gilead Tenofovir Cases

Amicus Brief Dear Justice Guerrero and Associate Justices, In accordance with California Rule of Court 8.500(g), we are writing to urge the Court to grant the Petition . . .

Dear Justice Guerrero and Associate Justices,

In accordance with California Rule of Court 8.500(g), we are writing to urge the Court to grant the Petition for Review filed by Petitioner Gilead Sciences, Inc. (“Petitioner” or “Gilead”) on February 21, 2024, in the above-captioned matter.

We agree with Petitioner that the Court of Appeal’s finding of a duty of reasonable care in this case “is such a seismic change in the law and so fundamentally wrong, with such grave consequences, that this Court’s review is imperative.” (Pet. 6.) The unprecedented duty of care put forward by the Court of Appeal—requiring prescription drug manufacturers to exercise reasonable care toward users of a current drug when deciding when to bring a new drug to market (Op. 11)—would have far-reaching, harmful implications for innovation that the Court of Appeal failed properly to weigh.

If upheld, this new duty of care would significantly disincentivize pharmaceutical innovation by allowing juries to second-guess complex scientific and business decisions about which potential drugs to prioritize and when to bring them to market. The threat of massive liability simply for not developing a drug sooner would make companies reluctant to invest the immense resources needed to bring new treatments to patients. Perversely, this would deprive the public of lifesaving and less costly new medicines. And the prospective harm from the Court of Appeal’s decision is not limited only to the pharmaceutical industry.

We urge the Court to grant the Petition for Review and to hold that innovative firms do not owe the users of current products a “duty to innovate” or a “duty to market”—that is, that firms cannot be held liable to users of a current product for development or commercialization decisions on the basis that those decisions could have facilitated the introduction of a less harmful, alternative product.

Interest of Amicus Curiae

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates. It also has longstanding expertise in evaluating law and policy relating to innovation and the legal environment facing commercial activity. In this letter, we wish to briefly highlight some of the crucial considerations concerning the effect on innovation incentives that we believe would arise from the Court of Appeal’s ruling in this case.[1]

The Court of Appeal’s Duty of Care Standard Would Impose Liability Without Requiring Actual “Harm”

The Court of Appeal’s ruling marks an unwarranted departure from decades of products-liability law requiring plaintiffs to prove that the product that injured them was defective. Expanding liability to products never even sold is an unprecedented, unprincipled, and dangerous approach to product liability. Plaintiffs’ lawyers may seek to apply this new theory to many other beneficial products, arguing manufacturers should have sold a superior alternative sooner. This would wreak havoc on innovation across industries.

California Civil Code § 1714 does not impose liability for “fail[ing] to take positive steps to benefit others,” (Brown v. USA Taekwondo (2021) 11 Cal.5th 204, 215), and Plaintiffs did not press a theory that the medicine they received was defective. Moreover, the product included all the warnings required by federal and state law. Thus, Plaintiffs’ case—as accepted by the Court of Appeal—is that they consumed a product authorized by the FDA, that they were fully aware of its potential side effects, but that they might have had fewer side effects had Gilead made the decision to accelerate (against some indefinite baseline) the development of an alternative medicine. To call this a speculative harm is an understatement, and to dismiss Gilead’s conduct as unreasonable because it was motivated by a crass profit motive, (Op. at 32), elides many complicated facts that belie such a facile assertion.

A focus on the narrow question of profits for a particular drug misunderstands the inordinate complexity of pharmaceutical development and risks seriously impeding the rate of drug development overall. Doing so

[over-emphasizes] the recapture of “excess” profits on the relatively few highly profitable products without taking into account failures or limping successes experienced on the much larger number of other entries. If profits were held to “reasonable” levels on blockbuster drugs, aggregate profits would almost surely be insufficient to sustain a high rate of technological progress. . . . If in addition developing a blockbuster is riskier than augmenting the assortment of already known molecules, the rate at which important new drugs appear could be retarded significantly. Assuming that important new drugs yield substantial consumers’ surplus untapped by their developers, consumers would lose along with the drug companies. Should a tradeoff be required between modestly excessive prices and profits versus retarded technical progress, it would be better to err on the side of excessive profits. (F. M. Scherer, Pricing, Profits, and Technological Progress in the Pharmaceutical Industry, 7 J. Econ. Persp. 97, 113 (1993)).

Indeed, Plaintiffs’ claim on this ground is essentially self-refuting. If the “superior” product they claim was withheld for “profit” reasons was indeed superior, then Petitioner could have expected to earn a superior return on that product. Thus, Plaintiffs claim they were allegedly “harmed” by not having access to a product that Petitioner was not yet ready to market, even though Petitioner had every incentive to release a potentially successful alternative as soon as possible, subject to a complex host of scientific and business considerations affecting the timing of that decision.

Relatedly, the Court of Appeal’s decision rests on the unfounded assumption that Petitioner “knew” TAF was safer than TDF after completing Phase I trials. This ignores the realities of the drug development process and the inherent uncertainty of obtaining FDA approval, even after promising early results. Passing Phase I trials, which typically involve a small number of healthy volunteers, is a far cry from having a marketable drug. According to the Biotechnology Innovation Organization, only 7.9% of drugs that enter Phase I trials ultimately obtain FDA approval.[2] (Biotechnology Innovation Organization, Clinical Development Success Rates and Contributing Factors 2011-2020, Fig. 8b (2021), available at https://perma.cc/D7EY-P22Q.) Even after Phase II trials, which assess efficacy and side effects in a larger patient population, the success rate is only about 15.1%. (Id.) Thus, at the time Gilead decided to pause TAF development, it faced significant uncertainty about whether TAF would ever reach the market, let alone ultimately prove safer than TDF.

Moreover, the clock on Petitioner’s patent exclusivity for TAF was ticking throughout the development process. Had Petitioner “known” that TAF was a safer and more effective drug, it would have had every incentive to bring it to market as soon as possible to maximize the period of patent protection and the potential to recoup its investment. The fact that Petitioner instead chose to focus on TDF strongly suggests that it did not have the level of certainty the Court of Appeal attributed to it.

Although conventional wisdom has often held otherwise, economists generally dispute the notion that companies have an incentive to unilaterally suppress innovation for economic gain.

While rumors long have circulated about the suppression of a new technology capable of enabling automobiles to average 100 miles per gallon or some new device capable of generating electric power at a fraction of its current cost, it is rare to uncover cases where a worthwhile technology has been suppressed altogether. (John J. Flynn, Antitrust Policy, Innovation Efficiencies, and the Suppression of Technology, 66 Antitrust L.J. 487, 490 (1998)).

Calling such claims “folklore,” the economists Armen Alchian and William Allen note that, “if such a [technology] did exist, it could be made and sold at a price reflecting the value of [the new technology], a net profit to the owner.” (Armen A. Alchian & William R. Allen, Exchange & Production: Competition, Coordination, & Control (1983), at 292). Indeed, “even a monopolist typically will have an incentive to adopt an unambiguously superior technology.” (Joel M. Cohen and Arthur J. Burke, An Overview of the Antitrust Analysis of Suppression of Technology, 66 Antitrust L.J. 421, 429 n. 28 (1998)). While nominal suppression of technology can occur for a multitude of commercial and technological reasons, there is scant evidence that doing so coincides with harm to consumers, except where doing so affirmatively interferes with market competition under the antitrust laws—a claim not advanced here.

One reason the tort system is inapt for second-guessing commercial development and marketing decisions is that those decisions may be made for myriad reasons that do not map onto the specific safety concern of a products-liability action. For example, in the 1930s, AT&T abandoned the commercial development of magnetic recording “for ideological reasons. . . . Management feared that availability of recording devices would make customers less willing to use the telephone system and so undermine the concept of universal service.” (Mark Clark, Suppressing Innovation: Bell Laboratories and Magnetic Recording, 34 Tech. & Culture 516, 520-24 (1993)). One could easily imagine arguments that coupling telephones and recording devices would promote safety. But the determination of whether safety or universal service (and the avoidance of privacy invasion) was a “better” basis for deciding whether to pursue the innovation is not within the ambit of tort law (nor the capability of a products-liability jury). And yet, it would necessarily become so if the Court of Appeal’s decision were to stand.

A Proper Assessment of Public Policy Would Cut Strongly Against Adoption of the Court of Appeal’s Holding

The Court of Appeal notes that “a duty that placed manufacturers ‘under an endless obligation to pursue ever-better new products or improvements to existing products’ would be unworkable and unwarranted,” (Op. 10), yet avers that “plaintiffs are not asking us to recognize such a duty” because “their negligence claim is premised on Gilead’s possession of such an alternative in TAF; they complain of Gilead’s knowing and intentionally withholding such a treatment….” (Id.)

From an economic standpoint, this is a distinction without a difference.

Both a “duty to invent” and a “duty to market” what is already invented would increase the cost of bringing any innovative product to market by saddling the developer with an expected additional (and unavoidable) obligation as a function of introducing the initial product, differing only perhaps by degree. Indeed, a “duty to invent” could conceivably be more socially desirable because in that case a firm could at least avoid liability by undertaking the process of discovering new products (a socially beneficial activity), whereas the “duty to market” espoused by the Court of Appeal would create only the opposite incentive—the incentive never to gain knowledge of a superior product on the basis of which liability might attach.[3]

And public policy is relevant. This Court in Brown v. Superior Court, (44 Cal. 3d 1049 (1988)), worried explicitly about the “[p]ublic policy” implications of excessive liability rules for the provision of lifesaving drugs. (Id. at 1063-65). As the Court in Brown explained, drug manufacturers “might be reluctant to undertake research programs to develop some pharmaceuticals that would prove beneficial or to distribute others that are available to be marketed, because of the fear of large adverse monetary judgments.” (Id. at 1063). The Court of Appeal agreed, noting that “the court’s decision [in Brown] was grounded in public policy concerns. Subjecting prescription drug manufacturers to strict liability for design defects, the court worried, might discourage drug development or inflate the cost of otherwise affordable drugs.” (Op. 29).

In rejecting the relevance of the argument here, however, the Court of Appeal (very briefly) argued a) that Brown espoused only a policy against burdening pharmaceutical companies with a duty stemming from unforeseeable harms, (Op. 49-50), and b) that the relevant cost here might be “some failed or wasted efforts,” but not a reduction in safety. (Op. 51).[4] Both of these claims are erroneous.

On the first, the legalistic distinction between foreseeable and unforeseeable harm was not, in fact, the determinative distinction in Brown. Rather, that distinction was relevant only because it maps onto the issue of incentives. In the face of unforeseeable, and thus unavoidable, harm, pharmaceutical companies would have severely diminished incentives to innovate. While foreseeable harms might also deter innovation by imposing some additional cost, these costs would be smaller, and avoidable or insurable, so that innovation could continue. To be sure, the Court wanted to ensure that the beneficial, risk-reduction effects of the tort system were not entirely removed from pharmaceutical companies. But that meant a policy decision that necessarily reduced the extent of tort-based risk optimization in favor of the manifest, countervailing benefit of relatively higher innovation incentives. That same calculus applies here, and it is this consideration, not the superficial question of foreseeability, that animated this Court in Brown.

On the second, the Court of Appeal inexplicably fails to acknowledge that the true cost of the imposition of excessive liability risk from a “duty to market” (or “duty to innovate”) is not limited to the expenditure of wasted resources, but the non-expenditure of any resources. The court’s contention appears to contemplate that such a duty would not remove a firm’s incentive to innovate entirely, although it might deter it slightly by increasing its expected cost. But economic incentives operate at the margin. Even if there remains some profit incentive to continue to innovate, the imposition of liability risk simply for the act of doing so would necessarily reduce the amount of innovation (in some cases, and especially for some smaller companies less able to bear the additional cost, to the point of deterring innovation entirely). But even this reduction in incentive is a harm. The fact that some innovation may still occur despite the imposition of considerable liability risk is not a defense of the imposition of that risk; rather, it is a reason to question its desirability, exactly as this Court did in Brown.

The Court of Appeal’s Decision Would Undermine Development of Lifesaving and Safer New Medicines

Innovation is a long-term, iterative process fraught with uncertainty. At the outset of research and development, it is impossible to know whether a potential new drug will ultimately prove superior to existing drugs. Most attempts at innovation fail to yield a marketable product, let alone one that is significantly safer or more effective than its predecessors. Deciding whether to pursue a particular line of research depends on weighing myriad factors, including the anticipated benefits of the new drug, the time and expense required to develop it, and its financial viability relative to existing products. Sometimes, potentially promising drug candidates are not pursued fully, even if theoretically “better” than existing drugs to some degree, because the expected benefits are not sufficient to justify the substantial costs and risks of development and commercialization.

If left to stand, the Court of Appeal’s decision would mean that whenever this stage of development is reached for a drug that may offer any safety improvement, the manufacturer will face potential liability for failing to bring that drug to market, regardless of the costs and risks involved in its development or the extent of the potential benefit. Such a rule would have severe unintended consequences that would stifle innovation.

First, by exposing manufacturers to liability on the basis of early-stage research that has not yet established a drug candidate’s safety and efficacy, the Court of Appeal’s rule would deter manufacturers from pursuing innovations in the first place. Drug research involves constant iteration, with most efforts failing and the potential benefits of success highly uncertain until late in the process. If any improvement, no matter how small or tentative, could trigger liability for failing to develop the new drug, manufacturers will be deterred from trying to innovate at all.

Second, such a rule would force manufacturers to direct scarce resources to developing and commercializing drugs that offer only small or incremental benefits because failing to do so would invite litigation. This would necessarily divert funds away from research into other potential drugs that could yield greater advancements. Further, as each small improvement is made, it reduces the relative potential benefit from, and therefore the incentive to undertake, further improvements. Rather than promoting innovation, the Court of Appeal’s decision would create incentives that favor small, incremental changes over larger, riskier leaps with the greatest potential to significantly advance patient welfare.

Third, and conversely, the Court of Appeal’s decision would set an unrealistic and dangerous standard of perfection for drug development. Pharmaceutical companies should not be expected to bring only the “safest” version of a drug to market, as this would drastically increase the time and cost of drug development and deprive patients of access to beneficial treatments in the meantime.

Fourth, the threat of liability would lead to inefficient and costly distortions in how businesses organize their research and development efforts. To minimize the risk of liability, manufacturers may avoid integrating ongoing research into existing product lines, instead keeping the processes separate unless and until a potential new technology is developed that offers benefits so substantial as to clearly warrant the costs and liability exposure of its development in the context of an existing drug line. Such an incentive would prevent potentially beneficial innovations from being pursued and would increase the costs of drug development.

Finally, the ruling would create perverse incentives that could actually discourage drug companies from developing and introducing safer alternative drugs. If bringing a safer drug to market later could be used as evidence that the first-generation drug was not safe enough, companies may choose not to invest in developing improved versions at all in order to avoid exposing themselves to liability. This would, of course, directly undermine the goal of increasing drug safety overall.

The Court of Appeal gave insufficient consideration to these severe policy consequences of the duty it recognized. A manufacturer’s decision when to bring a potentially safer drug to market involves complex trade-offs that courts are ill-equipped to second-guess—particularly in the limited context of a products-liability determination.

Conclusion

The Court of Appeal’s novel “duty to market” any known, less-harmful alternative to an existing product would deter innovation to the detriment of consumers. The Court of Appeal failed to consider how its decision would distort incentives in a way that harms the very patients the tort system is meant to protect. This Court should grant review to address these important legal and policy issues and to prevent this unprecedented expansion of tort liability from distorting manufacturers’ incentives to develop new and better products.

[1] No party or counsel for a party authored or paid for this amicus letter in whole or in part.

[2] It is important to note that this number varies with the kind of medicine involved, but across all categories of medicines there is a high likelihood of failure subsequent to Phase I trials.

[3] To the extent the concern is with disclosure of information regarding a potentially better product, that is properly a function of the patent system, which requires public disclosure of new ideas in exchange for the receipt of a patent. (See Brenner v. Manson, 383 U.S. 519, 533 (1966) (“one of the purposes of the patent system is to encourage dissemination of information concerning discoveries and inventions.”)). Of course, the patent system preserves innovation incentives despite the mandatory disclosure of information by conferring an exclusive right to the inventor to use the new knowledge. By contrast, using the tort system as an information-forcing device in this context would impose risks and costs on innovation without commensurate benefit, ensuring less, rather than more, innovation.

[4] The Court of Appeal makes a related argument when it claims that “the duty does not require manufacturers to perfect their drugs, but simply to act with reasonable care for the users of the existing drug when the manufacturer has developed an alternative that it knows is safer and at least equally efficacious. Manufacturers already engage in this type of innovation in the ordinary course of their business, and most plaintiffs would likely face a difficult road in establishing a breach of the duty of reasonable care.” (Op. at 52-3).

Innovation & the New Economy

How a Recent California Appellate Court Decision Will Chill Drug Development, Raise Pharmaceutical Costs

Popular Media When we are sick or in pain, we need relief. We know available prescription drugs won’t always be perfect. They sometimes have side effects. But . . .

When we are sick or in pain, we need relief. We know available prescription drugs won’t always be perfect. They sometimes have side effects. But we are grateful for even imperfect relief as an alternative to perfect pain.

Pharmaceutical companies aim to identify good drugs and get them to market, while constantly returning to the lab to innovate and make them even better, working to get the next version closer to perfect and with fewer side effects. But, thanks to a recent decision by a California appellate court, the incentives to develop new drugs and to innovate toward even better alternatives may be severely undermined. California may have permanently impeded pharmaceutical innovation by holding that a drug company can be sued for bringing two safe drugs to market, simply because it did not discover the better one first. If the decision stands, these companies can be punished for bringing any drug to market unless they withhold it until they have found the perfect one.

Read the full piece here.

Innovation & the New Economy

SEPs: The West Need Not Cede to China

TL;DR Background: Policymakers on both sides of the Atlantic are contemplating new regulations on standard-essential patents (SEPs). While the European Union (EU) is attempting to . . .

TL;DR

Background: Policymakers on both sides of the Atlantic are contemplating new regulations on standard-essential patents (SEPs). While the European Union (EU) is attempting to pass legislation toward that end, U.S. authorities like the Department of Commerce and U.S. Patent and Trademark Office are examining the issues and potentially contemplating their own reforms to counteract changes made by the EU.

But… These efforts would ultimately hand an easy geopolitical win to rivals like China. Not only do the expected changes risk harming U.S. and EU innovators and the standardization procedures upon which they rely, but they lend legitimacy to concerning Chinese regulatory responses that clearly and intentionally place a thumb on the scale in favor of domestic firms. The SEP ecosystem is extremely complex, and knee-jerk regulations may create a global race to the bottom that ultimately harms the very firms and consumers they purport to protect.

KEY TAKEAWAYS

EUROPEAN LEGISLATION, GLOBAL REACH

In April 2023, the EU published its “Proposal for a Regulation on Standard Essential Patents.” The proposal seeks to improve transparency by creating a register of SEPs (and accompanying essentiality checks), and to accelerate the diffusion of these technologies by, among other things, implementing a system of nonbinding arbitration of aggregate royalties and “fair, reasonable, and non-discriminatory” (FRAND) terms. 

But while the proposal nominally applies only to European patents, its effects would be far broader. Notably, the opinions on aggregate royalties and FRAND terms would apply worldwide. European policymakers would thus rule (albeit in nonbinding fashion) on the appropriate royalties to be charged around the globe. This would further embolden foreign jurisdictions to respond in kind, often without the guardrails and independence that have traditionally served to cabin policymakers in the West.

CHINA’S EFFORTS TO BECOME A ‘CYBER GREAT POWER’

Chinese policymakers have long considered SEPs to be of vital strategic importance, and have taken active steps to protect Chinese interests in this space. The latest move came from the Chongqing First Intermediate People’s Court in a dispute between Chinese firm Oppo and Finland’s Nokia. In a controversial December 2023 ruling, the court limited the maximum FRAND royalties that Nokia could charge Oppo for use of Nokia’s SEPs pertaining to the 5G standard.

Unfortunately, the ruling appears obviously biased toward Chinese interests. In calculating the royalties that Nokia could charge Oppo, the court applied a sizable discount in China. It’s been reported that, in reaching its conclusion, the court defined an aggregate royalty rate for all 5G patents, and divided the proceeds by the number of patents each firm held—a widely discredited metric.

The court’s ruling has widely been seen as a protectionist move, which has elicited concern from western policymakers. It appears to set a dangerous precedent in which geopolitical considerations will begin to play an increasingly large role in the otherwise highly complex and technical field of SEP policy.

TRANSPARENCY, AGGREGATE ROYALTY MANDATES, AND FRAND DETERMINATIONS

Leaving aside how China may respond, the EU’s draft regulation will likely be detrimental to innovators. The regulation would create a system of government-run essentiality checks and nonbinding royalty arbitrations. The goal would be to improve transparency and verify that patents declared “standard essential” truly qualify for that designation.

This system would, however, be both costly and difficult to operate. It would require such a large number of qualified experts to serve as evaluators and conciliators that it may prove exceedingly difficult (or impossible) to find them. The sheer volume of work required of these experts would likely be insurmountable, with the costs borne by industry players. Inventors would also be precluded from seeking injunctions while arbitration is ongoing. Ultimately, while nonbinding, the system may lead to a de facto royalty cap that lowers innovation.

Finally, it’s unclear whether this form of coordinated information sharing and collective royalty setting may give rise to collusion at various points in the value chain. This threatens both to harm consumers and to deter firms from commercializing standardized technologies. 

In short, these kinds of top-down initiatives likely fail to capture the nuances of individualized patents and standards. They may also add confusion and undermine the incentives that drive affordable innovation.

WESTERN POLICYMAKERS MUST RESIST CHINA’S INDUSTRIAL POLICY

The bottom line is that the kinds of changes under consideration by both U.S. and EU policymakers may undermine innovation in the West. SEP entrepreneurs have been successful because they have been able to monetize their innovations. If authorities take steps that needlessly imbalance the negotiation process between innovators and implementers—as Chinese courts have started to do and as Europe’s draft regulation may unintentionally achieve—it will harm both U.S. and EU leadership in intellectual-property-intensive industries. In turn, this would accelerate China’s goal of becoming “a cyber great power.”

For more on this issue, see the ICLE issue brief “FRAND Determinations Under the EU SEP Proposal: Discarding the Huawei Framework,” as well as the “ICLE Comments to USPTO on Issues at the Intersection of Standards and Intellectual Property.”

Intellectual Property & Licensing

Questions Arise on SB 1596: The Right to Repair Bill

Popular Media The Oregon Senate earlier this month approved SB 1596, the so-called “right to repair” bill. This legislation now awaits consideration in the Oregon House, with . . .

The Oregon Senate earlier this month approved SB 1596, the so-called “right to repair” bill. This legislation now awaits consideration in the Oregon House, with a hearing of the House Committee on Business and Labor scheduled for Wednesday.

While motivated by good intentions, this legislation risks unintended consequences that could ultimately harm consumers. Lawmakers should proceed cautiously.

Read the full piece here.

Intellectual Property & Licensing

ICLE Response to the AI Accountability Policy Request for Comment

Regulatory Comments I. Introduction: How Do You Solve a Problem Like ‘AI’? On behalf of the International Center for Law & Economics (ICLE), we thank the National . . .

I. Introduction: How Do You Solve a Problem Like ‘AI’?

On behalf of the International Center for Law & Economics (ICLE), we thank the National Telecommunications and Information Administration (NTIA) for the opportunity to respond to this AI Accountability Policy Request for Comment (RFC).

A significant challenge that emerges in discussions concerning accountability and regulation for artificial intelligence is the broad and often ambiguous definition of “AI” itself. This is demonstrated in the RFC’s framing:

This Request for Comment uses the terms AI, algorithmic, and automated decision systems without specifying any particular technical tool or process. It incorporates NIST’s definition of an “AI system,” as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” This Request’s scope and use of the term “AI” also encompasses the broader set of technologies covered by the Blueprint: “automated systems” with “the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”[1]

As stated, the RFC’s scope could be read to cover virtually all software.[2] But it is essential to acknowledge that, for the purposes of considering potential regulation, we lack a definition of AI that is both sufficiently broad to cover all or even most areas of concern and sufficiently focused to be a useful lens for analysis. That is to say, what we think of as AI encompasses a significant diversity of discrete technologies that will be put to a huge number of potential uses.

One useful recent comparison is with the approach the Obama administration took in its deliberations over nanotechnology regulation in 2011.[3] Following years of consultation and debate, the administration opted for a parsimonious, context-specific approach precisely because “nanotechnology” is not really a single technology. In that proceeding, the administration ultimately recognized that it was not the general category of “nanotechnology” that was relevant, nor the fact that nanotechnologies are those that operate at very small scales, but rather the means by and degree to which certain tools grouped under the broad heading of “nanotechnology” could “alter the risks and benefits of a specific application.”[4] This calls to mind Judge Frank Easterbrook’s famous admonition that a “law of cyberspace” would be no more useful than a dedicated “law of the horse.”[5] Indeed, we believe Easterbrook’s observation applies equally to the creation of a circumscribed “law of AI.”

While there is nothing inherently wrong with creating a broad regulatory framework to address a collection of loosely related subjects, there is a danger that the very breadth of such a framework might over time serve to foreclose more fruitful and well-fitted forms of regulation.

A second concern in the matter immediately at hand is, as mentioned above, the potential for AI regulation to be formulated so broadly as to encompass essentially all software. Whether by design or by accident, such breadth runs a number of risks. First, since the scope of the regulation would potentially cover a much broader subject, the narrow discussion of “AI” will miss many important aspects of broader software regulation and will, as a consequence, create an ill-fitted legal regime. Second, by sweeping a far wider range of tools into such a regulation than the drafters publicly acknowledge, the democratic legitimacy of the process is undermined.

A.      The Danger of Regulatory Overaggregation

The current hype surrounding AI has been driven by popular excitement, as well as incentives for media to capitalize on that excitement. While this is understandable, it arguably has led to oversimplification in public discussions about the underlying technologies. In reality, AI is an umbrella term that encompasses a diverse range of technologies, each with its own unique characteristics and applications.

For instance, relatively lower-level technologies like large language models (LLMs)[6] differ significantly from diffusion techniques.[7] At the level of applications, recommender systems can employ a wide variety of different machine-learning (or even more basic statistical) techniques.[8] All of these techniques, collectively called “AI,” also differ from the wide variety of algorithms employed by search engines, social media, consumer software, video games, streaming services, and so forth, although each also contains software “smarts,” so to speak, that could theoretically be grouped under the large umbrella of “AI.”

And none of the foregoing bear much resemblance at all to what the popular imagination conjures when we speak of AI—that is, artificial general intelligence (AGI), which some experts argue may not even be achievable.[9]

Attempting to create a single AI regulatory scheme commits what we refer to as “regulatory overaggregation”—sweeping together a disparate set of more-or-less related potential regulatory subjects under a single category in a manner that overfocuses on the abstract term and obscures differences among the subjects. The domains of “privacy rights” and “privacy regulation” are illustrative of the dangers inherent in this approach. There are, indeed, many potential harms (both online and offline) that implicate the concept of “privacy,” but the differences among these recommend examining closely the various contexts that attend each.

Individuals often invoke their expectation of “privacy,” for example, in contexts where they want to avoid the public revelation of personal or financial information. This sometimes manifests as the assertion of a right to control data as a form of quasi-property, or as a form of a right to anti-publicity (that is, a right not to be embarrassed publicly). Indeed, writing in 1890 with his law partner Samuel D. Warren, future Supreme Court Justice Louis Brandeis posited a “right to privacy” as akin to a property right.[10] Warren & Brandeis argued that privacy is not merely a matter of seclusion, but extends to the individual’s control over their personal information.[11] This “right to be let alone” delineates a boundary against unwarranted intrusion, which can be seen as a form of intangible property right.[12]

This framing can be useful as an abstract description of a broad class of interests and concerns, but it fails to offer sufficient specificity to describe actionable areas of law. Brandeis & Warren were concerned primarily with publicity;[13] that is, with a property right to control one’s public identity as a public figure. This, in turn, implicates a wide range of concerns, from an individual’s interest in commercialization of their public image to their options for mitigating defamation, as well as technologies that range from photography to website logging to GPS positioning.

But there are clearly other significant public concerns that fall broadly under the heading of “privacy” that cannot be adequately captured by the notion of controlling a property right “to be let alone.” Consider, for example, the emerging issue of “revenge porn.” It is certainly a privacy harm in the Brandeisian sense that it implicates the property right not to have one’s private images distributed without consent. But that framing fails to capture the full extent of potential harms, such as emotional distress and reputational damage.[14] Similarly, cases in which an individual’s cellphone location data are sold to bounty hunters are not primarily about whether a property right has been violated, as they raise broader issues concerning potential abuses of power, stalking, and even physical safety.[15]

These examples highlight some of the ways that, in failing to take account of the distinct facts and contexts that can attend privacy harms, an overaggregated “law of privacy” may tend to produce regulations insufficiently tailored to address those diverse harms.

By contrast, the domain of intellectual property (IP) may serve as an instructive counterpoint to the overaggregated nature of privacy regulation. IP encompasses a vast array of distinct legal constructs, including copyright, patents, trade secrets, trademarks, and moral rights, among others. But in the United States—and indeed, in most jurisdictions around the world—there is no overarching “law of intellectual property” that gathers all of these distinct concerns under a singular regulatory umbrella. Instead, legislation is specific to each area, resulting in copyright-specific acts, patent-specific acts, and so forth. This approach acknowledges that, within IP law, each IP construct invokes unique rights, harms, and remedies that warrant a tailored legislative focus.

The similarity of some of these areas does lend itself to conceptual borrowing, which has tended to enrich the legislative landscape. For example, U.S. copyright law has imported doctrines from patent law.[16] Despite such cross-pollination, copyright law and patent law remain distinct. In this way, intellectual property demonstrates the advantages of focusing on specific harms and remedies. This could serve as a valuable model for AI, where the harms and remedies are equally diverse and context dependent.

If AI regulations are too broad, they may inadvertently encompass any algorithm used in commercially available software, effectively stifling innovation and hindering technological advancements. This is no less true of good-faith efforts to craft laws in any number of domains that nonetheless suffer from a host of unintended consequences.[17]

At the same time, for a regulatory regime covering such a broad array of varying technologies to be intelligible, it is likely inevitable that tradeoffs made to achieve administrative efficiency will cause at least some real harms to be missed. Indeed, NTIA acknowledges this in the RFC:

Commentators have raised concerns about the validity of certain accountability measures. Some audits and assessments, for example, may be scoped too narrowly, creating a “false sense” of assurance. Given this risk, it is imperative that those performing AI accountability tasks are sufficiently qualified to provide credible evidence that systems are trustworthy.[18]

To avoid these unintended consequences, it is crucial to develop a more precise understanding of AI and its various subdomains, and to focus any regulatory efforts toward addressing specific harms that would not otherwise be captured by existing laws. The RFC declares that its aim is “to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”[19] As we discuss below, rather than promulgate a set of recommendations about the use of AI, NTIA should focus on cataloguing AI technologies and creating useful taxonomies that regulators and courts can use when they identify tangible harms.

II. AI Accountability and Cost-Benefit Analysis

The RFC states that:

The most useful audits and assessments of these systems, therefore, should extend beyond the technical to broader questions about governance and purpose. These might include whether the people affected by AI systems are meaningfully consulted in their design and whether the choice to use the technology in the first place was well-considered.[20]

It is unlikely that consulting all of the people potentially affected by a set of technological tools could fruitfully contribute to the design of any regulatory system other than one that simply bans those tools.[21] Any intelligible accountability framework must be dedicated to evaluating the technology’s real-world impacts, rather than positing thought experiments about speculative harms. Where tangible harms can be identified, such evaluations should encompass existing laws that focus on those harms and how various AI technologies might alter how existing law would apply. Only in cases where the impact of particular AI technologies represents a new kind of harm, or raises concerns that fall outside existing legal regimes, should new regulatory controls be contemplated.

AI technologies will have diverse applications and consequences, with the potential for both beneficial and harmful outcomes. Rather than focus on how to constrain either AI developers or the technology itself, the focus should be on how best to mitigate or eliminate any potential negative consequences to individuals or society.

NTIA asks:

AI accountability measures have been proposed in connection with many different goals, including those listed below. To what extent are there tradeoffs among these goals?[22]

This question acknowledges that, fundamentally, AI accountability comes down to cost-benefit analysis. In conducting such analysis, we urge NTIA and any other relevant agencies to account not only for potential harms, but also to take seriously the massive benefits these technologies might provide.

A.      The Law Should Identify and Address Tangible Harms, Incorporating Incremental Changes

To illustrate the challenges inherent to tailoring regulation of a new technology like AI to address the ways that it might generally create harm, it could be useful to analogize to a different existing technology: photography. If camera technology were brand new, we might imagine a vast array of harms that could arise from its use. But it should be obvious that creating an overarching accountability framework for all camera technology is absurd. Instead, laws of general applicability should address harmful uses of cameras, such as the invasion of privacy rights posed by surreptitious filming. Even where a camera is used in the commission of a crime—e.g., surveilling a location in preparation to commit a burglary—it is not typically the technology itself that is the subject of legal concern; rather, it is the acts of surveillance and burglary.

Even where we can identify a tangible harm that a new technology facilitates, the analysis is not complete. Instead, we need to balance the likelihood of harmful uses of that technology with the likelihood of nonharmful (or beneficial) uses of that technology. Copyright law provides an apt example.

Sony,[23] often referred to as the “Betamax case,” was a landmark U.S. Supreme Court case in 1984 that centered on Sony’s Betamax VCR—the first consumer device that could record television shows for later viewing, a concept now referred to as time-shifting.[24] Plaintiffs alleged that, by manufacturing and selling the Betamax VCRs, Sony was secondarily liable for copyright infringement carried out by its customers when they recorded television shows.[25] In a 5-4 decision, the Supreme Court ruled in favor of Sony, holding that the use of the Betamax VCR to record television shows for later personal viewing constituted “fair use” under U.S. copyright law.[26]

Critical for our purposes here was that the Court found that Sony could not be held liable for contributory infringement because the Betamax VCR was capable of “substantial noninfringing uses.”[27] This is to say that, faced with a new technology (recording relatively high-quality copies of television shows and movies at home), the Court recognized that, while the Betamax might facilitate some infringement, it would be inappropriate to apply a presumption against its use.

Sony and related holdings did not declare that using VCRs to infringe copyright was acceptable. Indeed, copyright enforcement for illegal reproduction has continued apace, even when using new technologies capable of noninfringing uses.[28] At the same time, the government did not create a new regulatory and licensing regime to govern the technology, despite the fact that it was a known vector for some illicit activity.

Note that the Sony case is also important for its fair-use analysis, and is widely cited for the proposition that so-called “time shifting” is permissible. That is not central to our point here, particularly as there is no analogue to fair use proposed in the AI context. But even here, it illustrates how the law adapts to develop doctrines that excuse conduct that would otherwise be a violation. In the case of copyright, unauthorized reproduction is infringement, period.[29] Fair use is raised as an affirmative defense[30] to excuse some unauthorized reproduction because courts have long recognized that, when viewed case by case, the application of legal rules needs to be tailored to make room for unexpected fact patterns in which acts that would otherwise be considered violations yield some larger social benefit.

We are not suggesting the development of a fair-use doctrine for AI, but are instead insisting that AI accountability and regulation must be consistent with the case-by-case approach that has characterized the common law for centuries. Toward that end, it would be best for law relevant to AI to emerge through that same bottom-up, case-by-case process. To the extent that any new legislation is passed, it should be incremental and principles-based, thereby permitting the emergence of law that best fits particular circumstances and does not conflict with other principles of common law.

By contrast, there are instances where the law has recognized that certain technologies are more likely to be used for criminal purposes and should be strictly regulated. For example, many jurisdictions have made possession of certain kinds of weapons—e.g., nunchaku, shuriken “throwing stars,” and switchblade knives—per se illegal, despite possible legal uses (such as martial-arts training).[31] Similarly, although there is a strong Second Amendment protection for firearms in the United States, it is illegal for a felon to possess a firearm.[32] The reason these prohibitions developed is because it was deemed that possession of these devices in most contexts had no other possible use than the violation of the law. But these sorts of technologies are the exception, not the rule. Many chemicals that can be easily used as poisons are nonetheless available as, e.g., cleaning agents or fertilizers.

1.        The EU AI Act: An overly broad attempt to regulate AI

Nonetheless, some advocate regulating AI by placing new technologies into various broad categories of risk, each with their own attendant rules. For example, as proposed by the European Commission, the EU’s AI Act would regulate the use of AI systems that ostensibly pose risks to health, safety, and fundamental rights.[33] The proposal defines AI systems broadly to include essentially any software, and sorts them into three risk levels: unacceptable, high, and limited risk.[34] Unacceptable-risk systems are prohibited outright, while high-risk systems are subject to strict requirements, including mandatory conformity assessments.[35] Limited-risk systems face certain requirements related to adequate documentation and transparency.[36]

The AI Act defines AI so broadly that it would apply even to ordinary general-purpose software, as well as software that uses machine learning but does not pose significant risks.[37] The plain terms of the AI Act could be read to encompass common office applications, spam filters, and recommendation engines, thus potentially imposing considerable compliance burdens on businesses for their use of software that provides benefits dramatically greater than any expected costs.[38] A recently proposed amendment would “ban the use of facial recognition in public spaces, predictive policing tools, and to impose transparency measures on generative AI applications [like] OpenAI’s ChatGPT.”[39]

This approach constitutes a hodge-podge of top-down tech policing and one-off regulations. The AI Act starts with the presumption that regulators can design an abstract, high-level set of categories that capture the risk from “AI,” and then proceeds to force arbitrary definitions of particular “AI” implementations into those categories. This approach may get some things right and some things wrong, but whatever it gets right will not be the product of principled consistency. For example, it might be the case that “predictive policing” is a problem that merits per se prohibition, but is it really an AI problem? What happens if the police get exceptionally good at using publicly available data and spreadsheets to approximate 80% of what they are able to do with AI? Or even just 50%? Is it the use of AI that is the harm, or is it the practice itself?

Similarly, a requirement that firms expose the sources on which they train their algorithms might be good in some contexts, but useless or harmful in others.[40] Certainly, it can make sense when thinking about current publicly available generative tools that create images and video, and have no ability to point to a license or permission for their training data. Such cases have a high likelihood of copyright infringement. But should every firm be expected to do this? Surely there will be many cases where firms use their own internal data, or data not subject to property-rights protection at all, but where exposing those sources reveals sensitive internal information, like know-how or other trade secrets. In those cases, a transparency obligation could have a chilling effect.

By contrast, it seems hard to believe that every use of public facial recognition should be banned. For instance, what if local authorities had limited access to facial recognition to find lost children or victims of trafficking?

More broadly, a strict transparency requirement could essentially make advanced machine-learning techniques illegal. By their nature, machine-learning systems and applications that employ LLMs make inferences and predictions that are, very often, not replicable.[41] That is, by their very nature they are not reviewable in a way that would be easily explained to a human in a transparency review. This means that strong transparency obligations could make it legally untenable to employ those techniques.

The broad risk-based approach taken by the AI Act faces difficult enforcement hurdles as well, as demonstrated by the EU’s proposal to essentially ban the open-source community from providing access to generative models.[42] In other words, not only do the proposed amendments seek to prohibit large companies such as OpenAI, Google, Anthropic, Amazon, Microsoft, and IBM from offering API access to generative AI models, but they would also prohibit open-source developers and distributors such as GitHub from doing the same.[43] Moreover, the prohibitions would have extraterritorial effects; for example, the EU might seek to impose large fines on U.S. companies for permitting access to their models in the United States, on grounds that those models could be imported into the EU by third parties.[44] These provisions reflect not only an attempt to control the distribution of AI technology, but also the wider reality that such attempts would essentially require steering worldwide innovation down a narrow, heavily regulated path.

2.        Focus on the harm and the wrongdoers, not the innovators

None of the foregoing is to suggest that it is impossible for AI to be misused. Where it is misused, there should be actionable legal consequences. For example, if a real-estate developer intentionally used AI tools to screen individuals out of purchasing homes on the basis of protected characteristics, that should be actionable. If a criminal found a novel way to use ChatGPT to commit fraud, that should be actionable. If generative AI is used to create “deep fakes” that further some criminal plot, that should be actionable. But in all those cases, it is not the AI itself that is the relevant unit of legal analysis, but the action of the criminal and the harm he causes.

Trying to build a regulatory framework that makes it impossible for bad actors to misuse AI would ultimately be fruitless. Bad actors will always find ways to misuse tools, and heavy-handed regulatory requirements (or even strong suggestions of such) might chill the development of useful tools that could generate an enormous amount of social welfare.

B.      Do Not Neglect the Benefits

A major complication in parsing the wisdom of potential AI regulation is that the technology remains largely in development. Indeed, this is the impetus for many of the calls to “do something” before it is “too late.”[45] The fear that some express is that, unless a wise regulator intervenes in the development process, the technology will inevitably develop in ways that yield more harm than good.[46]

But trying to regulate AI in accordance with the precautionary principle would almost certainly stifle development and dampen the tremendous, but unknowable, good that would emerge as these technologies mature and we find unique uses for them. Moreover, precautionary regulation, even in high-risk industries like nuclear power, can lead to net harms to social welfare.[47]

It is important here to distinguish two broad categories of concern about AI. First, there is the generalized concern about AGI, expressed as a fear that we are inadvertently creating a superintelligence with the power to snuff out human life at its whim. We reject this fear as a legitimate basis for new regulatory frameworks, although we concede that it is theoretically possible that this presumption may need to be revisited as AI technologies progress. None of the technologies currently under consideration are anywhere close to AGI. They are essentially just advanced prediction engines, whether the predictions concern text or pixels.[48] It seems highly unlikely that we will accidentally stumble onto AGI by plugging a few thousand prediction engines into one another.

There are more realistic concerns that these very impressive technologies will be misused to further discrimination and crime, or will have such a disruptive impact on areas like employment that they will quickly generate tremendous harms. When contemplating harms that could occur, however, it is also necessary to recognize that many significant benefits could also be generated. Moreover, as with earlier technologies, economic disruptions will provide both challenges and opportunities. It is easy to see, for instance, the immediate effect that ChatGPT has on the jobs of content writers, but less easy to measure the benefits that will be realized by firms that can deploy this technology to “in-source” tasks.

Firms often face what is called the “make-or-buy” decision. A firm that decides to purchase the services of an outside designer or copywriter has determined that doing so is more efficient than developing that talent in-house. But the fact that many firms employ a particular mix of outsourced and in-house talent to fulfill their business needs does not suggest a universally optimal solution to the make-or-buy problem. All we can do is describe how, under current conditions, firms solve this problem.

AI will surely alter how firms approach the make-or-buy decision. Pre-AI, it might have made sense to outsource a good deal of work that was not core to a firm’s mission. Post-AI, it might be the case that the firm can afford to hire additional workers who can utilize AI tools to more quickly and affordably manage the work that had been previously outsourced. Thus, the ability of AI tools to shift the make-or-buy decision, in itself, says nothing about the net welfare effects to society. Arguments could very well be made for either side. If history is any guide, however, it appears likely that AI tools will allow firms to do more with less, while also enabling more individuals to start new businesses with less upfront expense.

Moreover, by freeing capital from easily automated tasks, existing firms and new entrepreneurs could better focus on their core business missions. Excess investments previously made in supporting, for example, the creation of marketing content could be repurposed into R&D-intensive work. Simplistic static analyses of the substitution power of AI tools will almost surely mislead us, and make us neglect the larger social welfare that could be gained from organizations improving their efficiency with AI tools.

Economists have consistently found that dynamic competition—characterized by firms vying to deliver novel and enhanced products and services to consumers—contributes significantly more to economic growth than static competition, where technology is held constant, and firms essentially compete solely on price. As Joseph Schumpeter noted:

[I]t is not [price] competition which counts but the competition from the new commodity, the new technology, the new source of supply, the new type of organization…. This kind of competition is as much more effective than the other as a bombardment is in comparison with forcing a door, and so much more important that it becomes a matter of comparative indifference whether competition in the ordinary sense functions more or less promptly; the powerful lever that in the long run expands output and brings down prices is in any case made of other stuff.[49]

Technological advancements yield substantial welfare benefits for consumers, and there is a comprehensive body of scholarly work substantiating the contributions of technological innovation to economic growth and societal welfare.[50] There is also compelling evidence that technological progress engenders extensive spillovers not fully appropriated by the innovators.[51] Business-model innovations—such as advancements in organization, production, marketing, or distribution—can similarly result in extensive welfare gains.[52]

AI tools obviously are delivering a new kind of technological capability for firms and individuals. The disruptions they will bring will similarly spur business-model innovation as firms scramble to find innovative ways to capitalize on the technology. The potential economic dislocations can, in many cases, amount to reconstitution: a person who was a freelance content writer can be shifted to a different position that manages the output of generative AI and provides human edits to ensure that content makes sense and is based in fact. In many other cases, the dislocations will likely lead to increased opportunities for workers of all sorts.

With this in mind, policymakers need to consider how to identify those laws and regulations that are most likely to foster this innovation, while also enabling courts and regulators to adequately deal with potential harms. Although it is difficult to prescribe particular policies to boost innovation, there is strong evidence about what sorts of policies should be avoided. Most importantly, regulation of AI should avoid inadvertently destroying those technologies.[53] As Adam Thierer has argued, “if public policy is guided at every turn by the fear of hypothetical worst-case scenarios and the precautionary mindset, then innovation becomes less likely.”[54]

Thus, policymakers must be cautious to avoid unduly restricting the range of AI tools that compete for consumer acceptance. Key to fostering investment and innovation is not merely the endorsement of technological advancement, but advocacy for policies that empower innovators to execute and commercialize their technology.

By contrast, consider again the way that some EU lawmakers want to treat “high risk” algorithms under the AI Act. According to recently proposed amendments, if a “high risk” algorithm learns something beyond what its developers expect it to learn, the algorithm would need to undergo a conformity assessment.[55]

One of the prime strengths of AI tools is their capacity for unexpected discoveries, offering potential insights and solutions that might not have been anticipated by human developers. As the Royal Society has observed:

Machine learning is a branch of AI that enables computer systems to perform specific tasks intelligently. Traditional approaches to programming rely on hardcoded rules, which set out how to solve a problem, step-by-step. In contrast, machine learning systems are set a task, and given a large amount of data to use as examples (and non-examples) of how this task can be achieved, or from which to detect patterns. The system then learns how best to achieve the desired output.[56]
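To make the quoted distinction concrete, the following is a minimal, self-contained sketch in Python—our own illustration, using made-up example data rather than anything drawn from the Royal Society’s paper—contrasting a rule a programmer hardcodes with a rule a system derives from labeled examples. What the “learned” rule ends up doing is determined by the examples it is given, not by any decision logic the developer wrote down.

```python
# A toy contrast between a hardcoded rule and a "learned" rule.
# All example data below is hypothetical.

# Hardcoded rule: the programmer specifies the decision logic step-by-step.
def is_spam_hardcoded(message: str) -> bool:
    return "free money" in message.lower()

# Learned rule: the system is given labeled examples and finds, on its own,
# the single keyword whose presence best separates spam from non-spam.
examples = [
    ("claim your free money now", True),
    ("free money inside, click here", True),
    ("lunch at noon?", False),
    ("quarterly report attached", False),
]

def train(examples):
    vocabulary = {word for text, _ in examples for word in text.lower().split()}
    def accuracy(word):
        # How often does "this word appears" match the spam label in the examples?
        return sum((word in text.lower()) == label for text, label in examples)
    return max(vocabulary, key=accuracy)

learned_keyword = train(examples)

def is_spam_learned(message: str) -> bool:
    return learned_keyword in message.lower()

print(learned_keyword)                         # determined entirely by the training data
print(is_spam_learned("free money for you"))   # True here, but only because of the examples above
```

Even in this toy setting, the behavior of the learned rule is a product of the data it was trained on, which is precisely the property that allows machine-learning systems to surprise their own developers.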

By labeling unexpected behavior as inherently risky and necessitating regulatory review, we risk stifling this serendipitous aspect of AI technologies, potentially curtailing their capacity for innovation. It could contribute to a climate of regulatory caution that hampers swift progress in discovering the full potential and utility of AI tools.

C.     AI Regulation Should Follow the Model of Common Law

In a recent hearing of the U.S. Senate Judiciary Committee, OpenAI CEO Sam Altman suggested that the United States needs a central “AI regulator.”[57] As a general matter, we expect this would be unnecessarily duplicative. As we have repeatedly emphasized, the right approach to regulating AI is not the establishment of an overarching regulatory framework, but a careful examination of how AI technologies will variously interact with different parts of the existing legal system. We are not alone in this; former Special Assistant to the President for Technology and Competition Policy Tim Wu recently opined that federal agencies would be well-advised to rely on existing law and enhance that law where necessary in order to catch unexpected situations that may arise from the use of AI tools.[58]

As Judge Easterbrook famously wrote in the context of what was then called “cyberspace,” we do not need a special law for AI any more than we need a “law of the horse.”[59]

1.        An AI regulator’s potential effects on competition

More broadly, there are risks to competition that attend creating a centralized regulator for a new technology like AI. As an established player in the AI market, OpenAI might favor a strong central regulator because of the potential that such an agency could act in ways that hinder the viability of new entrants.[60] In short, an incumbent often can gain by raising its rivals’ regulatory costs, or by manipulating the relationship between its industry’s average and marginal costs. This dynamic can create strong strategic incentives for industry incumbents to promote regulation.

Economists and courts have long studied actions that generate or amplify market dominance by placing competitors at a disadvantage, especially by raising rivals’ costs.[61] There exist numerous strategies to put competitors at a disadvantage or push them out of the market without needing to compete on price. While antitrust action focuses on private actors and their ability to raise rivals’ costs, it is well-accepted that “lobbying legislatures or regulatory agencies to create regulations that disadvantage rivals” has similar effects.[62]

Suppose a new regulation imposes $1 million in annual compliance costs. Only companies that are sufficiently large and profitable will be able to cover those costs, which keeps out newcomers and smaller competitors. This effect of keeping out smaller competitors by raising their costs may more than offset the regulatory burden on the incumbent. New entrants typically produce on a smaller scale, and therefore find it more difficult to spread increased costs over a large number of units. This makes it harder for them to compete with established firms like OpenAI, which can absorb these costs more easily due to their larger scale of production.

This type of cost increase can often look benign. In United Mine Workers v. Pennington,[63] a coal corporation was alleged to have conspired with the union representing its workforce to establish higher wage rates. How could higher wages be anticompetitive? This seemingly contradictory conclusion came from University of California, Berkeley economist Oliver Williamson, who interpreted the action as an effort to maximize profits by raising entry barriers.[64] Using a model with a dominant incumbent and a fringe of other competitors, he demonstrated that wage-rate increases could lead to profit maximization if they escalated the fringe’s costs more than they did the dominant firm’s costs. Intuitively, even though one firm is dominant, the market price is set by the marginal producers, so the dominant company’s price is effectively constrained by the prices its competitors charge. If a regulation raises the competitors’ per-unit costs by $2, the dominant company will be able to raise its price by as much as $2 per unit. Even if the regulation hurts the dominant firm, so long as its price increase exceeds its additional cost, the dominant firm can profit from the regulation.
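To make this intuition concrete, the following is a short numerical sketch in Python—our own stylized illustration with hypothetical figures, not a reproduction of Williamson’s model—showing why a regulation that raises the fringe’s per-unit costs by more than it raises the dominant firm’s can increase the dominant firm’s profit.

```python
# Stylized raising-rivals'-costs arithmetic (all numbers are hypothetical).

dominant_output = 1_000_000        # units sold by the dominant firm per year
fringe_cost_increase = 2.00        # regulation raises fringe rivals' per-unit cost ($)
dominant_cost_increase = 0.50      # the dominant firm's per-unit cost rises by less ($)

# If the market price tracks the fringe's (marginal) costs, the dominant firm
# can raise its price by roughly the fringe's per-unit cost increase.
price_increase = fringe_cost_increase

extra_revenue = dominant_output * price_increase
extra_cost = dominant_output * dominant_cost_increase
profit_change = extra_revenue - extra_cost

print(f"Extra revenue:    ${extra_revenue:,.2f}")   # $2,000,000.00
print(f"Extra cost:       ${extra_cost:,.2f}")      # $500,000.00
print(f"Change in profit: ${profit_change:,.2f}")   # $1,500,000.00 -> the regulation pays off
```

The particular numbers do not matter; what matters is the asymmetry. So long as the price increase made possible by the fringe’s higher costs exceeds the dominant firm’s own cost increase, the regulation is profitable for the incumbent.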

As a result, while regulations might increase costs for OpenAI, they also serve to protect it from potential competition by raising the barriers to entry. In this sense, regulation can be seen as a strategic tool for incumbent firms to maintain or strengthen their market position. None of this analysis rests on OpenAI explicitly wanting to raise its rivals’ costs. That is just the competitive implication of such regulations. Thus, while there may be many benign reasons for a firm like OpenAI to call for regulation in good faith, the ultimate lesson presented by the economics of regulation should counsel caution when imposing strong centralized regulations on a nascent industry.

2.        A central licensing regulator for AI would be a mistake

NTIA asks:

Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers?[65]

We are not alone in the belief that imposing a licensing regime would present just such a barrier to innovation.[66] In the recent Senate hearings, the idea of a central regulator was endorsed as a means to create and administer a licensing regime.[67] Perhaps in some narrow applications of particular AI technologies, there could be specific contexts in which licensing is appropriate (e.g., in providing military weapons), but broadly speaking, we believe this is inadvisable. Owing to the highly diverse nature of AI technologies, trying to license AI development is a fraught exercise, as NTIA itself acknowledges:

A developer training an AI tool on a customer’s data may not be able to tell how that data was collected or organized, making it difficult for the developer to assure the AI system. Alternatively, the customer may use the tool in ways the developer did not foresee or intend, creating risks for the developer wanting to manage downstream use of the tool. When responsibility along this chain of AI system development and deployment is fractured, auditors must decide whose data and which relevant models to analyze, whose decisions to examine, how nested actions fit together, and what is within the audit’s frame.[68]

Rather than design a single regulation to cover AI, ostensibly administered through a single licensing regime, NTIA should acknowledge the broad set of industries currently seeking to employ a diverse range of AI products that differ in fundamental ways. The implications of AI deployment in health care, for instance, vastly differ from those in transportation. A centralized AI regulator might struggle to comprehend the nuances and intricacies of each distinct industry, thus potentially leading to ineffective or inappropriate licensing requirements.

Analogies have been drawn between AI and sectors like railroads and nuclear power, which have dedicated regulators.[69] These sectors, however, are more homogenous and discrete than the AI industry (if such an industry even exists, apart from the software industry more generally). AI is much closer to a general-purpose tool, like chemicals or combustion engines. We do not enact central regulators to license every aspect of the development and use of chemicals, but instead allow different agencies to treat their use differently as is appropriate for the context. For example, the Occupational Safety and Health Administration (OSHA) will regulate employee exposure to dangerous substances encountered in the workplace, while various consumer-protection boards will regulate the adulteration of goods.

The notion of licensing implies that companies would need to obtain permission prior to commercializing a particular piece of code. This could introduce undesirable latency into the process of bringing AI technologies to market (or, indeed, even of correcting errors in already-deployed products). Given the expansive potential to integrate AI technologies into diverse products and services, this delay could significantly impede technological progress and innovation. Given the strong global interest in the subject, such delays threaten to leave the United States behind its more energetic competitors in the race for AI innovation.

As in other consumer-protection regimes, a better approach would be to eschew licensing and instead create product-centric and harm-centric frameworks that other sectoral regulators or competition authorities could incorporate into their tailored rules for goods and services.

For instance, safety standards for medical devices should be upheld, irrespective of whether AI is involved. This product-centric regulatory approach would ensure that the desired outcomes of safety, quality, and effectiveness are achieved without stymieing innovation. With their deep industry knowledge and experience, sectoral regulators will generally be better positioned to address the unique challenges and considerations posed by AI technology deployed within their spheres of influence.

NTIA alludes to one of the risks of an overaggregated regulator when it notes that:

For some trustworthy AI goals, it will be difficult to harmonize standards across jurisdictions or within a standard-setting body, particularly if the goal involves contested moral and ethical judgements. In some contexts, not deploying AI systems at all will be the means to achieve the stated goals.[70]

Indeed, the institutional incentives that drive bureaucratic decision making often converge on this solution of preventing unexpected behavior by regulated entities.[71] But at what cost? If a regulator is unable to imagine how to negotiate the complicated tradeoffs among interested parties across all AI-infused technologies, it will act to slow or prevent the technology from coming to market. This will make us all worse off, and will only strengthen the position of our competitors on the world stage.

D.      The Impossibility of Explaining Complexity

NTIA notes that:

According to NIST, ‘‘trustworthy AI’’ systems are, among other things, ‘‘valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with their harmful bias managed.’’[72]

And in the section titled “Accountability Inputs and Transparency,” NTIA asks a series of questions designed to probe what can be considered a realistic transparency obligation for developers and deployers of AI systems. We urge NTIA to resist the idea that AI systems must be “explainable,” for the reasons set forth herein.

One of the significant challenges in AI accountability is making AI systems explainable to users. It is crucial to acknowledge that providing a clear explanation of how an AI model—such as an LLM or a diffusion model—arrives at a specific output is an inherently complex task, and may not be possible at all. As the UK Royal Society has noted in its paper on AI explainability:

Much of the recent excitement about advances in AI has come as a result of advances in statistical techniques. These approaches – including machine learning – often leverage vast amounts of data and complex algorithms to identify patterns and make predictions. This complexity, coupled with the statistical nature of the relationships between inputs that the system constructs, renders them difficult to understand, even for expert users, including the system developers.[73]

These models are designed with intricate architectures and often rely on vast troves of data to arrive at outputs, which can make it nearly impossible to reverse-engineer the process. Due to these complexities, it may be unfeasible to make AI fully explainable to users. Moreover, users themselves often do not value explainability, and may be largely content with a “black box” system when it consistently provides accurate results.[74]

Instead, to the extent that regulators demand visibility into AIs, the focus should be on the transparency of the AI-development process, system inputs, and the general guidelines for AI that developers use in preparing their models. Ultimately, we suspect that, even here, such measures will do little to resolve the inherent complexity in understanding how AI tools produce their outputs.

In a more limited sense, we should consider the utility in transparency of AI-infused technology for most products and consumers. NTIA asks:

Given the likely integration of generative AI tools such as large language models (e.g., ChatGPT) or other general-purpose AI or foundational models into downstream products, how can AI accountability mechanisms inform people about how such tools are operating and/or whether the tools comply with standards for trustworthy AI?[75]

As we note above, the proper level of analysis for AI technologies is the product into which they are incorporated. But even there, we need to ask whether it matters to an end user whether a product they are using relies on ChatGPT or a different algorithm for predictively generating text. If the product malfunctions, what matters is the malfunction and the accountability for the product. Most users do not really care whether a developer writes a program using C++ or Java, and neither should they care whether the developer incorporates a generative-AI algorithm to predict text or uses some other method of statistical analysis. The presence of an AI component becomes analytically necessary when diagnosing how something went wrong, but ex ante, it is likely irrelevant from a consumer’s perspective.

Thus, it may be the case that a more fruitful avenue for NTIA to pursue would be to examine how a strict-liability or product-liability legal regime might be developed for AI. Such frameworks put the onus on AI developers to ensure that their products behave appropriately, and they provide consumers with reassurance that they have recourse if and when they are harmed by a product that contains AI technology. Indeed, it could very well be the case that overemphasizing “trust” in AI systems could end up misleading users in important contexts.[76] This would strengthen the case for a predictable liability regime.

1.        The deepfakes problem demonstrates that we do not need a new body of law

The phenomenon of generating false depictions of individuals using advanced AI techniques—commonly called “deepfakes”—is undeniably concerning, particularly when it can be used to create detrimental false public statements,[77] facilitate fraud,[78] or create nonconsensual pornography.[79] But while deepfakes use modern technological tools, they are merely the most recent iteration of the age-old problem of forgery. Importantly, existing law already equips us with the tools needed to address the challenges posed by deepfakes, rendering many recent legislative proposals at the state level both unnecessary and potentially counterproductive. Consider one of the leading proposals offered by New York State.[80]

Existing laws in New York and at the federal level provide remedies for individuals aggrieved by deepfakes, and they do so within a legal system that has already worked to incorporate the context of these harms, as well as the restrictions of the First Amendment and related defenses. For example, defamation laws can be applied where a deepfake falsely suggests an individual has posed for an explicit photograph or video.[81] New York law also acknowledges the tort of intentional infliction of emotional distress, which likely could be applied to the unauthorized use of a person’s likeness in explicit content.[82] In addition, the tort of unjust enrichment can be brought to bear where appropriate, as can the Lanham Act §43(a), which prohibits false advertising and implied false endorsements.[83] Furthermore, victims may hold copyright in the photograph or video used in a deepfake, presenting grounds for an infringement action.[84]

Thus, while advanced deepfakes are new, neither the harms they can cause nor the law’s ability to address those harms is novel. Legislation that attempts to carve out new categories of harms in these situations is, at best, reinventing the wheel and, at worst, risks creating confusing tensions in the existing legal system.

III.      The Role of NTIA in AI Accountability

NTIA asks if “the lack of a federal law focused on AI systems [is] a barrier to effective AI accountability?”[85] In short, no, this is not a barrier, so long as the legal system is allowed to evolve to incorporate the novel challenges raised by AI technologies.

As noted in the previous section, there is a need to develop standards, both legal and technical. As we are in the early days of AI technology, the exact contours of the various legal changes that might be needed to incorporate AI tools into existing law remain unclear. At this point, we would urge NTIA—to the extent that it wants to pursue regulatory, licensing, transparency, and other similar obligations—to develop a series of workshops through which leading technology and legal experts could confer on developing a vision for how such legal changes would work in practice.

By gathering stakeholders and fostering an ongoing dialogue, NTIA can help to create a collaborative environment in which organizations can share knowledge, experiences, and innovations to address AI accountability and its associated challenges. By promoting industry collaboration, NTIA could also help build a foundation of trust and cooperation among organizations involved in AI development and deployment. This, in turn, will facilitate the establishment of standards and best practices that address specific concerns, while mitigating the risk of overregulation that could stifle innovation and progress. In this capacity, NTIA should focus on encouraging the development of context-specific best practices that prioritize the containment of identifiable harms. By fostering a collaborative atmosphere, the agency can support a dynamic and adaptive AI ecosystem that is capable of addressing evolving challenges while safeguarding the societal benefits of AI advancements.

In addressing AI accountability, it is essential for NTIA to adopt a harm-focused framework that targets the negative impacts of AI systems rather than the technology itself. This approach would recognize that AI technology can have diverse applications, with consequences that will depend on the context in which they are used. By prioritizing the mitigation of specific harms, NTIA can ensure that regulations are tailored to address real-world outcomes and provide a more targeted and effective regulatory response.

A harm-focused framework also acknowledges that different AI technologies pose differing levels of risk and potential for misuse. NTIA can play a proactive role in guiding the creation of policies that reflect these nuances, striking a balance between encouraging innovation and ensuring the responsible development and use of AI. By centering the discussion on actual harms and their causes, NTIA can foster meaningful dialogue among stakeholders and facilitate the development of industry best practices designed to minimize negative consequences.

Moreover, this approach ensures that AI accountability policies are consistent with existing laws and regulations, as it emphasizes the need to assess AI-related harms within the context of the broader legal landscape. By aligning AI accountability measures with other established regulatory frameworks, NTIA can provide clear guidance to AI developers and users, while avoiding redundancy and conflicting regulations. Ultimately, a harm-focused framework allows NTIA to better address the unique challenges posed by AI technology and to foster an assurance ecosystem that prioritizes safety, ethics, and legal compliance without stifling innovation.

IV.    Conclusion

Another risk of the current AI hysteria is that fatigue will set in, and the public will become numbed to potential harms. Overall, this may shrink the public’s appetite for the kinds of legal changes that will be needed to address those actual harms that do emerge. News headlines that push doomsday rhetoric and a community of experts all too eager to respond to the market incentives for apocalyptic projections only exacerbate the risk of that outcome. A recent one-line letter, signed by AI scientists and other notable figures, highlights the problem:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.[86]

Novel harms absolutely will emerge from products that employ AI, as has been the case for every new technology. The introduction of the automobile, for example, created new risks of death and injury from high-speed collisions. But rhetoric about AI being an existential risk on the level of a pandemic or nuclear war is irresponsible.

Perhaps one of the most important positions NTIA can assume, therefore, is that of a calm, collected expert agency that helps restrain the worst impulses to regulate AI out of existence due to blind fear.

In essence, the key challenge confronting policymakers lies in mitigating the actual risks presented by AI while safeguarding the substantial benefits it offers. It is undeniable that the evolution of AI will bring about disruption and may provide a conduit for malevolent actors, just as technologies like the printing press and the internet have done in the past. This does not, however, merit an overly cautious stance that would suppress AI’s potential benefits.

As we formulate policy, it is crucial to eschew dystopian science-fiction narratives and instead ground our approach in realistic scenarios. The proposition that computer systems, even those as advanced as AI tools, could spell the end of humanity lacks substantial grounding.

The current state of affairs represents a geo-economic competition to harness the benefits of AI in myriad domains. Contrary to fears that AI poses an existential risk, the real danger may well lie in attempts to overregulate and stifle the technology’s potential. The indiscriminate imposition of regulations could inadvertently thwart AI advancements, resulting in a loss of potential benefits that could prove far more detrimental to social welfare than the harms such regulations seek to prevent.

[1] AI Accountability Policy Request for Comment, Docket No. 230407-0093, 88 FR 22433, National Telecommunications and Information Administration (Apr. 14, 2023) (“RFC”).

[2] Indeed, this approach appears to be the default position of many policymakers around the world. See, e.g., Mikolaj Barczentewicz, EU’s Compromise AI Legislation Remains Fundamentally Flawed, Truth on the Market (Feb. 8, 2022), https://truthonthemarket.com/2022/02/08/eus-compromise-ai-legislation-remains-fundamentally-flawed. The fundamental flaw of this approach is that, while AI techniques use statistics, “statistics also includes areas of study which are not concerned with creating algorithms that can learn from data to make predictions or decisions. While many core concepts in machine learning have their roots in data science and statistics, some of its advanced analytical capabilities do not naturally overlap with these disciplines.” See Explainable AI: The Basics, The Royal Society (2019), at 7, available at https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf (“Royal Society Briefing”).

[3] John P. Holdren, Cass R. Sunstein, & Islam A. Siddiqui, Memorandum for the Heads of Executive Departments and Agencies, Executive Office of the White House (Jun. 9, 2011), available at https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/for-agencies/nanotechnology-regulation-and-oversight-principles.pdf.

[4] Id.

[5] Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. L. Forum 207 (1996).

[6] LLMs are a type of artificial-intelligence model designed to parse and generate human language at a highly sophisticated level. The deployment of LLMs has driven progress in fields such as conversational AI, automated content creation, and improved language understanding across a multitude of applications, even suggesting that these models might represent an initial step toward the achievement of artificial general intelligence (AGI). See Alejandro Peña et al., Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs, arXiv (Jun. 5, 2023), https://arxiv.org/abs/2306.02864v1.

[7] Diffusion models are a type of generative AI built from a hierarchy of denoising autoencoders, which can achieve state-of-the-art results in such tasks as class-conditional image synthesis, super-resolution, inpainting, colorization, and stroke-based synthesis. Unlike other generative models, these likelihood-based models do not exhibit mode collapse and training instabilities. By leveraging parameter sharing, they can model extraordinarily complex distributions of natural images without necessitating billions of parameters, as in autoregressive models. See Robin Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, arXiv (Dec. 20, 2021), https://arxiv.org/abs/2112.10752.

[8] Recommender systems are advanced tools currently used across a wide array of applications, including web services, books, e-learning, tourism, movies, music, e-commerce, news, and television programs, where they provide personalized recommendations to users. Despite recent advancements, there is a pressing need for further improvements and research in order to offer more efficient recommendations that can be applied across a broader range of applications. See Deepjyoti Roy & Mala Dutta, A Systematic Review and Research Perspective on Recommender Systems, 9 J. Big Data 59 (2022), available at https://journalofbigdata.springeropen.com/counter/pdf/10.1186/s40537-022-00592-5.pdf.

[9] AGI refers to hypothetical future AI systems that possess the ability to understand or learn any intellectual task that a human being can do. While the realization of AGI remains uncertain, it is distinct from the more specialized AI systems currently in use. For a skeptical take on the possibility of AGI, see Roger Penrose, The Emperor’s New Mind (Oxford Univ. Press 1989).

[10] Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890).

[11] Id. at 200.

[12] Id. at 193.

[13] Id. at 196-97.

[14] Notably, courts do try to place a value on emotional distress and related harms. But because these sorts of violations are deeply personal, attempts to quantify such harms in monetary terms are rarely satisfactory to the parties involved.

[15] Martin Giles, Bounty Hunters Tracked People Secretly Using US Phone Giants’ Location Data, MIT Tech. Rev. (Feb. 7, 2019), https://www.technologyreview.com/2019/02/07/137550/bounty-hunters-tracked-people-secretly-using-us-phone-giants-location-data.

[16] See, e.g., Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 439 (1984) (The Supreme Court imported the doctrine of “substantial noninfringing uses” into copyright law from patent law).

[17] A notable example is how the Patriot Act, written to combat terrorism, was ultimately used to take down a sitting governor in a prostitution scandal. See Noam Biale, Eliot Spitzer: From Steamroller to Steamrolled, ACLU, Oct. 29, 2007, https://www.aclu.org/news/national-security/eliot-spitzer-steamroller-steamrolled.

[18] RFC at 22437.

[19] Id. at 22433.

[20] Id. at 22436.

[21] Indeed, the RFC acknowledges that, even as some groups are developing techniques to evaluate AI systems for bias or disparate impact, “It should be recognized that for some features of trustworthy AI, consensus standards may be difficult or impossible to create.” RFC at 22437. Arguably, this problem is inherent to constructing an overaggregated regulator, particularly one that will be asked to consult a broad public on standards and rulemaking.

[22] Id. at 22439.

[23] Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. at 417.

[24] Id.

[25] Id.

[26] Id. at 456.

[27] Id.

[28] See, e.g., Defendant Indicted for Camcording Films in Movie Theaters and for Distributing the Films on Computer Networks First Prosecution Under Newly-Enacted Family Entertainment Copyright Act, U.S. Dept of Justice (Aug. 4, 2005), available at https://www.justice.gov/archive/criminal/cybercrime/press-releases/2005/salisburyCharge.htm.

[29] 17 U.S.C. 106.

[30] See 17 U.S.C. 107; Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 590 (1994) (“Since fair use is an affirmative defense, its proponent would have difficulty carrying the burden of demonstrating fair use without favorable evidence about relevant markets.”).

[31] See, e.g., N.Y. Penal Law § 265.01; Wash. Rev. Code Ann. § 9.41.250; Mass. Gen. Laws Ann. ch. 269, § 10(b).

[32] See, e.g., 18 U.S.C.A. § 922(g).

[33] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final. The latest proposed text of the AI Act is available at https://www.europarl.europa.eu/doceo/document/A-9-2023-0188_EN.html.

[34] Id. at amendment 36 recital 14.

[35] Id.

[36] Id.

[37] See e.g., Mikolaj Barczentewicz, supra note 2.

[38] Id.

[39] Foo Yun Chee, Martin Coulter & Supantha Mukherjee, EU Lawmakers’ Committees Agree Tougher Draft AI Rules, Reuters (May 11, 2023), https://www.reuters.com/technology/eu-lawmakers-committees-agree-tougher-draft-ai-rules-2023-05-11.

[40] See infra at notes 71-77 and accompanying text.

[41] Explainable AI: The Basics, supra note 2 at 8.

[42] See e.g., Delos Prime, EU AI Act to Target US Open Source Software, Technomancers.ai (May 13, 2023), https://technomancers.ai/eu-ai-act-to-target-us-open-source-software.

[43] Id.

[44] To be clear, it is not certain how such an extraterritorial effect will be obtained, and this is just a proposed amendment to the law. Likely, there will need to be some form of jurisdictional hook, i.e., that this applies only to firms with an EU presence.

[45]  Eliezer Yudkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, Time (Mar. 29, 2023), https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough.

[46] See, e.g., Kiran Stacey, UK Should Play Leading Role on Global AI Guidelines, Sunak to Tell Biden, The Guardian (May 31, 2023), https://www.theguardian.com/technology/2023/may/31/uk-should-play-leading-role-in-developing-ai-global-guidelines-sunak-to-tell-biden.

[47] See, e.g., Matthew J. Neidell, Shinsuke Uchida & Marcella Veronesi, The Unintended Effects from Halting Nuclear Power Production: Evidence from Fukushima Daiichi Accident, NBER Working Paper 26395 (2022), https://www.nber.org/papers/w26395 (Japan abandoning nuclear energy in the wake of the Fukushima disaster led to decreased energy consumption, which in turn led to increased mortality).

[48] See, e.g., Will Knight, Some Glimpse AGI in ChatGPT. Others Call It a Mirage, Wired (Apr. 10, 2023), https://www.wired.com/story/chatgpt-agi-intelligence (“GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input.”)

[49] Joseph A. Schumpeter, Capitalism, Socialism And Democracy 74 (1976).

[50] See, e.g., Jerry Hausman, Valuation of New Goods Under Perfect and Imperfect Competition, in The Economics Of New Goods 209–67 (Bresnahan & Gordon eds., 1997).

[51] William D. Nordhaus, Schumpeterian Profits in the American Economy: Theory and Measurement, NBER Working Paper No. 10433 (Apr. 2004) at 1, http://www.nber.org/papers/w10433 (“We conclude that only a miniscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.”).

[52] See generally Oliver E. Williamson, Markets And Hierarchies, Analysis And Antitrust Implications: A Study In The Economics Of Internal Organization (1975).

[53] See, e.g., Nassim Nicholas Taleb, Antifragile: Things That Gain From Disorder (2012) (“In action, [via negativa] is a recipe for what to avoid, what not to do.”).

[54] Adam Thierer, Permissionless Innovation: The Continuing Case For Comprehensive Technological Freedom (2016).

[55] See, e.g., Artificial Intelligence Act, supra note 33, at amendment 112 recital 66.

[56] Explainable AI: The Basics, supra note 2 at 6.

[57] Cecilia Kang, OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing, NY Times (May 16, 2023), https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html; see also Mike Solana & Nick Russo, Regulate Me, Daddy, Pirate Wires (May 23, 2023), https://www.piratewires.com/p/regulate-me-daddy.

[58] Cristiano Lima, Biden’s Former Tech Adviser on What Washington is Missing about AI, The Washington Post (May 30, 2023), https://www.washingtonpost.com/politics/2023/05/30/biden-former-tech-adviser-what-washington-is-missing-about-ai.

[59] Frank H. Easterbrook, supra note 5.

[60]  See Lima, supra note 58 (“I’m not in favor of an approach that would create heavy compliance costs for market entry and that would sort of regulate more abstract harms.”)

[61] Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[62] Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[63] United Mine Workers of Am. v. Pennington, 381 U.S. 657, 661 (1965).

[64] Oliver E. Williamson, Wage Rates as a Barrier to Entry: The Pennington Case in Perspective, 82:1 Q. J. Econ. 85 (1968), https://doi.org/10.2307/1882246.

[65] RFC at 22439.

[66] See, e.g., Lima, supra note 58 (“Licensing regimes are the death of competition in most places they operate”).

[67] Kang, supra note 57; Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Technology, and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023) (statement of Sam Altman, at 11), available at https://www.judiciary.senate.gov/download/2023-05-16-testimony-altman.

[68] RFC at 22437.

[69] See, e.g., Transcript: Senate Judiciary Subcommittee Hearing on Oversight of AI, Tech Policy Press (May 16, 2023), https://techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai (“So what I’m trying to do is make sure that you just can’t go build a nuclear power plant. Hey Bob, what would you like to do today? Let’s go build a nuclear power plant. You have a nuclear regulatory commission that governs how you build a plant and is licensed.”)

[70] RFC at 22438.

[71] See, e.g., Raymond J. March, The FDA and the COVID-19: A Political Economy Perspective, 87(4) S. Econ. J. 1210, 1213-16 (2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8012986 (discussing the political economy that drives incentives of bureaucratic agencies in the context of the FDA’s drug-approval process).

[72] RFC at 22434.

[73] Explainable AI: The Basics, supra, note 2 at 12.

[74] Id. at 20.

[75] RFC at 22439.

[76] Explainable AI: The Basics, supra note 2 at 22. (“Not only is the link between explanations and trust complex, but trust in a system may not always be a desirable outcome. There is a risk that, if a system produces convincing but misleading explanations, users might develop a false sense of confidence or understanding, mistakenly believing it is trustworthy as a result.”)

[77] Kate Conger, Hackers’ Fake Claims of Ukrainian Surrender Aren’t Fooling Anyone. So What’s Their Goal?, NY Times (Apr. 5, 2022), https://www.nytimes.com/2022/04/05/us/politics/ukraine-russia-hackers.html.

[78] Pranshu Verma, They Thought Loved Ones Were Calling for Help. It Was an AI Scam, The Washington Post (Mar. 5, 2023), https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam.

[79] Video: Deepfake Porn Booms in the Age of A.I., NBC News (Apr. 28, 2023), https://www.nbcnews.com/now/video/deepfake-porn-booms-in-the-age-of-a-i-171726917562.

[80] S5857B, NY State Senate (2018), https://www.nysenate.gov/legislation/bills/2017/s5857/amendment/b.

[81] See, e.g., Rejent v. Liberation Publications, Inc., 197 A.D.2d 240, 244–45 (1994); see also, Leser v. Penido, 62 A.D.3d 510, 510–11 (2009).

[82] See, e.g., Howell v. New York Post Co., 612 N.E.2d 699 (1993).

[83] See, e.g., Mandarin Trading Ltd. v. Wildenstein, 944 N.E.2d 1104 (2011); 15 U.S.C. §1125(a).

[84] 17 U.S.C. 106.

[85] RFC at 22440.

[86] Statement on AI Risk, Center for AI Safety, https://www.safe.ai/statement-on-ai-risk (last visited Jun. 7, 2023).
