ICLE Comments to NTIA on Dual-Use Foundation AI Models with Widely Available Model Weights

I. Introduction

We thank the National Telecommunications and Information Administration (NTIA) for the opportunity to contribute to this request for comments (RFC) in the “Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights” proceeding. In these comments, we endeavor to offer recommendations to foster the innovative and responsible production of artificial intelligence (AI), encompassing both open-source and proprietary models. Our comments are guided by a belief in the transformative potential of AI, while recognizing NTIA’s critical role in guiding the development of regulations that not only protect consumers but also enable this dynamic field to flourish. The agency should seek to champion a balanced and forward-looking approach toward AI technologies that allows them to evolve in ways that maximize their social benefits, while navigating the complexities and challenges inherent in their deployment.

NTIA’s question “How should [the] potentially competing interests of innovation, competition, and security be addressed or balanced?”[1] gets to the heart of ongoing debates about AI regulation. There is no panacea to be discovered, as all regulatory choices require balancing tradeoffs. It is crucial to bear this in mind when evaluating, e.g., regulatory proposals that implicitly treat AI as inherently dangerous and regard it as obvious that stringent regulation is the only effective strategy to mitigate such risks.[2] Such presumptions discount AI’s unknown but potentially enormous capacity to produce innovation, and inadequately account for other tradeoffs inherent to imposing a risk-based framework (e.g., requiring disclosure of trade secrets or particular kinds of transparency that could yield new cybersecurity attack vectors). Adopting an overly cautious stance risks not only stifling AI’s evolution, but also precluding a full exploration of its potential to foster social, economic, and technological advancement. A more restrictive regulatory environment may also render AI technologies more homogeneous and smother development of the kinds of diverse AI applications needed to foster robust competition and innovation.

We observe this problematic framing in the executive order (EO) that gave rise to this RFC.[3] The EO repeatedly proclaims the importance of “[t]he responsible development and use of AI” in order to “mitigat[e] its substantial risks.”[4] Specifically, the order highlights concerns over “dual-use foundation models”—i.e., AI systems that, while beneficial, could pose serious risks to national security, national economic security, national public health, or public safety.[5] Concerningly, one of the categories the EO flags as illicit “dual use” is systems “permitting the evasion of human control or oversight through means of deception or obfuscation.”[6] This open-ended category could be interpreted so broadly that essentially any general-purpose generative-AI system would qualify.

The EO also repeatedly distinguishes “open” versus “closed” approaches to AI development, while calling for “responsible” innovation and competition.[7] On our reading, the emphasis the EO places on this distinction raises alarm bells about the administration’s inclination to stifle innovation through overly prescriptive regulatory frameworks, diminishment of the intellectual property rights that offer incentives for innovation, and regulatory capture that favors incumbents over new entrants. In favoring one model of AI development over another, the EO’s prescriptions could inadvertently hamper the dynamic competitive processes that are crucial both for technological progress and for the discovery of solutions to the challenges that AI technology poses.

Given the inchoate nature of AI technology—to say nothing of the uncertain markets in which that technology will ultimately be deployed and commercialized—NTIA has an important role to play in elucidating for policymakers the nuances that might lead innovators to choose an open or closed development model, without presuming that one model is inherently better than the other—or that either is necessarily “dangerous.” Ultimately, the preponderance of AI risks will almost certainly emerge idiosyncratically. It will be incumbent on policymakers to address such risks in an iterative fashion as they become apparent. For now, it is critical to resist the urge to enshrine crude and blunt categories for the heterogeneous suite of technologies currently gathered under the broad banner of “AI.”

Section II of these comments highlights the importance of grounding AI regulation in actual harms, rather than speculative risks, while outlining the diversity of existing AI technologies and the need for tailored approaches. Section III starts with discussion of some of the benefits and challenges posed by both open and closed approaches to AI development, while cautioning against overly prescriptive definitions of “openness” and advocating flexibility in regulatory frameworks. It proceeds to examine the EO’s prescription to regulate so-called “dual-use” foundation models, underscoring some potential unintended consequences for open-source AI development and international collaboration. Section IV offers some principles to craft an effective regulatory model for AI, including distinguishing between low-risk and high-risk applications, avoiding static regulatory approaches, and adopting adaptive mechanisms like regulatory sandboxes and iterative rulemaking. Section V concludes.

II. Risk Versus Harm in AI Regulation

In many of the debates surrounding AI regulation, disproportionate focus is placed on the need to mitigate risks, without sufficient consideration of the immense benefits that AI technologies could yield. Moreover, because these putative risks remain largely hypothetical, proposals to regulate AI descend quickly into an exercise in shadowboxing.

Indeed, there is no single coherent definition of what even constitutes “AI.” The term encompasses a wide array of technologies, methodologies, and applications, each with distinct characteristics, capabilities, and implications for society. From foundational models that can generate human-like text, to algorithms capable of diagnosing diseases with greater accuracy than human doctors, to “simple” algorithms that facilitate a more tailored online experience, AI applications and their underlying technologies are as varied as they are transformative.

This diversity has profound implications for the regulation and development of AI. Very different regulatory considerations are relevant to AI systems designed for autonomous vehicles than for those used in financial algorithms or creative-content generation. Each application domain comes with its own set of risks, benefits, ethical dilemmas, and potential social impacts, necessitating tailored approaches to each use case. And none of these properties of AI map clearly onto the “open” and “closed” designations highlighted by the EO and this RFC. This counsels for focus on specific domains and specific harms, rather than how such technologies are developed.[8]

As in prior episodes of fast-evolving technologies, what is considered cutting-edge AI today may be obsolete tomorrow. This rapid pace of innovation further complicates the task of crafting policies and regulations that will be both effective and enduring. Policymakers and regulators must navigate this terrain with a nuanced understanding of AI’s multifaceted nature, including by embracing flexible and adaptive regulatory frameworks that can accommodate AI’s continuing evolution.[9] A one-size-fits-all approach could inadvertently stifle innovation or entrench the dominance of a few large players by imposing barriers that disproportionately affect smaller entities or emerging technologies.

Experts in law and economics have long scrutinized both market conduct and regulatory rent seeking that serve to enhance or consolidate market power by disadvantaging competitors, particularly through increasing the costs incurred by rivals.[10] Various tactics may be employed to undermine competitors or exclude them from the market that do not involve direct price competition. It is widely recognized that “engaging with legislative bodies or regulatory authorities to enact regulations that negatively impact competitors” produces analogous outcomes.[11] It is therefore critical that the emerging markets for AI technologies not engender opportunities for firms to acquire regulatory leverage over rivals. Instead, recognizing the plurality of AI technologies and encouraging a multitude of approaches to AI development could help to cultivate a more vibrant and competitive ecosystem, driving technological progress forward and maximizing AI’s potential social benefits.

This overarching approach counsels skepticism about risk-based regulatory frameworks that fail to acknowledge how the theoretical harms of one type of AI system may be entirely different from those of another. Obviously, the regulation of autonomous drones is a very different sort of problem than the regulation of predictive policing or automated homework tutors. Even within a single circumscribed domain of generative AI—such as “smart chatbots” like ChatGPT or Claude—different applications may present entirely different kinds of challenges. A highly purpose-built version of such a system might be employed by government researchers to develop new materiel for the U.S. Armed Forces, while a general-purpose commercial chatbot would employ layers of protection to ensure that ordinary users couldn’t learn how to make advanced weaponry. Rather than treating “chatbots” as possible vectors for weapons development, a more appropriate focus would target high-capability systems expressly designed to assist in developing such weaponry. Were a general-purpose chatbot to inadvertently reveal some information on building weapons, all incentives would direct the AI’s creators to treat that as a bug to fix, not a feature to expand.

Take, for example, the recent public response to the much less problematic AI-system malfunctions that accompanied Google’s release of its Gemini program.[12] Gemini was found to generate historically inaccurate images, such as ethnically diverse U.S. senators from the 1800s, including women.[13] Google quickly acknowledged that it did not intend for Gemini to create inaccurate historical images and turned off the image-generation feature to allow time for the company to work on significant improvements before re-enabling it.[14] While Google blundered in its initial release, it had every incentive to discover and remedy the problem. The market response provided further incentive for Google to get it right in the future.[15] Placing the development of such systems under regulatory scrutiny because some users might be able to jailbreak a model and generate some undesirable material would create disincentives to the production of AI systems more generally, with little gained in terms of public safety.

Rather than focus on the speculative risks of AI, it is essential to ground regulation in the need to address tangible harms that stem from the observed impacts of AI technologies on society. Moreover, focusing on realistic harms would facilitate a more dynamic and responsive regulatory approach. As AI technologies evolve and new applications emerge, so too will the potential harms. A regulatory framework that prioritizes actual harms can adapt more readily to these changes, enabling regulators to update or modify policies in response to new evidence or social impacts. This flexibility is particularly important for a field like AI, where technological advancements could quickly outpace regulation, creating gaps in oversight that may leave individuals and communities vulnerable to harm.

Furthermore, like any other body of regulatory law, AI regulation must be grounded in empirical evidence and data-driven decision making. Demanding a solid evidentiary basis as a threshold for intervention would help policymakers to avoid the pitfalls of reacting to sensationalized or unfounded AI fears. This would not only enhance regulators’ credibility with stakeholders, but would also ensure that resources are dedicated to addressing the most pressing and substantial issues arising from the development of AI.

III. The Regulation of Foundation Models

NTIA is right to highlight the tremendous promise that attends the open development of AI technologies:

Dual use foundation models with widely available weights (referred to here as open foundation models) could play a key role in fostering growth among less resourced actors, helping to widely share access to AI’s benefits…. Open foundation models can be readily adapted and fine-tuned to specific tasks and possibly make it easier for system developers to scrutinize the role foundation models play in larger AI systems, which is important for rights- and safety-impacting AI systems (e.g. healthcare, education, housing, criminal justice, online platforms etc.)

…Historically, widely available programming libraries have given researchers the ability to simultaneously run and understand algorithms created by other programmers. Researchers and journals have supported the movement towards open science, which includes sharing research artifacts like the data and code required to reproduce results.[16]

The RFC proceeds to seek input on how to define “open” and “widely available.”[17] These, however, are the wrong questions. NTIA should instead proceed from the assumption that there are no harms inherent to either “open” or “closed” development models; it should be seeking input on anything that might give rise to discrete harms in either open or closed systems.

NTIA can play a valuable role by recommending useful alterations to existing law where gaps currently exist, regardless of the business or distribution model employed by the AI developer. In short, there is nothing necessarily more or less harmful about adopting an “open” or a “closed” approach to software systems. The decision to pursue one path over the other will be made based on the relevant tradeoffs that particular firms face. Embedding such distinctions in regulation is arbitrary, at best, and counterproductive to the fruitful development of AI, at worst.

A. ‘Open’ or ‘Widely Available’ Model Weights

To the extent that NTIA is committed to drawing distinctions between “open” and “closed” approaches to developing foundation models, it should avoid overly prescriptive definitions of what constitutes “open” or “widely available” model weights that could significantly hamper the progress and utility of AI technologies.

Imposing narrow definitions risks creating artificial boundaries that fail to accurately reflect AI’s technical and operational realities. Such definitions could also inadvertently exclude or marginalize innovative AI models that fall outside those rigid parameters, despite their potential to contribute positively to technological advancement and social well-being. For instance, a definition of “open” that requires complete public accessibility without any form of control or restriction might discourage organizations from sharing their models, fearing misuse or loss of intellectual property.

Moreover, prescriptive definitions could stifle the organic growth and evolution of AI technologies. The AI field is characterized by its rapid pace of change, where today’s cutting-edge models may become tomorrow’s basic tools. Prescribing fixed criteria for what constitutes “openness” or “widely available” risks anchoring the regulatory landscape to this specific moment in time, leaving the regulatory framework less able to adapt to future developments and innovations.

Given AI developers’ vast array of applications, methodologies, and goals, it is imperative that any definitions of “open” or “widely available” model weights embrace flexibility. A flexible approach would acknowledge how the various stakeholders within the AI ecosystem have differing needs, resources, and objectives, from individual developers and academic researchers to startups and large enterprises. A one-size-fits-all definition of “openness” would fail to accommodate this diversity, potentially privileging certain forms of innovation over others and skewing the development of AI technologies in ways that may not align with broader social needs.

Moreover, flexibility in defining “open” and “widely available” must allow for nuanced understandings of accessibility and control. There can, for example, be legitimate reasons to limit openness, such as protecting sensitive data, ensuring security, and respecting intellectual-property rights, while still promoting a culture of collaboration and knowledge sharing. A flexible regulatory approach would seek a balanced ecosystem where the benefits of open AI models are maximized, and potential risks are managed effectively.

B. The Benefits of ‘Open’ vs ‘Closed’ Business Models

NTIA asks:

What benefits do open model weights offer for competition and innovation, both in the AI marketplace and in other areas of the economy? In what ways can open dual-use foundation models enable or enhance scientific research, as well as education/training in computer science and related fields?[18]

An open approach to AI development has obvious benefits, as NTIA has itself acknowledged in other contexts.[19] Open-foundation AI models represent a transformative force, characterized by their accessibility, adaptability, and potential for widespread application across various sectors. The openness of these models may serve to foster an environment conducive to innovation, wherein developers, researchers, and entrepreneurs can build on existing technologies to create novel solutions tailored to diverse needs and challenges.

The inherent flexibility of open-foundation models can also catalyze a competitive market, encouraging a healthy ecosystem where entities ranging from startups to established corporations may all participate on roughly equal footing. By lowering some entry barriers related to access to basic AI technologies, this competitive environment can further drive technological advancements and price efficiencies, ultimately benefiting consumers and society at large.

But more “closed” approaches can also prove very valuable. As NTIA notes in this RFC, it is rarely the case that a firm pursues a purely open or closed approach. These terms exist along a continuum, and firms blend models as necessary.[20] And just as firms readily mix elements of open and closed business models, a regulator should be agnostic about the precise mix that firms employ, which ultimately must align with the realities of market dynamics and consumer preferences.

Both open and closed approaches offer distinct benefits and potential challenges. For instance, open approaches might excel in fostering a broad and diverse ecosystem of applications, thereby appealing to users and developers who value customization and variety. They can also facilitate a more rapid dissemination of innovation, as they typically impose fewer restrictions on the development and distribution of new applications. Conversely, closed approaches, with their curated ecosystems, often provide enhanced security, privacy, and a more streamlined user experience. This can be particularly attractive to users less inclined to navigate the complexities of open systems. Under the right conditions, closed systems can likewise foster a healthy ecosystem of complementary products.

The experience of modern digital platforms demonstrates that there is no universally optimal approach to structuring business activities, thus illustrating the tradeoffs inherent in choosing among open and closed business models. The optimal choice depends on the specific needs and preferences of the relevant market participants. As Jonathan M. Barnett has noted:

Open systems may yield no net social gain over closed systems, can pose a net social loss under certain circumstances, and . . . can impose a net social gain under yet other circumstances.[21]

Similar considerations apply in the realm of AI development. Closed or semi-closed ecosystems can offer such advantages as enhanced security and curated offerings, which may appeal to certain users and developers. These benefits, however, may come at the cost of potentially limited innovation, as a firm must rely on its own internal processes for research and development. Open models, on the other hand, while fostering greater collaboration and creativity, may also introduce risks related to quality control, intellectual-property protection, and a host of other concerns that may be better controlled in a closed business model. Even along innovation dimensions, closed platforms can in many cases outperform open models.

With respect to digital platforms like the App Store and Google Play Store, there is a “fundamental welfare tradeoff between two-sided proprietary…platforms and two-sided platforms which allow ‘free entry’ on both sides of the market.”[22] Consequently, “it is by no means obvious which type of platform will create higher product variety, consumer adoption and total social welfare.”[23]

To take another example, consider the persistently low adoption rates for consumer versions of the open-source Linux operating system, versus more popular alternatives like Windows or MacOS.[24] A closed model like Apple’s MacOS is able to outcompete open solutions by better leveraging network effects and developing a close relationship with end users.[25] Even in this example, adoption of open versus closed models varies across user types, with, e.g., developers showing a strong preference for Linux over Mac, and only a slight preference for Windows over Linux.[26] This underscores the point that the suitability of an open or closed model varies not only by firm and product, nor even solely by user, but by the unique fit of a particular model for a particular user in a particular context. Many of those Linux-using developers will likely not use it on their home computing device, for example, even if they prefer it for work.

The dynamics among consumers and developers further complicate prevailing preferences for open or closed models. For some users, the security and quality assurance provided by closed ecosystems outweigh the benefits of open systems’ flexibility. On the developer side, more controlled ecosystems can lower barriers to entry by smoothing the transaction costs associated with developing and marketing applications, which can democratize application development and potentially lead to greater innovation within those ecosystems. Moreover, distinctions between open and closed models can play a critical role in shaping inter-brand competition. A regulator placing its thumb on the business-model scale would push the relevant markets toward less choice and lower overall welfare.[27]

By differentiating themselves through a focus on ease-of-use, quality, security, and user experience, closed systems contribute to a vibrant competitive landscape where consumers have clear choices between differing “brands” of AI. Forcing an AI developer to adopt practices that align with a regulator’s preconceptions about the relative value of “open” and “closed” risks homogenizing the market and diminishing the very competition that spurs innovation and consumer choice.

Consider some of the practical benefits sought by deployers when choosing between open and closed models. For example, it’s not straightforward to say that closed is inherently better than open when considering issues of data sharing or security; even here, there are tradeoffs. Open innovation in AI—characterized by the sharing of data, algorithms, and methodologies within the research community and beyond—can mitigate many of the risks associated with model development. This openness fosters a culture of transparency and accountability, where AI models and their applications are subject to scrutiny by a broad community of experts, practitioners, and the general public. This collective oversight can help to identify and address potential safety and security concerns early in the development process, thus enhancing AI technologies’ overall trustworthiness.

By contrast, a closed system may implement and enforce standardized security protocols more quickly. A closed system may have a sharper, more centralized focus on providing data security to users, which may perform better along some dimensions. And while the availability of code may provide security in some contexts, in other circumstances, closed systems perform better.[28]

In considering ethical AI development, different types of firms should be free to experiment with different approaches, even blending them where appropriate. For example, Anthropic’s “Collective Constitutional AI” approach adopts what is arguably a “semi-open” model, blending proprietary elements with certain aspects of openness to foster innovation, while also maintaining a level of control.[29] This model might strike an appropriate balance, in that it ensures some degree of proprietary innovation and competitive advantage while still benefiting from community feedback and collaboration.

On the other hand, fully open-source development could lead to a different, potentially superior result that meets a broader set of needs through community-driven evolution and iteration. There is no way to determine, ex ante, that either an open or a closed approach to AI development will inherently provide superior results for developing “ethical” AI. Each has its place, and, most likely, the optimal solutions will involve elements of both approaches.

In essence, codifying a regulatory preference for one business model over the other would oversimplify the intricate balance of tradeoffs inherent to platform ecosystems. Economic theory and empirical evidence suggest that both open and closed platforms can drive innovation, serve consumer interests, and stimulate healthy competition, with all of these considerations depending heavily on context. Regulators should therefore aim for flexible policies that support coexistence of diverse business models, fostering an environment where innovation can thrive across the continuum of openness.

C. Dual-Use Foundation Models and Transparency Requirements

The EO and the RFC both focus extensively on so-called “dual-use” foundation models:

Foundation models are typically defined as, “powerful models that can be fine-tuned and used for multiple purposes.” Under the Executive Order, a “dual-use foundation model” is “an AI model that is trained on broad data; generally uses self-supervision, contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters….”[30]

But this framing will likely do more harm than good. As noted above, the terms “AI” or “AI model” are frequently invoked to refer to very different types of systems. Further defining these models as “dual use” is also unhelpful, as virtually any tool in existence can be “dual use” in this sense. Certainly, from a certain perspective, all software—particularly highly automated software—can pose a serious risk to “national security” or “safety.” Encryption and other privacy-protecting tools certainly fit this definition.[31] While it is crucial to mitigate harms associated with the misuse of AI technologies, the blanket treatment of all foundation models under this category is overly simplistic.

The EO identifies certain clear risks, such as the possibility that models could aid in the creation of chemical, biological, or nuclear weaponry. These categories are obvious subjects for regulatory control, but the EO then appears to open a giant definitional loophole that threatens to subsume virtually any useful AI system. It employs expansive terminology to describe a more generalized threat—specifically, that dual-use models could “[permit] the evasion of human control or oversight through means of deception or obfuscation.”[32] Such language could encompass a wide array of general-purpose AI models. Furthermore, by labeling systems capable of bypassing human decision making as “dual use,” the order implicitly suggests that all AI could pose such risk as warrants national-security levels of scrutiny.

Given the EO’s broad definition of AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” numerous software systems not typically even considered AI might be categorized as “dual-use” models.[33] Essentially, any sufficiently sophisticated statistical-analysis tool could qualify under this definition.
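
To illustrate how far that language could reach, consider a deliberately mundane statistical script. The sketch below is hypothetical in every particular (invented numbers, a generic scikit-learn regression), yet under a literal reading it is a “machine-based system” that “make[s] predictions” for a “human-defined objective”—here, planning next week’s inventory—whose output influences a real-world environment:

```python
# Purely illustrative, with made-up numbers: a routine regression that
# "makes predictions" for a human-defined objective (an inventory plan).
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: weekly advertising spend vs. units sold.
ad_spend = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
units_sold = np.array([11.0, 20.5, 29.8, 41.2, 49.9])

# Fit the model and generate the "prediction" that would drive a stocking decision.
model = LinearRegression().fit(ad_spend, units_sold)
forecast = model.predict(np.array([[6.0]]))
print(f"Forecasted units at a spend of 6.0: {forecast[0]:.1f}")
```

Nothing in this snippet resembles a frontier foundation model, yet scripts of this kind sit comfortably within the EO’s literal definition of AI.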

A significant repercussion of the EO’s very broad reporting mandates for dual-use systems, and one directly relevant to the RFC’s interest in promoting openness, is that these might chill open-source AI development.[34] Firms dabbling in AI technologies—many of which might not consider their projects to be dual use—might keep their initiatives secret until they are significantly advanced. Faced with the financial burden of adhering to the EO’s reporting obligations, companies that lack a sufficiently robust revenue model to cover both development costs and legal compliance might be motivated to dodge regulatory scrutiny in the initial phases, consequently dampening the prospects for transparency.

It is hard to imagine how open-source AI projects could survive in such an environment. Open-source AI code libraries like TensorFlow[35] and PyTorch[36] foster remarkable innovation by allowing developers to create new applications that use cutting-edge models. How could a paradigmatic startup developer working out of a garage genuinely commit to open-source development if tools like these fall under the EO’s jurisdiction? Restricting access to the weights that models use—let alone avoiding open-source development entirely—may hinder independent researchers’ ability to advance the forefront of AI technology.
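
A minimal sketch may help to illustrate what is at stake. The example below is hypothetical (placeholder data, an arbitrary three-class task) and uses PyTorch simply because it is mentioned above; the point is that open libraries and openly distributed pretrained weights let a small developer adapt an existing model to a niche problem rather than training one from scratch:

```python
# Hypothetical fine-tuning sketch: adapt openly released pretrained weights
# to a small, domain-specific classification task.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Load a model with openly distributed pretrained weights.
model = resnet18(weights=ResNet18_Weights.DEFAULT)

# Freeze the shared backbone; only a new task-specific head will be trained.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)  # e.g., a three-class local task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on placeholder data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

If workflows like this were swept into dual-use reporting obligations, even commonplace fine-tuning could carry compliance costs that a garage developer cannot bear.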

Moreover, scientific endeavors typically benefit from the contributions of researchers worldwide, as collaborative efforts on a global scale are known to fast-track innovation. The pressure the EO applies to open-source development of AI tools could curtail international cooperation, thereby distancing American researchers from crucial insights and collaborations. For example, AI’s capacity to propel progress in numerous scientific areas is potentially vast—e.g., utilizing MRI images and deep learning for brain-tumor diagnoses[37] or employing machine learning to push the boundaries of materials science.[38] Such research does not benefit from stringent secrecy, but thrives on collaborative development. Enabling a broader community to contribute to and expand upon AI advancements supports this process.

Individuals respond to incentives. Just as well-intentioned seatbelt laws paradoxically led to an uptick in risky driving behaviors,[39] ill-considered obligations placed on open-source AI developers could unintentionally stifle the exchange of innovative concepts crucial to maintaining the United States’ leadership in AI innovation.

IV. Regulatory Models that Support Innovation While Managing Risks Effectively

In the rapidly evolving landscape of AI, it is paramount to establish governance and regulatory frameworks that both encourage innovation and ensure safety and ethical integrity. An effective regulatory model for AI should be adaptive, principles-based, and foster a collaborative environment among regulators, developers, researchers, and the broader community. A number of principles can help in developing this regime.

A. Low-Risk vs High-Risk AI

First, a clear distinction should be made between low-risk AI applications that enhance operational efficiency or consumer experience and high-risk applications that could have significant safety implications. Low-risk applications like search algorithms and chatbots should be governed by a set of baseline ethical guidelines and best practices that encourage innovation, while ensuring basic standards are met. On the other hand, high-risk applications—such as those used by law enforcement or the military—would require more stringent review processes, including impact assessments, ethical reviews, and ongoing monitoring to mitigate potentially adverse effects.

Contrast this with the recently enacted AI Act in the European Union, and its decision to create presumptions of risk for general-purpose AI (GPAI) systems, such as large language models (LLMs), that present what the EU terms “systemic risk.”[40] Article 3(65) of the AI Act defines systemic risk as “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”[41]

This definition bears similarities to the “Hand formula” in U.S. tort law, which balances the burden of precautions against the probability and severity of potential harm to determine negligence.[42] The AI Act’s notion of systemic risk, however, is applied more broadly to entire categories of AI systems based on their theoretical potential for widespread harm, rather than on a case-by-case basis.
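
For reference, the Hand formula is conventionally stated as a simple comparison between the cost of precautions and the expected harm (a standard textbook rendering, not language drawn from the AI Act itself):

```latex
% Hand formula: a defendant is negligent when the burden of adequate
% precautions is less than the expected harm those precautions would avoid.
\[
  B < P \times L
\]
% where B = burden (cost) of the precautions,
%       P = probability that the harm occurs absent the precautions, and
%       L = gravity (magnitude) of the resulting loss.
```

The formula is applied case by case to the specific precautions and probabilities at issue—precisely the kind of particularized inquiry that the AI Act’s category-wide designation does not require.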

The designation of LLMs as posing “systemic risk” is problematic for several reasons. It creates a presumption of risk merely based on a GPAI system’s scale of operations, without any consideration of the actual likelihood or severity of harm in specific use cases. This could lead to unwarranted regulatory intervention and unintended consequences that hinder the development and deployment of beneficial AI technologies. And this broad definition of systemic risk gives regulators significant leeway to intervene in how firms develop and release their AI products, potentially blocking access to cutting-edge tools for European citizens, even in the absence of tangible harms.

While it is important to address potential risks associated with AI systems, the AI Act’s approach risks stifling innovation and hindering the development of beneficial AI technologies within the EU.

B. Avoid Static Regulatory Approaches

AI regulators are charged with overseeing a dynamic and rapidly developing market, and should therefore avoid erecting a rigid framework that forces new innovations into ill-fitting categories. The “regulatory sandbox” may provide a better model to balance innovation with risk management. By allowing developers to test and refine AI technologies in a controlled environment under regulatory oversight, sandboxes can be used to help identify and address potential issues before wider deployment, all while facilitating dialogue between innovators and regulators. This approach not only accelerates the development of safe and ethical AI solutions, but also builds mutual understanding and trust. Where possible, NTIA should facilitate policy experimentation with regulatory sandboxes in the AI context.

Meta’s Open Loop program is an example of this kind of experimentation.[43] This program is a policy prototyping research project focused on evaluating the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0.[44] The goal is to assess whether the framework is understandable, applicable, and effective in assisting companies to identify and manage risks associated with generative AI. It also provides companies an opportunity to familiarize themselves with the NIST AI RMF and its application in risk-management processes for generative AI systems. Additionally, it aims to collect data on existing practices and offer feedback to NIST, potentially influencing future RMF updates.

1. Regulation as a discovery process

Another key principle is to ensure that regulatory mechanisms are adaptive. Some examples of adaptive mechanisms are iterative rulemaking and feedback loops that allow regulations to be updated continuously in response to new developments and insights. Such mechanisms enable policymakers to respond swiftly to technological breakthroughs, ensuring that regulations remain relevant and effective, without stifling innovation.

Geoffrey Manne & Gus Hurwitz have recently proposed a framework for “regulation as a discovery process” that could be adapted to AI.[45] They argue for a view of regulation not merely as a mechanism for enforcing rules, but as a process for discovering information that can inform and improve regulatory approaches over time. This perspective is particularly pertinent to AI, where the pace of innovation and the complexity of technologies often outstrip regulators’ understanding and ability to predict future developments. This framework:

in its simplest formulation, asks regulators to consider that they might be wrong. That they might be asking the wrong questions, collecting the wrong information, analyzing it the wrong way—or even that Congress has given them the wrong authority or misunderstood the problem that Congress has tasked them to address.[46]

That is to say, an adaptive approach to regulation requires epistemic humility, with the understanding that, particularly for complex, dynamic industries:

there is no amount of information collection or analysis that is guaranteed to be “enough.” As Coase said, the problem of social cost isn’t calculating what those costs are so that we can eliminate them, but ascertaining how much of those social costs society is willing to bear.[47]

In this sense, modern regulators’ core challenge is to develop processes that allow for iterative development of knowledge, which is always in short supply. This requires a shift in how an agency conceptualizes its mission, from one of writing regulations to one of assisting lawmakers to assemble, filter, and focus on the most relevant and pressing information needed to understand a regulatory subject’s changing dynamics.[48]

As Hurwitz & Manne note, existing efforts to position some agencies as information-gathering clearinghouses suffer from a number of shortcomings—most notably, that they tend to operate on an ad hoc basis, reporting to Congress in response to particular exigencies.[49] The key to developing a “discovery process” for AI regulation would instead require setting up ongoing mechanisms to gather and report on data, as well as directing the process toward “specifications for how information should be used, or what the regulator anticipated to find in the information, prior to its collection.”[50]

Embracing regulation as a discovery process means acknowledging the limits of our collective knowledge about AI’s potential risks and benefits. This underscores why regulators should prioritize generating and utilizing new information through regulatory experiments, iterative rulemaking, and feedback loops. A more adaptive regulatory framework could respond to new developments and insights in AI technologies, thereby ensuring that regulations remain relevant and effective, without stifling innovation.

Moreover, Hurwitz & Manne highlight the importance of considering regulation as an information-producing activity.[51] In AI regulation, this could involve setting up mechanisms that allow regulators, innovators, and the public to contribute to and benefit from a shared pool of knowledge about AI’s impacts. This could include public databases of AI incidents, standardized reporting of AI-system performance, or platforms for sharing best practices in AI safety and ethics.

Static regulatory approaches may fail to capture the evolving landscape of AI applications and their societal implications. Instead, a dynamic, information-centric regulatory strategy that embraces the market as a discovery process could better facilitate beneficial innovations, while identifying and mitigating harms.

V. Conclusion

As the NTIA navigates the complex landscape of AI regulation, it is imperative to adopt a nuanced, forward-looking approach that balances the need to foster innovation with the imperatives of ensuring public safety and ethical integrity. The rapid evolution of AI technologies necessitates a regulatory framework that is both adaptive and principles-based, eschewing static snapshots of the current state of the art in favor of flexible mechanisms that could accommodate the dynamic nature of this field.

Central to this approach is recognizing that the field of AI encompasses a diverse array of technologies, methodologies, and applications, each with its distinct characteristics, capabilities, and implications for society. A one-size-fits-all regulatory model would not only be ill-suited to the task at hand, but would also risk stifling innovation and hindering the United States’ ability to maintain its leadership in the global AI industry. NTIA should focus instead on developing tailored approaches that distinguish between low-risk and high-risk applications, ensuring that regulatory interventions are commensurate with the potential identifiable harms and benefits associated with specific AI use cases.

Moreover, the NTIA must resist the temptation to rely on overly prescriptive definitions of “openness” or to favor particular business models over others. The coexistence of open and closed approaches to AI development is essential to foster a vibrant, competitive ecosystem that drives technological progress and maximizes social benefits. By embracing a flexible regulatory framework that allows for experimentation and iteration, the NTIA can create an environment conducive to innovation while still ensuring that appropriate safeguards are in place to mitigate potential risks.

Ultimately, the success of the U.S. AI industry will depend on the ability of regulators, developers, researchers, and the broader community to collaborate in developing governance frameworks that are both effective and adaptable. By recognizing the importance of open development and diverse business models, the NTIA can play a crucial role in shaping the future of AI in ways that promote innovation, protect public interests, and solidify the United States’ position as a global leader in this transformative field.

[1] Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights, Docket No. 240216-0052, 89 FR 14059, National Telecommunications and Information Administration (Mar. 27, 2024) at 14063, question 8(a) [hereinafter “RFC”].

[2] See, e.g., Kristian Stout, Systemic Risk and Copyright in the EU AI Act, Truth on the Market (Mar. 19, 2024), https://truthonthemarket.com/2024/03/19/systemic-risk-and-copyright-in-the-eu-ai-act.

[3] Exec. Order No. 14110, 88 F.R. 75191 (2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence?_fsi=C0CdBzzA [hereinafter “EO”].

[4] See, e.g., EO at §§ 1, 2(c), 5.2(e)(ii), 8(c).

[5] Id. at § 3(k).

[6] Id. at § 3(k)(iii).

[7] Id. at § 4.6. As NTIA notes, the administration refers to “widely available model weights,” which is equivalent to “open foundation models” in this proceeding. RFC at 14060.

[8] For more on the “open” vs “closed” distinction and its poor fit as a regulatory lens, see, infra, at nn. 19-41 and accompanying text.

[9] Adaptive regulatory frameworks are discussed, infra, at nn. 42-53 and accompanying text.

[10] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[11] See Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[12] Cindy Gordon, Google Pauses Gemini AI Model After Latest Debacle, Forbes (Feb. 29, 2024), https://www.forbes.com/sites/cindygordon/2024/02/29/google-latest-debacle-has-paused-gemini-ai-model/?sh=3114d093536c.

[13] Id.

[14] Id.

[15] Breck Dumas, Google Loses $96B in Value on Gemini Fallout as CEO Does Damage Control, Yahoo Finance (Feb. 28, 2024), https://finance.yahoo.com/news/google-loses-96b-value-gemini-233110640.html.

[16] RFC at 14060.

[17] RFC at 14062, question 1.

[18] RFC at 14062, question 3(a).

[19] Department of Commerce, Competition in the Mobile Application Ecosystem (2023), https://www.ntia.gov/report/2023/competition-mobile-app-ecosystem (“While retaining appropriate latitude for legitimate privacy, security, and safety measures, Congress should enact laws and relevant agencies should consider measures (such as rulemaking) designed to open up distribution of lawful apps, by prohibiting… barriers to the direct downloading of applications.”).

[20] RFC at 14061 (“‘openness’ or ‘wide availability’ of model weights are also terms without clear definition or consensus. There are gradients of ‘openness,’ ranging from fully ‘closed’ to fully ‘open’”).

[21] See Jonathan M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861, 1927 (2011).

[22] Id. at 2.

[23] Id. at 3.

[24] Desktop Operating System Market Share Worldwide Feb 2023 – Feb 2024, StatCounter, https://gs.statcounter.com/os-market-share/desktop/worldwide (last visited Mar. 27, 2024).

[25]  Andrei Hagiu, Proprietary vs. Open Two-Sided Platforms and Social Efficiency (Harv. Bus. Sch. Strategy Unit, Working Paper No. 09-113, 2006).

[26] Joey Sneddon, More Developers Use Linux than Mac, Report Shows, Omg Linux (Dec. 28, 2022), https://www.omglinux.com/devs-prefer-linux-to-mac-stackoverflow-survey.

[27] See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persp. 93, 110 (1994) (“[T]he primary cost of standardization is loss of variety: consumers have fewer differentiated products to pick from, especially if standardization prevents the development of promising but unique and incompatible new systems”).

[28] See, e.g., Nokia, Threat Intelligence Report 2020 (2020), https://www.nokia.com/networks/portfolio/cyber-security/threat-intelligence-report-2020; Randal C. Picker, Security Competition and App Stores, Network Law Review (Aug. 23, 2021), https://www.networklawreview.org/picker-app-stores.

[29] Collective Constitutional AI: Aligning a Language Model with Public Input, Anthropic (Oct. 17, 2023), https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input.

[30] RFC at 14061.

[31] Encryption and the “Going Dark” Debate, Congressional Research Service (2017), https://crsreports.congress.gov/product/pdf/R/R44481.

[32] EO at § 3(k)(iii).

[33] EO at § 3(b).

[34] EO at § 4.2 (requiring companies developing dual-use foundation models to provide ongoing reports to the federal government on their activities, security measures, model weights, and red-team testing results).

[35] An End-to-End Platform for Machine Learning, TensorFlow, https://www.tensorflow.org (last visited Mar. 27, 2024).

[36] Learn the Basics, PyTorch, https://pytorch.org/tutorials/beginner/basics/intro.html (last visited Mar. 27, 2024).

[37] Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov, & Taeg Keun Whangbo, Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging, 15(16) Cancers (Basel) 4172 (2023), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453020.

[38] Keith T. Butler, et al., Machine Learning for Molecular and Materials Science, 559 Nature 547 (2018), available at https://www.nature.com/articles/s41586-018-0337-2.

[39] The Peltzman Effect, The Decision Lab, https://thedecisionlab.com/reference-guide/psychology/the-peltzman-effect (last visited Mar. 27, 2024).

[40] European Parliament, European Parliament Legislative Resolution of 13 March 2024 on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206, available at https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html [hereinafter “EU AI Act”].

[41] Id. at Art. 3(65).

[42] See Stephen G. Gilles, On Determining Negligence: Hand Formula Balancing, the Reasonable Person Standard, and the Jury, 54 Vanderbilt L. Rev. 813, 842-49 (2001).

[43] See Open Loop’s First Policy Prototyping Program in the United States, Meta, https://www.usprogram.openloop.org (last visited Mar. 27, 2024).

[44] Id.

[45] Justin (Gus) Hurwitz & Geoffrey A. Manne, Pigou’s Plumber: Regulation as a Discovery Process, SSRN (2024), available at https://laweconcenter.org/resources/pigous-plumber.

[46] Id. at 32.

[47] Id. at 33.

[48] See id. at 28-29.

[49] Id. at 37.

[50] Id. at 37-38.

[51] Id.


US v. Apple Lawsuit Has Big Implications for Competition and Innovation


The lawsuit filed yesterday by the U.S. Justice Department (DOJ) against Apple for monopolization of the U.S. smartphone market (joined by 15 states and the District of Columbia) has big implications for American competition and innovation.

At the heart of the complaint is the DOJ’s assertion that…

Read the full piece here.


Bill C-59 and the Use of Structural Merger Presumptions in Canada


We, the undersigned, are scholars from the International Center for Law & Economics (ICLE) with experience in the academy, enforcement agencies, and private practice in competition law. We write to address a key aspect of proposed amendments to Canadian competition law. Specifically, we focus on clauses in Bill C-59 pertinent to mergers and acquisitions and, in particular, the Bureau of Competition’s recommendation that the Bill should:

Amend Clauses 249-250 to enact rebuttable presumptions for mergers consistent with those set out in the U.S. Merger Guidelines.[1]

The Bureau’s recommendation seeks to codify in Canadian competition law the structural presumptions outlined in the 2023 U.S. Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) Merger Guidelines. On balance, however, adoption of that recommendation would impede, rather than promote, fair competition and the welfare of Canadian consumers.

The cornerstone of the proposed change lies in the introduction of rebuttable presumptions of illegality for mergers that exceed specified market-share or concentration thresholds. While this approach may seem intuitive, the economic literature and U.S. enforcement experience militate against its adoption in Canadian law.

The goal of enhancing—indeed, strengthening—Canadian competition law should not be conflated with the adoption of foreign regulatory guidelines. The most recent U.S. Merger Guidelines establish new structural thresholds, based primarily on the Herfindahl-Hirschman Index (HHI) and market share, to establish presumptions of anticompetitive effects and illegality. Those structural presumptions, adopted a few short months ago, are inconsistent with established economic literature and are untested in U.S. courts. Those U.S. guidelines should not be codified in Canadian law without robust deliberation to ensure alignment with Canadian legal principles, on the one hand, and with economic realities and evidence, on the other.

Three points are especially important. First, concentration measures are widely considered to be a poor proxy for the level of competition that prevails in a given market. Second, lower merger thresholds may lead to enforcement errors that discourage investment and entrepreneurial activity and allocate enforcement resources to the wrong cases. Finally, these risks are particularly acute when concentration thresholds are used not as useful indicators but, instead, as actual legal presumptions (albeit rebuttable ones). We discuss each of these points in more detail below.

What Concentration Measures Can and Cannot Tell Us About Competition

While the use of concentration measures and thresholds can provide a useful preliminary-screening mechanism to identify potentially problematic mergers, substantially lowering the thresholds to establish a presumption of illegality is inadvisable for several reasons.

First, too strong a reliance on concentration measures lacks economic foundation and is likely prone to frequent error. Economists have been studying the relationship between concentration and various potential indicia of anticompetitive effects—price, markup, profits, rate of return, etc.—for decades.[2] There are hundreds of empirical studies addressing this topic.[3]

The assumption that “too much” concentration is harmful assumes both that the structure of a market is what determines economic outcomes and that anyone could know what the “right” amount of concentration is. But as economists have understood since at least the 1970s (and despite an extremely vigorous, but futile, effort to show otherwise), market structure does not determine outcomes.[4]

This skepticism toward concentration measures as a guide for policy is well-supported and is held by scholars across the political spectrum. To take one prominent, recent example, professors Fiona Scott Morton (deputy assistant U.S. attorney general for economics in the DOJ Antitrust Division under President Barack Obama, now at Yale University); Martin Gaynor (former director of the FTC Bureau of Economics under President Obama, now serving as special advisor to Assistant U.S. Attorney General Jonathan Kanter, on leave from Carnegie Mellon University); and Steven Berry (an industrial-organization economist at Yale University) surveyed the industrial-organization literature and found that presumptions based on measures of concentration are unlikely to provide sound guidance for public policy:

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration.…

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates.[5]

As Chad Syverson recently summarized:

Perhaps the deepest conceptual problem with concentration as a measure of market power is that it is an outcome, not an immutable core determinant of how competitive an industry or market is… As a result, concentration is worse than just a noisy barometer of market power. Instead, we cannot even generally know which way the barometer is oriented.[6]

This does not mean that concentration measures have no use in merger screening. Rather, market concentration is often unrelated to antitrust-enforcement goals because it is driven by factors that are endogenous to each industry. Enforcers should not rely too heavily on structural presumptions based on concentration measures, as these may be poor indicators of the instances in which antitrust enforcement is most beneficial to competition and consumers.

At What Level Should Thresholds Be Set?

Second, if concentration measures are to be used in some fashion, at what level or levels should they be set?

The U.S. 2010 Horizontal Merger Guidelines were “based on updated HHI thresholds that more accurately reflect actual enforcement practice.”[7] These numbers were updated in 2023, but without clear justification. While the U.S. enforcement authorities cite several old cases (cases that implicated considerably higher levels of concentration than those in their 2023 guidelines), we agree with comments submitted in 2022 by now-FTC Bureau of Economics Director Aviv Nevo and colleagues, who argued against such a change. They wrote:

Our view is that this would not be the most productive route for the agencies to pursue to successfully prevent harmful mergers, and could backfire by putting even further emphasis on market definition and structural presumptions.

If the agencies were to substantially change the presumption thresholds, they would also need to persuade courts that the new thresholds were at the right level. Is the evidence there to do so? The existing body of research on this question is, today, thin and mostly based on individual case studies in a handful of industries. Our reading of the literature is that it is not clear and persuasive enough, at this point in time, to support a substantially different threshold that will be applied across the board to all industries and market conditions. (emphasis added) [8]

Lower merger thresholds create several risks. One is that such thresholds will lead to excessive “false positives”; that is, too many presumptions against mergers that are likely to be procompetitive or benign. This is particularly likely to occur if enforcers make it harder for parties to rebut the presumptions, e.g., by requiring stronger evidence the further the parties are above the (now-lowered) threshold. Raising the barriers to establishing efficiencies and other countervailing factors makes it more likely that procompetitive mergers will be blocked. This not only risks depriving consumers of lower prices and greater innovation in specific cases, but also chills beneficial merger-and-acquisition activity more broadly. The prospect of an overly stringent enforcement regime discourages investment and entrepreneurial activity. It also allocates scarce enforcement resources to the wrong cases.
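To make the practical stakes of these threshold choices concrete, consider a simple worked illustration; the market shares below are purely hypothetical, and the threshold figures reflect our reading of those stated in the 2010 and 2023 guidelines, respectively. The Herfindahl-Hirschman Index is the sum of the squares of all firms’ market shares, expressed in percentage points:

\[ HHI = \sum_i s_i^2 \]

In a market with eight firms holding shares of 30, 20, 15, 10, 8, 7, 5, and 5 percent:

\[ HHI_{pre} = 30^2 + 20^2 + 15^2 + 10^2 + 8^2 + 7^2 + 5^2 + 5^2 = 1{,}788 \]

A merger of the firms with 10-percent and 8-percent shares raises the index by twice the product of their shares:

\[ \Delta HHI = 2 \times 10 \times 8 = 160, \qquad HHI_{post} = 1{,}788 + 160 = 1{,}948 \]

Under the 2010 guidelines, this market would be only moderately concentrated (HHI between 1,500 and 2,500), and a 160-point increase would at most “potentially raise significant competitive concerns”; the presumption applied only to highly concentrated markets (HHI above 2,500) with an increase of more than 200 points. Under the 2023 guidelines’ stated trigger (post-merger HHI above 1,800 with an increase of more than 100 points), the same hypothetical transaction would face a presumption of illegality. The arithmetic illustrates how materially the lowered thresholds expand the set of transactions swept into the structural presumption.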

Changing the Character of Structural Presumptions

Finally, the risks described above are particularly acute given a change in the character of the structural presumptions themselves. The 2023 Merger Guidelines—and only the 2023 Merger Guidelines—state that certain structural features of mergers will raise a “presumption of illegality.”[9]

U.S. merger guidelines published in 1982,[10] 1992 (revised in 1997),[11] and 2010[12] all describe structural thresholds seen by the agencies as pertinent to merger screening. None of them mention a “presumption of illegality.” In fact, as the U.S. agencies put it in the 2010 Horizontal Merger Guidelines:

The purpose of these thresholds is not to provide a rigid screen to separate competitively benign mergers from anticompetitive ones, although high levels of concentration do raise concerns. Rather, they provide one way to identify some mergers unlikely to raise competitive concerns and some others for which it is particularly important to examine whether other competitive factors confirm, reinforce, or counteract the potentially harmful effects of increased concentration.[13]

The most worrisome category of mergers identified in the 1992 U.S. merger guidelines was said to be presumed “likely to create or enhance market power or facilitate its exercise.” The 1982 guidelines did not describe “presumptions” so much as note that certain mergers may be matters of “significant competitive concern” and “likely” to be subject to challenge.

Hence, earlier editions of the U.S. merger guidelines described the ways that structural features of mergers might inform, but not determine, internal agency analysis of those mergers. That was useful information for industry, the bar, and the courts. Equally useful were descriptions of mergers that were “unlikely to have adverse competitive effects and ordinarily require no further analysis,”[14] as well as intermediate types of mergers that “potentially raise significant competitive concerns and often warrant scrutiny.”[15]

Similarly, the 1992 U.S. merger guidelines identified a tier of mergers deemed “unlikely to have adverse competitive effects and ordinarily require no further analysis,” as well as intermediate categories of mergers either unlikely to have anticompetitive effects or, in the alternative, potentially raising significant competitive concerns, depending on various factors described elsewhere in the guidelines.[16]

By way of contrast, the new U.S. guidelines include no description of any mergers that are unlikely to have adverse competitive effects. And while the new merger guidelines do stipulate that the “presumption of illegality can be rebutted or disproved,” they offer very limited means of rebuttal.

This is at odds with prior U.S. agency practice and established U.S. law. Until very recently, U.S. agency staff sought to understand proposed mergers under the totality of their circumstances, much as U.S. courts came to do. Structural features of mergers (among many other factors) might raise concerns of greater or lesser degree. These might lead to additional questions in some instances; more substantial inquiries under a “second request” in a minority of instances; or, eventually, a complaint against a very small minority of proposed mergers. In the alternative, they might help staff avoid wasting scarce resources on mergers “unlikely to have anticompetitive effects.”

Prior to a hearing or a trial on the merits, there might be strong, weak, or no appreciable assessments of likely liability, but there was no prima facie determination of illegality.

And while U.S. merger trials did tend to follow a burden-shifting framework for plaintiff and defendant production, they too looked to the “totality of the circumstances”[17] and a transaction’s “probable effect on future competition”[18] to determine liability, rather than relying on strong structural presumptions. As then-U.S. Circuit Judge Clarence Thomas observed in the Baker-Hughes case:

General Dynamics began a line of decisions differing markedly in emphasis from the Court’s antitrust cases of the 1960s. Instead of accepting a firm’s market share as virtually conclusive proof of its market power, the Court carefully analyzed defendants’ rebuttal evidence.[19]

Central to the holding in Baker Hughes—and contra the 2023 U.S. merger guidelines—was that, because the government’s prima facie burden of production was low, the defendant’s rebuttal burden should not be unduly onerous.[20] As the U.S. Supreme Court had put it, defendants would not be required to clearly disprove anticompetitive effects, but rather, simply to “show that the concentration ratios, which can be unreliable indicators of actual market behavior . . . did not accurately depict the economic characteristics of the [relevant] market.”[21]

Doing so would not end the matter. Rather, “the burden of producing additional evidence of anticompetitive effects shifts to the government, and merges with the ultimate burden of persuasion, which remains with the government at all times.”[22]

As the U.S. Supreme Court decision in Marine Bancorporation underscores, even by 1974, it was well understood that concentration ratios “can be unreliable indicators” of market behavior and competitive effects.

As explained above, research and enforcement experience over the ensuing decades have undermined reliance on structural presumptions even further. As a consequence, the structure-conduct-performance paradigm has been largely abandoned, because it is widely recognized that market structure is not outcome-determinative.

That is not to say that high concentration cannot have any signaling value in preliminary agency screening of merger matters. But concentration metrics that have proven to be unreliable indicators of firm behavior and competitive effects should not be enshrined in Canadian statutory law. That would be a step back, not a step forward, for merger enforcement.

 

[1] Matthew Boswell, Letter to the Chair and Members of the House of Commons Standing Committee on Finance, Competition Bureau Canada (Mar. 1, 2024), available at https://sencanada.ca/Content/Sen/Committee/441/NFFN/briefs/SM-C-59_CompetitionBureauofCND_e.pdf.

[2] For a few examples from a very large body of literature, see, e.g., Steven Berry, Martin Gaynor, & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33 J. Econ. Perspectives 44 (2019); Richard Schmalensee, Inter-Industry Studies of Structure and Performance, in 2 Handbook of Industrial Organization 951-1009 (Richard Schmalensee & Robert Willig, eds., 1989); William N. Evans, Luke M. Froeb, & Gregory J. Werden, Endogeneity in the Concentration-Price Relationship: Causes, Consequences, and Cures, 41 J. Indus. Econ. 431 (1993); Steven Berry, Market Structure and Competition, Redux, FTC Micro Conference (Nov. 2017), available at https://www.ftc.gov/system/files/documents/public_events/1208143/22_-_steven_berry_keynote.pdf; Nathan Miller, et al., On the Misuse of Regressions of Price on the HHI in Merger Review, 10 J. Antitrust Enforcement 248 (2022).

[3] Id.

[4] See Harold Demsetz, Industry Structure, Market Rivalry, and Public Policy, 16 J. L. & Econ. 1 (1973).

[5] Berry, Gaynor, & Scott Morton, supra note 2.

[6] Chad Syverson, Macroeconomics and Market Power: Context, Implications, and Open Questions, 33 J. Econ. Persp. 23, 26 (2019).

[7] Joseph Farrell & Carl Shapiro, The 2010 Horizontal Merger Guidelines After 10 Years, 58 Rev. Ind. Org. 58 (2021), available at https://link.springer.com/article/10.1007/s11151-020-09807-6.

[8] John Asker et al., Comments on the January 2022 DOJ and FTC RFI on Merger Enforcement (Apr. 20, 2022), available at https://www.regulations.gov/comment/FTC-2022-0003-1847, at 15-16.

[9] U.S. Dep’t Justice & Fed. Trade Comm’n, Merger Guidelines (Guideline One) (Dec. 18, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[10] U.S. Dep’t Justice, 1982 Merger Guidelines (1982), https://www.justice.gov/archives/atr/1982-merger-guidelines.

[11] U.S. Dep’t Justice & Fed. Trade Comm’n, 1992 Merger Guidelines (1992), https://www.justice.gov/archives/atr/1992-merger-guidelines; U.S. Dep’t Justice & Fed. Trade Comm’n, 1997 Merger Guidelines (1997), https://www.justice.gov/archives/atr/1997-merger-guidelines.

[12] U.S. Dep’t Justice & Fed. Trade Comm’n, Horizontal Merger Guidelines (Aug. 19, 2010), https://www.justice.gov/atr/horizontal-merger-guidelines-08192010. The U.S. antitrust agencies also issued Vertical Merger Guidelines in 2020; although these were formally withdrawn in 2021 by the FTC (but not the DOJ), they too are supplanted by the 2023 Merger Guidelines. See U.S. Dep’t Justice & Fed. Trade Comm’n, Vertical Merger Guidelines (Jun. 30, 2020), available at https://www.ftc.gov/system/files/documents/public_statements/1580003/vertical_merger_guidelines_6-30-20.pdf.

[13] 2010 Horizontal Merger Guidelines.

[14] Id.

[15] Id.

[16] 1992 Merger Guidelines.

[17]  United States v. Baker-Hughes Inc., 908 F.2d 981, 984 (D.C. Cir. 1990).

[18] Id. at 991.

[19] Id. at 990 (citing Hospital Corp. of Am. v. FTC, 807 F.2d 1381, 1386 (7th Cir. 1986), cert. denied, 481 U.S. 1038, 107 S.Ct. 1975, 95 L.Ed.2d 815 (1987)).

[20]  Id. at 987, 992.

[21]  United States v. Marine Bancorporation Inc., 418 U.S. 602, 631 (1974) (internal citations omitted).

[22]  Baker-Hughes, 908 F.2d at 983.

Antitrust & Consumer Protection

A Competition Perspective on Physician Non-Compete Agreements

Scholarship Abstract Physician non-compete agreements may have significant competitive implications, and effects on both providers and patients, but they are treated variously under the law on . . .

Abstract

Physician non-compete agreements may have significant competitive implications, and effects on both providers and patients, but they are treated variously under the law on a state-by-state basis. A review of the relevant law and the economic literature cannot identify with confidence, or with any generality, the net effects of such agreements on either physicians or health care delivery. In addition to identifying future research projects to inform policy, it is argued that the antitrust “rule of reason” provides a useful and established framework with which to evaluate such agreements in specific health care markets and, potentially, to address those agreements most likely to do significant damage to health care competition and consumers.

Antitrust & Consumer Protection

A Competition Law & Economics Analysis of Sherlocking

ICLE White Paper Abstract Sherlocking refers to an online platform’s use of nonpublic third-party business data to improve its own business decisions—for instance, by mimicking the successful products . . .

Abstract

Sherlocking refers to an online platform’s use of nonpublic third-party business data to improve its own business decisions—for instance, by mimicking the successful products and services of edge providers. Such a strategy emerges as a form of self-preferencing and, as with other theories about preferential access to data, it has been targeted by some policymakers and competition authorities due to the perceived competitive risks originating from the dual role played by hybrid platforms (acting as both referees governing their platforms, and players competing with the business they host). This paper investigates the competitive implications of sherlocking, maintaining that an outright ban is unjustified. First, the paper shows that, by aiming to ensure platform neutrality, such a prohibition would cover scenarios (i.e., the use of nonpublic third-party business data to calibrate business decisions in general, rather than to adopt a pure copycat strategy) that should be analyzed separately. Indeed, in these scenarios, sherlocking may affect different forms of competition (inter-platform v. intra-platform competition). Second, the paper argues that, in either case, the practice’s anticompetitive effects are questionable and that the ban is fundamentally driven by a bias against hybrid and vertically integrated players.

I. Introduction

The dual role some large digital platforms play (as both intermediary and trader) has gained prominence among the economic arguments used to justify the recent wave of regulation hitting digital markets around the world. Many policymakers have expressed concern about potential conflicts of interest among companies that have adopted this hybrid model and that also control important gateways for business users. In other words, the argument goes, some online firms act not only as regulators who set their platforms’ rules and as referees who enforce those rules, but also as market players who compete with their business users. This raises the fear that large platforms could reserve preferential treatment for their own services and products, to the detriment of downstream rivals and consumers. That, in turn, has led to calls for platform-neutrality rules.

Toward this aim, essentially all of the legislative initiatives undertaken around the world in recent years to enhance competition in digital markets have included anti-discrimination provisions that target various forms of self-preferencing. Self-preferencing, it has been said, serves as the symbol of the current competition-policy zeitgeist in digital markets.[1] Indeed, this conduct is seen as enabling leveraging strategies that would give gatekeepers the chance to entrench their power in core markets and extend it into associated markets.[2]

Against this background, so-called “sherlocking” has emerged as one form of self-preferencing. The term was coined roughly 20 years ago, after Apple updated its own app Sherlock (a search tool on its desktop-operating system) to mimic a third-party application called Watson, which was created by Karelia Software to complement the Apple tool’s earlier version.[3] According to critics of self-preferencing generally and sherlocking in particular, biased intermediation and related conflicts of interest allow gatekeepers to exploit their preferential access to business users’ data to compete against them by replicating successful products and services. The implied assumption is that this strategy is relevant to competition policy even where no intellectual-property rights (IPRs) are infringed and no slavish imitation sanctionable under unfair-competition laws is detected. Indeed, where such rights were infringed or such imitation detected, sherlocking would already be prevented by the enforcement of those rules.

To tackle perceived misuse of gatekeepers’ market position, the European Union’s Digital Markets Act (DMA) introduced a ban on sherlocking.[4] Similar concerns have also motivated requests for intervention in the United States,[5] Australia,[6] and Japan.[7] In seeking to address at least two different theories of gatekeepers’ alleged conflicts of interest, these proposed bans on exploiting access to business users’ data are not necessarily limited to the risk of product imitation, but may include any business decision whatsoever that a platform may make while relying on that data.

In parallel with the regulatory initiatives, the conduct at-issue has also been investigated in some antitrust proceedings, which appear to seek the very same twofold goal. In particular, in November 2020, the European Commission sent a statement of objections to Amazon that argued the company had infringed antitrust rules through the systematic use of nonpublic business data from independent retailers who sell on the Amazon online marketplace in order to benefit Amazon’s own retail business, which directly competes with those retailers.[8] A similar investigation was opened by the UK Competition and Markets Authority (CMA) in July 2022.[9]

Further, as part of the investigation opened into Apple’s App Store rule requiring developers to use Apple’s in-app purchase mechanism to distribute paid apps and/or paid digital content, the European Commission also showed interest in evaluating whether Apple’s conduct might disintermediate competing developers from relevant customer data, while Apple obtained valuable data about those activities and its competitors’ offers.[10] The European Commission and UK CMA likewise launched an investigation into Facebook Marketplace, with accusations that Meta used data gathered from advertisers in order to compete with them in markets where the company is active, such as classified ads.[11]

There are two primary reasons these antitrust proceedings are relevant. First, many of the prohibitions envisaged in regulatory interventions (e.g., DMA) clearly took inspiration from the antitrust investigations, thus making it important to explore the insights that competition authorities may provide to support an outright ban. Second, given that regulatory intervention will be implemented alongside competition rules (especially in Europe) rather than displace them,[12] sherlocking can be assessed at both the EU and national level against dominant players that are not eligible for “gatekeeper” designation under the DMA. For those non-gatekeeper firms, the practice may still be investigated by antitrust authorities and assessed before courts, aside from the DMA’s per se prohibition. And, of course, investigations and assessments of sherlocking could also be made even in those jurisdictions where there isn’t an outright ban.

The former is well-illustrated by the German legislature’s decision to empower its national competition authority with a new tool to tackle abusive practices that are similar and functionally equivalent to those addressed by the DMA.[13] Indeed, as of January 2021, the Bundeskartellamt may identify positions of particular market relevance (undertakings of “paramount significance for competition across markets”) and assess their possible anticompetitive effects on competition in those areas of digital ecosystems in which individual companies may have a gatekeeper function. Both the initiative’s aims and its list of practices are similar to the DMA’s. They are distinguished primarily by the fact that the German list is exhaustive, and the practices at-issue are not prohibited per se, but are instead subject to a reversal of the burden of proof, allowing firms to provide objective justifications. Most relevant for this analysis, one provision within the German list prohibits designated undertakings from “demanding terms and conditions that permit … processing data relevant for competition received from other undertakings for purposes other than those necessary for the provision of its own services to these undertakings without giving these undertakings sufficient choice as to whether, how and for what purpose such data are processed.”[14]

Unfortunately, none of the above-mentioned EU antitrust proceedings have concluded with a final decision that addresses the merits of sherlocking. This precludes evaluating whether the practice would have survived before the courts. Regarding the Apple investigation, the European Commission dropped the case over App Store rules and issued a new statement of objections that no longer mentions sherlocking.[15] Further, the European Commission and the UK CMA accepted the commitments offered by Amazon to close those investigations.[16] The CMA likewise accepted the commitments offered by Meta.[17]

Those outcomes can be explained by the DMA’s recent entry into force. Indeed, because of the need to comply with the new regulation, players designated as gatekeepers likely have lost interest in challenging antitrust investigations that target the very same conduct prohibited by the DMA.[18] After all, given that the DMA does not allow any efficiency defense against the listed prohibitions, even a successful appeal against an antitrust decision would be a pyrrhic victory. From the opposite perspective, the same applies to the European Commission, which may decide to save time, costs, and risks by dropping an ongoing case against a company designated as a gatekeeper under the DMA, knowing that the conduct under investigation will be prohibited in any case.

Nonetheless, despite the lack of any final decision on sherlocking, these antitrust assessments remain relevant. As already mentioned, the DMA does not displace competition law and, in any case, dominant platforms not designated as gatekeepers under the DMA still may face antitrust investigations over sherlocking. This applies even more for jurisdictions, such as the United States, that are evaluating DMA-like legislative initiatives (e.g., the American Innovation and Choice Online Act, or “AICOA”).

Against this background, drawing on recent EU cases, this paper questions the alleged anticompetitive implications of sherlocking, as well as claims that the practice fails to comply with existing antitrust rules.

First, the paper illustrates that prohibitions on the use of nonpublic third-party business data would cover two different theories that should be analyzed separately. Whereas a broader case involves all the business decisions adopted by a dominant platform because of such preferential access (e.g., the launch of new products or services, the development or cessation of existing products or services, the calibration of pricing and management systems), a more specific case deals solely with the adoption of a copycat strategy. By conflating these theories in support of a blanket ban that condemns any use of nonpublic third-party business data, EU antitrust authorities are fundamentally motivated by the same policy goal pursued by the DMA—i.e., to impose a neutrality regime on large online platforms. The competitive implications differ significantly, however, as adopting copycat strategies may only affect intra-platform competition, while using said data to improve other business decisions could also affect inter-platform competition.

Second, the paper shows that, in both of these scenarios, the welfare effects of sherlocking are unclear. Notably, exploiting certain data to better understand the market could help a platform to develop new products and services, to improve existing products and services, or more generally to be more competitive with respect to both business users and other platforms. As such outcomes would benefit consumers in terms of price and quality, any competitive advantage achieved by the hybrid platform could be considered unlawful only if it is not achieved on the merits. In a similar vein, if sherlocking is used by a hybrid platform to deliver replicas of its business users’ products and services, that would likely provide short-term procompetitive effects benefitting consumers with more choice and lower prices. In this case, the only competitive harm that would justify an antitrust intervention resides in (uncertain) negative long-term effects on innovation.

As a result, an outright ban on sherlocking, such as the one enshrined in the DMA, is economically unsound, since it would clearly harm consumers.

The paper is structured as follows. Section II describes the recent antitrust investigations of sherlocking, illustrating the various scenarios that might include the use of third-party business data. Section III investigates whether sherlocking may be considered outside the scope of competition on the merits for bringing competitive advantages to platforms solely because of their hybrid business model. Section IV analyzes sherlocking as a copycat strategy by investigating the ambiguous welfare effects of copying in digital markets and providing an antitrust assessment of the practice at issue. Section V concludes.

II. Antitrust Proceedings on Sherlocking: Platform Neutrality and Copycat Competition

Policymakers’ interest in sherlocking is part of a larger debate over potentially unfair strategies that large online platforms may deploy because of their dual role as an unavoidable trading partner for business users and a rival in complementary markets.

In this scenario, as summarized in Table 1, the DMA outlaws sherlocking, establishing that to “prevent gatekeepers from unfairly benefitting from their dual role,”[19] they are restrained from using, in competition with business users, “any data that is not publicly available that is generated or provided by those business users in the context of their use of the relevant core platform services or of the services provided together with, or in support of, the relevant core platform services, including data generated or provided by the customers of those business users.”[20] Recital 46 further clarifies that the “obligation should apply to the gatekeeper as a whole, including but not limited to its business unit that competes with the business users of a core platform service.”

A similar provision was included in the American Innovation and Choice Online Act (AICOA), which was considered, but not ultimately adopted, in the 117th U.S. Congress. AICOA, however, would have limited the scope of the ban to the offer of products or services that would compete with those offered by business users.[21] Concerns about copycat strategies were also reported in the U.S. House of Representatives’ investigation of the state of competition in digital markets as supporting the request for structural-separation remedies and line-of-business restrictions to eliminate conflicts of interest where a dominant intermediary enters markets that place it in competition with dependent businesses.[22] Interestingly, however, in the recent complaint filed by the U.S. Federal Trade Commission (FTC) and 17 state attorneys general against Amazon, which accuses the company of having deployed an interconnected strategy to block off every major avenue of competition (including price, product selection, quality, and innovation), there is no mention of sherlocking among the numerous unfair practices under investigation.[23]

Evaluating regulatory-reform proposals for digital markets, the Australian Competition and Consumer Commission (ACCC) also highlighted the risk of sherlocking, arguing that it could have an adverse effect on competition, notably on rivals’ ability to compete, when digital platforms exercise their strong market position to utilize nonpublic data to free ride on the innovation efforts of their rivals.[24] Therefore, the ACCC suggested adopting service-specific codes to address self-preferencing by, for instance, imposing data-separation requirements to restrain dominant app-store providers from using commercially sensitive data collected from the app-review process to develop their own apps.[25]

Finally, on a comparative note, it is also useful to mention the proposals advanced by the Japanese Fair Trade Commission (JFTC) in its recent market-study report on mobile ecosystems.[26] In order to ensure equal footing among competitors, the JFTC specified that its suggestion to prevent Google and Apple from using nonpublic data generated by other developers’ apps serves two purposes. Such a ban would concern not only the use of such data to develop competing apps, products, and services, but also its use to develop the platforms’ own apps, products, and services.

TABLE 1: Legislative Initiatives and Proposals to Ban Sherlocking

As noted above, sherlocking recently emerged as an alleged antitrust offense in three investigations launched by the European Commission and the UK CMA.

In the first case, Amazon’s alleged reliance on marketplace sellers’ nonpublic business data has been claimed to distort fair competition on its platform and prevent effective competition. In its preliminary findings, the Commission argued that Amazon takes advantage of its hybrid business model, leveraging its access to nonpublic third-party sellers’ data (e.g., the number of ordered and shipped units of products; sellers’ revenues on the marketplace; the number of visits to sellers’ offers; data relating to shipping, to sellers’ past performance, and to other consumer claims on products, including the activated guarantees) to adjust its retail offers and strategic business decisions to the detriment of third-party sellers, which are direct competitors on the marketplace.[27] In particular, the Commission was concerned that Amazon uses such data for its decision to start and end sales of a product, for its pricing system, for its inventory-planning and management system, and to identify third-party sellers that Amazon’s vendor-recruitment teams should approach to invite them to become direct suppliers to Amazon Retail. To address the data-use concern, Amazon committed not to use nonpublic data relating to, or derived from, independent sellers’ activities on its marketplace for its retail business and not to use such data for the purposes of selling branded goods, as well as its private-label products.[28]

A parallel investigation ended with similar commitments in the UK.[29] According to the UK CMA, Amazon’s access to and use of nonpublic seller data could result in a competitive advantage for Amazon Retail arising from its operation of the marketplace, rather than from competition on the merits, and may lead to relevant adverse effects on competition. Notably, it was alleged this could result in a reduction in the scale and competitiveness of third-party sellers on the Amazon Marketplace; a reduction in the number and range of product offers from third-party sellers on the Amazon Marketplace; and/or less choice for consumers, due to them being offered lower quality goods and/or paying higher prices than would otherwise be the case.

It is also worth mentioning that, by determining that Amazon is an undertaking of paramount significance for competition across markets, the Bundeskartellamt emphasized the competitive advantage deriving from Amazon’s access to nonpublic data, such as Glance Views, sales figures, sale quantities, cost components of products, and reorder status.[30] Among other things, with particular regard to Amazon’s hybrid role, the Bundeskartellamt noted that the preferential access to competitively sensitive data “opens up the possibility for Amazon to optimize its own-brand assortment.”[31]

A second investigation involved Apple and its App Store rule.[32] According to the European Commission, the mandatory use of Apple’s own proprietary in-app purchase system (IAP) would, among other things, grant Apple full control over the relationship its competitors have with customers, thus disintermediating those competitors from customer data and allowing Apple to obtain valuable data about the activities and offers of its competitors.

Finally, Meta faced antitrust proceedings in both the EU and the UK.[33] The focus was on Facebook Marketplace—i.e., an online classified-ads service that allows users to advertise goods for sale. According to the European Commission and the CMA, Meta unilaterally imposes unfair trading conditions on competing online-classified ads services that advertise on Facebook or Instagram. These terms and conditions, which authorize Meta to use ads-related data derived from competitors for the benefit of Facebook Marketplace, are considered unjustified, as they impose an unnecessary burden on competitors and only benefit Facebook Marketplace. The suspicion is that Meta has used advertising data from Facebook Marketplace competitors for the strategic planning, product development, and launch of Facebook Marketplace, as well as for Marketplace’s operation and improvement.

Overall, these investigations share many features. The concerns about third-party business-data use, as well as about other forms of self-preferencing, revolve around the competitive advantages that accrue to a dominant platform because of its dual role. Such advantages are considered unfair because they are not the result of a player’s merits, but derive purely and simply from its role as an important gateway to reach end users. Moreover, this access to valuable business data is not reciprocal. The feared risk is the marginalization of business users competing with gatekeepers on the gatekeepers’ platforms and, hence, the alleged harm to competition is the foreclosure of rivals in complementary markets (horizontal foreclosure).

The focus of these investigations was well-illustrated by the European Commission’s decision on Amazon’s practice.[34] The Commission’s concern was about the “data delta” that Amazon may exploit, namely the additional data related to third-party sellers’ listings and transactions that are not available to, and cannot be replicated by, the third-party sellers themselves, but are available to and used by Amazon Retail for its own retail operations.[35] Contrary to Amazon Retail—which, according to Commission’s allegations, would have full access to and would use such individual, real-time data of all its third-party sellers to calibrate its own retail decisions—sellers would have access only to their own individual listings and sales data. As a result, the Commission came to the (preliminary) conclusion that real-time access to and use of such volume, variety, and granularity of non-publicly available data from its retail competitors generates a significant competitive advantage for Amazon Retail in each of the different decisional processes that drive its retail operations.[36]

On a closer look, however, while antitrust authorities seem to target the use of nonpublic third-party business data as a single theory of harm, their allegations cover two different scenarios along the lines of what has already been examined with reference to the international legislative initiatives and proposals. Indeed, the Facebook Marketplace case does not involve an allegation of copying, as Meta is accused of gathering data from its business users to launch and improve its ads service, instead of reselling goods and services.

FIGURE 1: Sherlocking in Digital Markets

As illustrated above in Figure 1, the claim in one scenario is that preferential data use helps dominant players calibrate their business decisions in general, while the other scenario involves the use of such data for a pure copycat strategy that replicates an entire product or service, or some of its specific features.

In both scenarios the aim of the investigations is to ensure platform neutrality. Accordingly, as shown by the accepted commitments, the envisaged solution for antitrust authorities is to impose data-separation requirements to restrain dominant platforms from using third-party commercially sensitive data. Putting aside that these investigations concluded with commitments from the firms, however, their chances of success before a court differ significantly depending on whether they challenge a product-imitation strategy, or any business decision adopted because of the “data delta.”

A. Sherlocking and Unconventional Theories of Harm for Digital Markets

Before analyzing how existing competition-law rules could be applied to the various scenarios involving the use of third-party business data, it is worth providing a brief overview of the framework in which the assessment of sherlocking is conducted. As competition in the digital economy is increasingly a competition among ecosystems,[37] a lively debate has emerged over the capacity of traditional antitrust analysis to adequately capture the peculiar features of digital markets. Indeed, the combination of strong economies of scale and scope; indirect network effects; data advantages and synergies across markets; and portfolio effects that facilitate ecosystem development all contribute to making digital markets highly concentrated, prone to tipping, and not easily contestable.[38] As a consequence, it has been suggested that addressing these distinctive features of digital markets requires an overhaul of the antitrust regime.

These discussions turn on whether the antitrust toolkit and its theories of harm can illustrate whether and how a particular practice, agreement, or merger is anticompetitive. Notably, at issue is whether traditional antitrust theories of harm are fit for purpose or whether novel theories of harm should be developed in response to the emerging digital ecosystems. The latter requires looking at the competitive impact of expanding, protecting, or strengthening an ecosystem’s position, and particularly whether such expansion serves to exploit a network of capabilities and to control access to key inputs and components.[39]

A significant portion of recent discussions around developing novel theories of harm to better address the characteristics of digital-business models and markets has been devoted to the topic of merger control—in part a result of the impressive number of acquisitions observed in recent years.[40] In particular, the focus has been on analyzing conglomerate mergers that involve acquiring a complementary or unrelated asset, which have traditionally been assumed to raise less-significant competition concerns.

In this regard, an ecosystem-based theory seems to have guided the Bundeskartellamt in its assessment of Meta’s acquisition of Kustomer[41] and the CMA in Microsoft/Activision.[42] A more recent example is the European Commission’s decision to prohibit the proposed Booking/eTraveli merger, where the Commission explicitly noted that the transaction would have allowed Booking to expand its travel-services ecosystem.[43] The Commission’s concerns related primarily to the so-called “envelopment” strategy, in which a prominent platform within a specific market broadens its range of services into other markets where there is a significant overlap of customer groups already served by the platform.[44]

Against this background, putative self-preferencing harms represent one of the European Commission’s primary (albeit contentious)[45] attempts to develop new theories of harm built on conglomerate platforms’ ability to bundle services or use data from one market segment to inform product development in another.[46] Originally formulated in the Google Shopping decision,[47] the theory of harm of (leveraging through) self-preferencing has subsequently inspired the DMA, which targets different forms of preferential treatment, including sherlocking.

In particular, it is asserted that platforms may use self-preferencing to pursue a leveraging strategy with a twofold anticompetitive effect—that is, excluding or impeding rivals from competing with the platform (defensive leveraging) and extending the platform’s market power into associated markets (offensive leveraging). These goals can be pursued because of the unique role that some large digital platforms play. That is, they not only enjoy strategic market status by controlling ecosystems of integrated complementary products and services, which are crucial gateways for business users to reach end users, but they also perform a dual role as both a critical intermediary and a player active in complementors’ markets. Therefore, conflicts of interest may provide incentives for large vertically integrated platforms to favor their own products and services over those of their competitors.[48]

The Google Shopping theory of harm, while not yet validated by the Court of Justice of the European Union (CJEU),[49] has also found its way into merger analysis, as demonstrated by the European Commission’s recent assessment of iRobot/Amazon.[50] In its statement of objections, the Commission argued that the proposed acquisition of iRobot may give Amazon the ability and incentive to foreclose iRobot’s rivals by engaging in several foreclosing strategies to prevent them from selling robot vacuum cleaners (RVCs) on Amazon’s online marketplace and/or by degrading such rivals’ access to that marketplace. In particular, the Commission found that Amazon could deploy such self-preferencing strategies as delisting rival RVCs; reducing rival RVCs’ visibility in both organic and paid results displayed in Amazon’s marketplace; limiting access to certain widgets or commercially attractive labels; and/or raising the costs of iRobot’s rivals to advertise and sell their RVCs on Amazon’s marketplace.[51]

Sherlocking belongs to this framework of analysis and can be considered a form of self-preferencing, specifically because of the lack of reciprocity in accessing sensitive data.[52] Indeed, while gatekeeper platforms have access to relevant nonpublic third-party business data as a result of their role as unavoidable trading partners, they leverage this information exclusively, without sharing it with third-party sellers, thus further exacerbating an already uneven playing field.[53]

III. Sherlocking for Competitive Advantage: Hybrid Business Model, Neutrality Regimes, and Competition on the Merits

Insofar as prohibitions of sherlocking center on the competitive advantages that platforms enjoy because of their dual role—thereby allowing some players to better calibrate their business decisions due to their preferential access to business users’ data—it should be noted that competition law does not impose a general duty to ensure a level playing field.[54] Further, a competitive advantage does not, in itself, amount to anticompetitive foreclosure under antitrust rules. Rather, foreclosure must not only be proved (in terms of actual or potential effects) but also assessed against potential benefits for consumers in terms of price, quality, and choice of new goods and services.[55]

Indeed, not every exclusionary effect is necessarily detrimental to competition.[56] Competition on the merits may, by definition, lead to the departure from the market or the marginalization of competitors that are less efficient and therefore less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation.[57] Automatically classifying any conduct with exclusionary effects as anticompetitive could well become a means to protect less-capable, less-efficient undertakings and would in no way protect more meritorious undertakings—thereby potentially hindering a market’s competitiveness.[58]

As recently clarified by the CJEU regarding the meaning of “competition on the merits,” any practice that, in its implementation, holds no economic interest for a dominant undertaking except that of eliminating competitors must be regarded as outside the scope of competition on the merits.[59] Referring to the cases of margin squeezes and essential facilities, the CJEU added that the same applies to practices that a hypothetical equally efficient competitor is unable to adopt because that practice relies on using resources or means inherent to the holding of such a dominant position.[60]

Therefore, while antitrust cases on sherlocking set out to ensure a level playing field and platform neutrality, and thus center on the competitive advantages that a platform enjoys because of its dual role, merely implementing a hybrid business model does not automatically put such practices outside the scope of competition on the merits. The only exception, according to the interpretation provided in Bronner, is the presence of an essential facility—i.e., an input to which access is indispensable because technical, legal, or economic obstacles make it impossible, or at least unreasonably difficult, to duplicate.[61]

As a result, unless it is proved that the hybrid platform is an essential facility, sherlocking and other forms of self-preferencing cannot be considered prima facie outside the scope of competition on the merits, or otherwise unlawful. Rather, any assessment of sherlocking demands the demonstration of anticompetitive effects, which in turn requires finding an impact on efficient firms’ ability and incentive to compete. In the scenario at-issue, for instance, the access to certain data may allow a platform to deliver new products or services; to improve existing products or services; or more generally to compete more efficiently not only with respect to the platform’s business users, but also against other platforms. Such an increase in both intra-platform and inter-platform competition would benefit consumers in terms of lower prices, better quality, and a wider choice of new or improved goods and services—i.e., competition on the merits.[62]

In Facebook Marketplace, the European Commission and UK CMA challenged the terms and conditions governing the provision of display-advertising and business-tool services to which Meta required its business customers to sign up.[63] In their view, Meta abused its dominant position by imposing unfair trading conditions on its advertising customers, which authorized Meta to use ads-related data derived from the latter in a way that could afford Meta a competitive advantage on Facebook Marketplace that would not have arisen from competition on the merits. Notably, antitrust authorities argued that Meta’s terms and conditions were unjustified, disproportionate, and unnecessary to provide online display-advertising services on Meta’s platforms.

Therefore, rather than directly questioning the platform’s dual role or hybrid business model, the European Commission and UK CMA decided to rely on traditional case law, which considers unfair those clauses that are unjustifiably unrelated to the purpose of the contract, unnecessarily limit the parties’ freedom, are disproportionate, or are unilaterally imposed or seriously opaque.[64] This demonstrates that, outside a theory of harm based on unfair terms and conditions, a hybrid platform’s use of nonpublic third-party business data to improve its own business decisions is generally consistent with antitrust provisions. Hence, an outright ban would be unjustified.

IV. Sherlocking to Mimic Business Users’ Products or Services

The second, and more intriguing, sherlocking scenario is illustrated by the Amazon Marketplace investigations and regards the original meaning of sherlocking—i.e., where a data advantage is used by a hybrid platform to mimic its business users’ products or services.

Where sherlocking charges assert that the practice allows some platforms to use business users’ data to compete against them by replicating their products or services, it should not be overlooked that the welfare effects of such a copying strategy are ambiguous. While the practice could benefit consumers in the short term by lowering prices and increasing choice, it may discourage innovation over the longer term if third parties anticipate being copied whenever they deliver successful products or services. Therefore, the success of an antitrust investigation essentially relies on demonstrating a harm to innovation that would induce business users to leave the market or stop developing their products and services. In other words, antitrust authorities should be able to demonstrate that, by allowing dominant platforms to free ride on their business users’ innovation efforts, sherlocking would negatively affect rivals’ ability to compete.

A. The Welfare Effects of Copying

The tradeoff between the short- and long-term welfare effects of copying has traditionally been analyzed in the context of the benefits and costs generated by intellectual-property protection.[65] In particular, the economic literature investigating the optimal life of patents[66] and copyrights[67] focuses on the efficient balance between dynamic benefits associated with innovation and the static costs of monopoly power granted by IPRs.

More recently, product imitation has instead been investigated in the different scenario of digital markets, where dominant platforms adopting a hybrid business model may use third-party sellers’ market data to design and promote their own products over their rivals’ offerings. Indeed, some studies report that large online platforms may attempt to protect their market position by creating “kill zones” around themselves—i.e., by acquiring, copying, or eliminating their rivals.[68] In such a novel setting, the welfare effects of copying are assessed regardless of the presence and potential enforcement of IPRs, and instead as part of a strategy aimed at excluding rivals by exploiting the dual role of both umpire and player to gain preferential access to sensitive data and free ride on rivals’ innovative efforts.[69]

Even in this context, however, a challenging tradeoff should be considered. Indeed, while in the short term consumers may benefit from the platform’s imitation strategy in terms of lower prices and higher quality, they may be harmed in the longer term if third parties are discouraged from delivering new products and services. As a result, while there is empirical evidence of hybrid platforms successfully entering third parties’ adjacent market segments,[70] the extant academic literature finds the welfare implications of such moves to be ambiguous.

A first strand of literature attempts to estimate the welfare impact of the hybrid business model. Notably, Andrei Hagiu, Tat-How Teh, and Julian Wright elaborated a model to address the potential implications of an outright ban on platforms’ dual mode, finding that such a structural remedy may harm consumer surplus and welfare even where the platform would otherwise engage in product imitation and self-preferencing.[71] According to the authors, banning the dual mode does not restore the third-party seller’s innovation incentives or the effective price competition between products, which are the putative harms caused by imitation and self-preferencing. Therefore, the authors’ evaluation was that interventions specifically targeting product imitation and self-preferencing were preferable.

Germán Gutiérrez suggested that banning the dual model would generate hardly any benefits for consumers, showing that, in the Amazon case, interventions that eliminate either the Prime program or product variety are likely to decrease welfare.[72]

Further, analyzing Amazon’s business model, Federico Etro found that the platform’s and consumers’ incentives are correctly aligned, and that Amazon’s business model of hosting sellers and charging commissions prevents the company from gaining through systematic self-preferencing for its private-label and first-party products.[73] In the same vein, looking at Amazon’s business model and monetization strategy, Patrick Andreoli-Versbach and Joshua Gans argued that Amazon does not have an obvious incentive to self-preference.[74] Indeed, Amazon’s profitability data show that, on average, the company’s operating margin is higher on third-party sales than on first-party retail sales.

Looking at how modeling details may yield different results with regard to the benefits and harms of the hybrid business model, Simon Anderson and Özlem Bedre-Defolie maintain that the platform’s choice to sell its own products benefits consumers by lowering prices when a monopoly platform hosts competitive fringe sellers, regardless of the platform’s position as a gatekeeper, whether sellers have an alternate channel to reach consumers, or whether alternate channels are perfect or imperfect substitutes for the platform channel.[75] On the other hand, the authors argued that platform product entry might harm consumers when a big seller with market power sells on its own channel and also on the platform. Indeed, in that case, the platform setting a seller fee before the big seller prices its differentiated products introduces double markups on the big seller’s platform-channel price and leaves some revenue to the big seller.

Studying whether Amazon engages in self-preferencing on its marketplace by favoring its own brands in search results, Chiara Farronato, Andrey Fradkin, and Alexander MacKay demonstrate empirically that Amazon brands remain about 30% cheaper and have 68% more reviews than other similar products.[76] The authors acknowledge, however, that their findings do not imply that consumers are hurt by Amazon brands’ position in search results.

Another strand of literature specifically tackles the welfare effects of sherlocking. In particular, Erik Madsen and Nikhil Vellodi developed a theoretical framework to demonstrate that a ban on insider imitation can either stifle or stimulate innovation, depending on the nature of innovation.[77] Specifically, the ban could stimulate innovation for experimental product categories, while reducing innovation in incremental product markets, since the former feature products with a large chance of superstar demand and the latter generate mostly products with middling demand.

Federico Etro maintains that the tradeoffs at-issue are too complex to be solved with simple interventions, such as bans on dual mode, self-preferencing, or copycatting.[78] Indeed, it is difficult to conclude that Amazon’s entry is biased toward expropriating third-party sellers, or that bans on dual mode, self-preferencing, or copycatting would benefit consumers, because such bans either degrade services and product variety or induce higher prices or commissions.

Similar results are provided by Jay Pil Choi, Kyungmin Kim, and Arijit Mukherjee, who developed a tractable model of a platform-run marketplace where the platform charges a referral fee to the sellers for access to the marketplace, and may also subsequently launch its own private-label product by copying a seller.[79] The authors found that a policy to either ban hybrid mode or only prohibit information use for the launch of private-label products may produce negative welfare implications.

Further, Radostina Shopova argues that, when introducing a private label, the marketplace operator does not have an incentive to distort competition and foreclose the outside seller, but does have an incentive to lower the fees charged to the outside seller and to vertically differentiate its own product in order to protect the seller’s channel.[80] Even when the intermediary is able to perfectly mimic the quality of the outside seller and monopolize its product space, the intermediary prefers to differentiate its offer and chooses a lower quality for the private-label product. Accordingly, as the purpose of private labels is to offer a lower-quality version of products aimed at consumers with a lower willingness to pay, a marketplace operator does not have an incentive to distort competition in favor of its own product and foreclose the seller of the original higher-quality product.

In addition, according to Jean-Pierre Dubé, curbing development of private-label programs would harm consumers and Amazon’s practices amount to textbook retailing, as they follow an off-the-shelf approach to managing private-label products that is standard for many retail chains in the West.[81] As a result, singling out Amazon’s practices would set a double standard.

Interestingly, such findings about the predictors and effects of Amazon’s entry into competition with third-party merchants on its own marketplace are confirmed by the only empirical study developed so far. Analyzing the Home & Kitchen department of Germany’s version of Amazon Marketplace between 2016 and 2021, Gregory S. Crawford, Matteo Courthoud, Regina Seibel, and Simon Zuzek found that Amazon’s entry strategy was more consistent with making Marketplace more attractive to consumers than with expropriating third-party merchants.[82] Notably, the study showed that, comparing Amazon’s entry decisions with those of the largest third-party merchants, Amazon tends to enter low-growth and low-quality products, which is consistent with a strategy that seeks to make Marketplace more attractive by expanding variety, lessening third-party market power, and/or enhancing product availability. The authors therefore found that Amazon’s entry on Amazon Marketplace demonstrated no systematic adverse effects and caused a mild market expansion.

Massimo Motta and Sandro Shelegia explored the interactions between copying and acquisitions, finding that copying (or the threat of copying) can modify the outcome of an acquisition negotiation.[83] According to their model, an incumbent may have both static and dynamic incentives to introduce a copycat version of a complementary product. The static rationale consists of lowering the price of the complementary product in order to capture more rents from it, while the dynamic incentive consists of harming a potential rival’s prospects of developing a substitute. The latter may, in turn, affect the direction of the entrant’s innovation: anticipating the incumbent’s copying strategy, the entrant may shift resources away from improvements that would compete with the incumbent’s primary product and toward developing complementary products.

Jingcun Cao, Avery Haviv, and Nan Li analyzed the opposite scenario—i.e., copycats that seek to mimic the design and user experience of incumbents’ successful products.[84] The authors find empirically that, on average, copycat apps do not have a significant effect on the demand for incumbent apps and that, as with traditional counterfeit products, they may generate a positive demand spillover toward authentic apps.

Massimo Motta also investigated the potential foreclosure effects of copycat strategies adopted by platforms that have committed to non-discriminatory terms of access for third parties (e.g., the Apple App Store, Google Play, and Amazon Marketplace).[85] Notably, according to Motta, when a third-party seller is particularly successful and the platform is unable to raise the fees and commissions paid by that seller, the platform may prefer to copy its product or service to extract more profits from users, rather than rely solely on third-party sales. The author acknowledged, however, that even though this practice may create an incentive for self-preferencing, it does not necessarily have anticompetitive effects. Indeed, the welfare effects of the copying strategy are a priori ambiguous.[86] On the one hand, the platform’s copying of a third-party product benefits consumers by increasing variety and competition among products; on the other hand, copying might be wasteful for society, in that it entails a fixed cost and may discourage innovation if rivals anticipate that they will be systematically copied whenever they have a successful product.[87] Introducing a copycat version of a product offered by a firm in an adjacent market might therefore be procompetitive.

B. Antitrust Assessment: Competition, Innovation, and Double Standards

The economic literature demonstrates that the rationale and welfare effects of sherlocking by hybrid platforms are decidedly ambiguous. Against concerns about the foreclosure of rivals, some studies provide a different narrative, illustrating that the strategy is more consistent with making the platform more attractive to consumers (by differentiating the quality and pricing of the offer) than with expropriating business users.[88] Furthermore, copies, imitations, and replicas undoubtedly benefit consumers with more choice and lower prices.

Therefore, the only way to deem sherlocking anticompetitive is to demonstrate that its long-term deterrent effects on innovation (i.e., reducing rivals’ incentives to invest in new products and services) outweigh consumers’ short-term advantages.[89] Moreover, such deterrent effects must not be merely hypothetical, as a finding of abuse cannot be based on a mere possibility of harm.[90] In any case, such complex tradeoffs are at odds with a blanket ban.[91]

Moreover, assessments of sherlocking’s potential impact on innovation cannot disregard the role of IPRs, which are by design the primary legal means to promote innovation. From this perspective, intellectual-property protection is best characterized as another form of tradeoff. Indeed, the economic rationale of IPRs (in particular, of patents and copyrights) involves, among other things, a tradeoff between access and incentives—i.e., between short-term competitive restrictions and long-term innovative benefits.[92]

According to the traditional incentive-based theory of intellectual property, free riding would represent a dangerous threat that justifies the exclusive rights granted by intellectual-property protection. As a consequence, so long as copycat expropriation does not infringe IPRs, it should be presumed legitimate and procompetitive. Indeed, such free riding is more of an intellectual-property issue than a competitive concern.

In addition, to strike a fair balance between restricting competition and providing incentives to innovate, the exclusive rights granted by IPRs are not unlimited, either in terms of duration or in terms of lawful (although not authorized) uses of the protected subject matter. Under the doctrine of fair use, for instance, reverse engineering represents a legitimate way to obtain information about a firm’s product, even if the intended result is to produce a directly competing product that may steer customers away from the initial product and the patented invention.

Outside of reverse engineering, copying is legitimately exercised once IPRs expire, when copycat competitors can reproduce previously protected elements. Faced with the competitive pressure exerted by new rivals, holders of expired IPRs may react by seeking solutions designed to block, or at least limit, the circulation of rival products. They could, for example, seek new IPRs to cover aspects or functionalities different from those previously protected. They could also bring (sometimes specious) legal actions for infringement of the new IPR or for unfair competition by slavish imitation. For these reasons, there have been occasions when copycat competitors have received protection from antitrust authorities against sham litigation brought by IPR holders concerned about losing margins due to pricing pressure from copycats.[93]

Finally, within the longstanding debate on the intersection of intellectual-property protection and competition, EU antitrust authorities have traditionally been unsympathetic toward restrictions imposed by IPRs. The success of the essential-facility doctrine (EFD) is the most telling example of this attitude, as its application in the EU has been extended to IPRs. As a matter of fact, the EFD represents the main antitrust tool for overseeing intellectual property in the EU.[94]

After Microsoft, EU courts substantially dismantled one of the “exceptional circumstances” previously elaborated in Magill and specifically introduced for cases involving IPRs, with the aim of safeguarding a balance between restrictions on access and incentives to innovate. Whereas the CJEU established in Magill that a refusal to grant an IP license should be considered anticompetitive if it prevents the emergence of a new product for which there is potential consumer demand, in Microsoft the General Court considered that requirement satisfied even where access to an IPR is necessary merely for rivals to develop improved products with added value.

Given this background, recent competition-policy concerns about sherlocking are surprising. To briefly recap, the practice at issue increases competition in the short term, but may affect incentives to innovate in the long term. With regard to the latter, however, the practice neither involves products protected by IPRs nor constitutes a slavish imitation that may be caught under unfair-competition laws.

The case of Amazon, which has received considerable media coverage, illustrates the relevance of IP protection. Amazon has been accused of cloning batteries, power strips, wool runner shoes, everyday sling bags, camera tripods, and furniture.[95] One may wonder what kind of innovation should be safeguarded in these cases against potential copies. Admittedly, such examples appear consistent with the findings of the empirical study by Crawford et al. illustrated above, which indicate that Amazon tends to enter low-quality products in order to expand variety on the Marketplace and make it more attractive to consumers.

Nonetheless, if an IPR is involved, right holders are provided with proper means to protect their products against infringement. Indeed, one of the alleged targeted companies (Williams-Sonoma) did file a complaint for design and trademark infringement, claiming that Amazon had copied a chair (Orb Dining Chair) sold by its West Elm brand. According to Williams-Sonoma, the Upholstered Orb Office Chair—which Amazon began selling under its Rivet brand in 2018—was so similar that the ordinary observer would be confused by the imitation.[96] If, instead, the copycat strategy does not infringe any IPR, the potential impact on innovation might not be considered particularly worrisome—at least at first glance.

Further, it is not clear cut either to what degree third-party business data are otherwise unavailable or to what degree such data actually facilitate copying. For instance, in the case of Amazon, public product reviews supply a great deal of information[97] and, regardless of whether a third party sells a product on the Marketplace, anyone can obtain the item for purposes of reverse engineering.[98]

In addition, antitrust authorities are accustomed to intervening against opportunistic behavior by IPR holders. European competition authorities, in particular, have never seemed particularly responsive to the motives of inventors and creators when these are weighed against the perceived need to encourage maximum market openness.

It should also be noted that cloning is a common strategy in traditional markets (e.g., food products)[99] and has been the subject of longstanding controversies between high-end fashion brands and fast-fashion brands (e.g., Zara, H&M).[100] Furthermore, brick-and-mortar retailers also introduce private labels and use other brands’ sales records in deciding what to produce.[101]

So, what makes sherlocking so different and dangerous when deployed in digital markets as to push competition authorities to contradict themselves?[102]

The double standard against sherlocking reflects the same concern, and pursues the same goal, as the various other attempts to forbid any form of self-preferencing in digital markets. Namely, antitrust investigations of sherlocking are fundamentally driven by a bias against hybrid and vertically integrated players. The investigations rely on the assumption that conflicts of interest have anticompetitive implications and that platform neutrality should therefore be promoted to ensure the neutrality of the competitive process.[103] Accordingly, hostility toward sherlocking may extend to both of the scenarios illustrated above—i.e., the use of nonpublic third-party business data either to inform any business decision or, more narrowly, to pursue copycat strategies.

As a result, however, competition authorities end up challenging a specific business model, rather than the specific practice at issue, which brings undisputed competitive benefits in terms of lower prices and wider consumer choice, and which should therefore be balanced against potential exclusionary risks. As the CJEU has pointed out, the concept of competition on the merits:

…covers, in principle, a competitive situation in which consumers benefit from lower prices, better quality and a wider choice of new or improved goods and services. Thus, … conduct which has the effect of broadening consumer choice by putting new goods on the market or by increasing the quantity or quality of the goods already on offer must, inter alia, be considered to come within the scope of competition on the merits.[104]

Further, in light of the “as-efficient competitor” principle, competition on the merits may lead to “the departure from the market, or the marginalization of, competitors that are less efficient and so less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation.”[105]

It has been correctly noted that the “as-efficient competitor” principle is a reminder of what competition law is about and how it differs from regulation.[106] Competition law aims to protect a process, rather than engineering market structures to fulfill a particular vision of how an industry is to operate.[107] In other words, competition law does not target firms on the basis of size or status and does not infer harm from (market or bargaining) power or business model. Therefore, neither the dual role played by some large online platforms nor their preferential access to sensitive business data or their vertical integration, by themselves, create a competition problem. Competitive advantages deriving from size, status, power, or business model cannot be considered per se outside the scope of competition on the merits.

Some policymakers have sought to resolve these tensions over how competition law treats sherlocking by introducing or envisaging an outright ban. These initiatives and proposals have clearly been inspired by antitrust investigations, but they draw the wrong lessons from them. Instead of taking stock of the challenging tradeoffs between short-term benefits and long-term risks that an antitrust assessment of sherlocking requires, they blame competition law for failing to provide effective tools to achieve the policy goal of platform neutrality.[108] The regulatory solution thus serves merely to bypass the traditional burden of proof required by antitrust analysis and to achieve what competition-law enforcement cannot provide.

V. Conclusion

The bias against self-preferencing strikes again. Concerns about hybrid platforms’ potential conflicts of interest have led policymakers to seek prohibitions to curb different forms of self-preferencing, making the latter the symbol of the competition-policy zeitgeist in digital markets. Sherlocking shares this fate. Indeed, the DMA outlaws any use of business users’ nonpublic data, and similar proposals have been advanced in the United States, Australia, and Japan. Further, as with other forms of self-preferencing, the regulatory initiatives against sherlocking have been inspired by previous antitrust proceedings.

Drawing on these antitrust investigations, the present research shows the extent to which an outright ban on sherlocking is unjustified. Notably, the practice at issue encompasses two different scenarios: the broad case in which a gatekeeper exploits its preferential access to business users’ data to better calibrate all of its business decisions, and the narrow case in which such data are used to adopt a copycat strategy. In either scenario, the welfare effects and competitive implications of sherlocking are unclear.

Indeed, the use of such data by a hybrid platform to improve its business decisions generally should be classified as competition on the merits, and may yield an increase in both intra-platform (with respect to business users) and inter-platform (with respect to other platforms) competition. This would benefit consumers in terms of lower prices, better quality, and a wider choice of new or improved goods and services. In a similar vein, if sherlocking is used to deliver replicas of business users’ products or services, the anti-competitiveness of such a strategy could only be established through a cumbersome tradeoff between short-term benefits (i.e., lower prices and wider choice) and negative long-term effects on innovation.

An implicit confirmation of the difficulties encountered in demonstrating the anti-competitiveness of sherlocking comes from the recent complaint the FTC filed against Amazon.[109] Current FTC Chairwoman Lina Khan devoted a significant portion of her previous academic career to questioning Amazon’s practices (including its decision to introduce private labels inspired by third-party products)[110] and to supporting the adoption of structural-separation remedies to tackle the conflicts of interest that induce platforms to exploit their “systemic informational advantage (gleaned from competitors)” to thwart rivals and strengthen their own position by introducing replica products.[111] Despite these premises, and although the FTC’s complaint targets numerous practices described as an interconnected strategy to block off every major avenue of competition, sherlocking is surprisingly off the radar.

Regulatory initiatives to ban sherlocking in order to ensure platform neutrality with respect to business users and a level playing field among rivals would sacrifice undisputed procompetitive benefits on the altar of policy goals that competition rules are not meant to pursue. Sherlocking therefore appears to be a perfect case study of the side effects of unwarranted interventions in digital markets.

[1] Giuseppe Colangelo, Antitrust Unchained: The EU’s Case Against Self-Preferencing, 72 GRUR International 538 (2023).

[2] Jacques Crémer, Yves-Alexandre de Montjoye, & Heike Schweitzer, Competition Policy for the Digital Era (2019), 7, https://op.europa.eu/en/publication-detail/-/publication/21dc175c-7b76-11e9-9f05-01aa75ed71a1/language-en (all links last accessed 3 Jan. 2024); UK Digital Competition Expert Panel, Unlocking Digital Competition (2019), 58, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[3] You’ve Been Sherlocked, The Economist (2012), https://www.economist.com/babbage/2012/07/13/youve-been-sherlocked.

[4] Regulation (EU) 2022/1925 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (2022), OJ L 265/1, Article 6(2).

[5] U.S. S. 2992, American Innovation and Choice Online Act (AICOA) (2022), Section 3(a)(6), available at https://www.klobuchar.senate.gov/public/_cache/files/b/9/b90b9806-cecf-4796-89fb-561e5322531c/B1F51354E81BEFF3EB96956A7A5E1D6A.sil22713.pdf. See also U.S. House of Representatives, Subcommittee on Antitrust, Commercial, and Administrative Law, Investigation of Competition in Digital Markets, Majority Staff Reports and Recommendations (2020), 164, 362-364, 378, available at https://democrats-judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf.

[6] Australian Competition and Consumer Commission, Digital Platform Services Inquiry Report on Regulatory Reform (2022), 125, https://www.accc.gov.au/about-us/publications/serial-publications/digital-platform-services-inquiry-2020-2025/digital-platform-services-inquiry-september-2022-interim-report-regulatory-reform.

[7] Japan Fair Trade Commission, Market Study Report on Mobile OS and Mobile App Distribution (2023), https://www.jftc.go.jp/en/pressreleases/yearly-2023/February/230209.html.

[8] European Commission, 10 Nov. 2020, Case AT.40462, Amazon Marketplace; see Press Release, Commission Sends Statement of Objections to Amazon for the Use of Non-Public Independent Seller Data and Opens Second Investigation into Its E-Commerce Business Practices, European Commission (2020), https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2077.

[9] Press Release, CMA Investigates Amazon Over Suspected Anti-Competitive Practices, UK Competition and Markets Authority (2022), https://www.gov.uk/government/news/cma-investigates-amazon-over-suspected-anti-competitive-practices.

[10] European Commission, 16 Jun. 2020, Case AT.40716, Apple – App Store Practices.

[11] Press Release, Commission Sends Statement of Objections to Meta over Abusive Practices Benefiting Facebook Marketplace, European Commission (2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7728; Press Release, CMA Investigates Facebook’s Use of Ad Data, UK Competition and Markets Authority (2021), https://www.gov.uk/government/news/cma-investigates-facebook-s-use-of-ad-data.

[12] DMA, supra note 4, Recital 10 and Article 1(6).

[13] GWB Digitalization Act, 18 Jan. 2021, Section 19a. On risks of overlaps between the DMA and the competition law enforcement, see Giuseppe Colangelo, The European Digital Markets Act and Antitrust Enforcement: A Liaison Dangereuse, 47 European Law Review 597.

[14] GWB, supra note 13, Section 19a (2)(4)(b).

[15] Press Release, Commission Sends Statement of Objections to Apple Clarifying Concerns over App Store Rules for Music Streaming Providers, European Commission (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_1217.

[16] European Commission, 20 Dec. 2022, Case AT.40462; Press Release, Commission Accepts Commitments by Amazon Barring It from Using Marketplace Seller Data, and Ensuring Equal Access to Buy Box and Prime, European Commission (2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7777; UK Competition and Markets Authority, 3 Nov. 2023, Case No. 51184, https://www.gov.uk/cma-cases/investigation-into-amazons-marketplace.

[17] UK Competition and Markets Authority, 3 Nov. 2023, Case AT.51013, https://www.gov.uk/cma-cases/investigation-into-facebooks-use-of-data.

[18] See, e.g., Gil Tono & Lewis Crofts, Amazon Data Commitments Match DMA Obligations, EU’s Vestager Says, mLex (2022), https://mlexmarketinsight.com/news/insight/amazon-data-commitments-match-dma-obligation-eu-s-vestager-says (reporting that Commissioner Vestager stated that Amazon’s data commitments definitively appear to match what would be asked within the DMA).

[19] DMA, supra note 4, Recital 46.

[20] Id., Article 6(2) (also stating that, for the purposes of the prohibition, non-publicly available data shall include any aggregated and non-aggregated data generated by business users that can be inferred from, or collected through, the commercial activities of business users or their customers, including click, search, view, and voice data, on the relevant core platform services or on services provided together with, or in support of, the relevant core platform services of the gatekeeper).

[21] AICOA, supra note 5.

[22] U.S. House of Representatives, supra note 5; see also Lina M. Khan, The Separation of Platforms and Commerce, 119 Columbia Law Review 973 (2019).

[23] U.S. Federal Trade Commission, et al. v. Amazon.com, Inc., Case No. 2:23-cv-01495 (W.D. Wash., 2023).

[24] Australian Competition and Consumer Commission, supra note 6, 125.

[25] Id., 124.

[26] Japan Fair Trade Commission, supra note 7, 144.

[27] European Commission, supra note 8. But see also Amazon, Supporting Sellers with Tools, Insights, and Data (2021), https://www.aboutamazon.eu/news/policy/supporting-sellers-with-tools-insights-and-data (claiming that the company is just using aggregate (rather than individual) data: “Just like our third-party sellers and other retailers across the world, Amazon also uses data to run our business. We use aggregated data about customers’ experience across the store to continuously improve it for everyone, such as by ensuring that the store has popular items in stock, customers are finding the products they want to purchase, or connecting customers to great new products through automated merchandising.”)

[28] European Commission, supra note 16.

[29] UK Competition and Markets Authority, supra notes 9 and 16.

[30] Bundeskartellamt, 5 Jul. 2022, Case B2-55/21, paras. 493, 504, and 518.

[31] Id., para. 536.

[32] European Commission, supra note 10.

[33] European Commission, supra note 11; UK Competition and Markets Authority, supra note 11.

[34] European Commission, supra note 16. In a similar vein, see also UK Competition and Markets Authority, supra note 16, paras. 4.2-4.7.

[35] European Commission, supra note 16, para. 111.

[36] Id., para. 123.

[37] Crémer, de Montjoye, & Schweitzer, supra note 2, 33-34.

[38] See, e.g., Marc Bourreau, Some Economics of Digital Ecosystems, OECD Hearing on Competition Economics of Digital Ecosystems (2020), https://www.oecd.org/daf/competition/competition-economics-of-digital-ecosystems.htm; Amelia Fletcher, Digital Competition Policy: Are Ecosystems Different?, OECD Hearing on Competition Economics of Digital Ecosystems (2020).

[39] See, e.g., Cristina Caffarra, Matthew Elliott, & Andrea Galeotti, ‘Ecosystem’ Theories of Harm in Digital Mergers: New Insights from Network Economics, VoxEU (2023), https://cepr.org/voxeu/columns/ecosystem-theories-harm-digital-mergers-new-insights-network-economics-part-1 (arguing that, in merger control, the implementation of an ecosystem theory of harm would require assessing how a conglomerate acquisition can change the network of capabilities (e.g., proprietary software, brand, customer-base, data) in order to evaluate how easily competitors can obtain alternative assets to those being acquired); for a different view, see Geoffrey A. Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 George Mason Law Review 1281(2021).

[40] See, e.g., Viktoria H.S.E. Robertson, Digital merger control: adapting theories of harm, (forthcoming) European Competition Journal; Caffarra, Elliott, & Galeotti, supra note 39; OECD, Theories of Harm for Digital Mergers (2023), available at www.oecd.org/daf/competition/theories-of-harm-for-digital-mergers-2023.pdf; Bundeskartellamt, Merger Control in the Digital Age – Challenges and Development Perspectives (2022), available at https://www.bundeskartellamt.de/SharedDocs/Publikation/EN/Diskussions_Hintergrundpapiere/2022/Working_Group_on_Competition_Law_2022.pdf?__blob=publicationFile&v=2; Elena Argentesi, Paolo Buccirossi, Emilio Calvano, Tomaso Duso, Alessia Marrazzo, & Salvatore Nava, Merger Policy in Digital Markets: An Ex Post Assessment, 17 Journal of Competition Law & Economics 95 (2021); Marc Bourreau & Alexandre de Streel, Digital Conglomerates and EU Competition Policy (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3350512.

[41] Bundeskartellamt, 11 Feb. 2022, Case B6-21/22, https://www.bundeskartellamt.de/SharedDocs/Entscheidung/EN/Fallberichte/Fusionskontrolle/2022/B6-21-22.html;jsessionid=C0837BD430A8C9C8E04D133B0441EB95.1_cid362?nn=4136442.

[42] UK Competition and Markets Authority, Microsoft / Activision Blizzard Merger Inquiry (2023), https://www.gov.uk/cma-cases/microsoft-slash-activision-blizzard-merger-inquiry.

[43] See European Commission, Commission Prohibits Proposed Acquisition of eTraveli by Booking (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4573 (finding that a flight product is a crucial growth avenue in Booking’s ecosystem, which revolves around its hotel online-travel-agency (OTA) business, as it would generate significant additional traffic to the platform, thus allowing Booking to benefit from existing customer inertia and making it more difficult for competitors to contest Booking’s position in the hotel OTA market).

[44] Thomas Eisenmann, Geoffrey Parker, & Marshall Van Alstyne, Platform Envelopment, 32 Strategic Management Journal 1270 (2011).

[45] See, e.g., Colangelo, supra note 1, and Pablo Ibáñez Colomo, Self-Preferencing: Yet Another Epithet in Need of Limiting Principles, 43 World Competition 417 (2020) (investigating whether and to what extent self-preferencing could be considered a new standalone offense in EU competition law); see also European Commission, Digital Markets Act – Impact Assessment Support Study (2020), 294, https://op.europa.eu/en/publication-detail/-/publication/0a9a636a-3e83-11eb-b27b-01aa75ed71a1/language-en (raising doubts about the novelty of this new theory of harm, which seems similar to the well-established leveraging theories of harm of tying and bundling, and margin squeeze).

[46] European Commission, supra note 45, 16.

[47] European Commission, 27 Jun. 2017, Case AT.39740, Google Search (Shopping).

[48] See General Court, 10 Nov. 2021, Case T-612/17, Google LLC and Alphabet Inc. v. European Commission, ECLI:EU:T:2021:763, para. 155 (stating that the general principle of equal treatment obligates vertically integrated platforms to refrain from favoring their own services as opposed to rival ones; nonetheless, the ruling framed self-preferencing as discriminatory abuse).

[49] In the meantime, however, see Opinion of the Advocate General Kokott, 11 Jan. 2024, Case C-48/22 P, Google v. European Commission, ECLI:EU:C:2024:14, paras. 90 and 95 (arguing that the self-preferencing of which Google is accused constitutes an independent form of abuse, albeit one that exhibits some proximity to cases involving margin squeezing).

[50] European Commission, Commission Sends Amazon Statement of Objections over Proposed Acquisition of iRobot (2023), https://ec.europa.eu/commission/presscorner/detail/en/IP_23_5990.

[51] The same concerns and approach have been shared by the CMA, although it reached a different conclusion, finding that the new merged entity would not have incentive to self-preference its own branded RVCs: see UK Competition and Markets Authority, Amazon / iRobot Merger Inquiry – Clearance Decision (2023), paras. 160, 188, and 231, https://www.gov.uk/cma-cases/amazon-slash-irobot-merger-inquiry.

[52] See European Commission, supra note 45, 304.

[53] Id., 313-314 (envisaging, among potential remedies, the imposition of a duty to make all data used by the platform for strategic decisions available to third parties); see also Désirée Klinger, Jonathan Bokemeyer, Benjamin Della Rocca, & Rafael Bezerra Nunes, Amazon’s Theory of Harm, Yale University Thurman Arnold Project (2020), 19, available at https://som.yale.edu/sites/default/files/2022-01/DTH-Amazon.pdf.

[54] Colangelo, supra note 1; see also Oscar Borgogno & Giuseppe Colangelo, Platform and Device Neutrality Regime: The New Competition Rulebook for App Stores?, 67 Antitrust Bulletin 451 (2022).

[55] See Court of Justice of the European Union (CJEU), 12 May 2022, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato, ECLI:EU:C:2022:379; 19 Apr. 2018, Case C-525/16, MEO v. Autoridade da Concorrência, ECLI:EU:C:2018:270; 6 Sep. 2017, Case C-413/14 P, Intel v. Commission, ECLI:EU:C:2017:632; 6 Oct. 2015, Case C-23/14, Post Danmark A/S v. Konkurrencerådet (Post Danmark II), ECLI:EU:C:2015:651; 27 Mar. 2012, Case C-209/10, Post Danmark A/S v. Konkurrencerådet (Post Danmark I), ECLI:EU:C:2012:172; for a recent overview of the EU case law, see also Pablo Ibáñez Colomo, The (Second) Modernisation of Article 102 TFEU: Reconciling Effective Enforcement, Legal Certainty and Meaningful Judicial Review, SSRN (2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4598161.

[56] CJEU, Intel, supra note 55, paras. 133-134.

[57] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 73.

[58] Opinion of Advocate General Rantos, 9 Dec. 2021, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato, ECLI:EU:C:2021:998, para. 45.

[59] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 77.

[60] Id., paras. 77, 80, and 83.

[61] CJEU, 26 Nov. 1998, Case C-7/97, Oscar Bronner GmbH & Co. KG v. Mediaprint Zeitungs- und Zeitschriftenverlag GmbH & Co. KG, Mediaprint Zeitungsvertriebsgesellschaft mbH & Co. KG and Mediaprint Anzeigengesellschaft mbH & Co. KG, ECLI:EU:C:1998:569.

[62] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 85.

[63] European Commission, supra note 11; UK Competition and Markets Authority, supra note 17, paras. 2.6, 4.3, and 4.7.

[64] See, e.g., European Commission, Case COMP D3/34493, DSD, para. 112 (2001) OJ L166/1; affirmed in GC, 24 May 2007, Case T-151/01, Der Grüne Punkt – Duales System Deutschland GmbH v. European Commission, ECLI:EU:T:2007:154 and CJEU, 16 Jul. 2009, Case C-385/07 P, ECLI:EU:C:2009:456; European Commission, Case IV/31.043, Tetra Pak II, paras. 105–08, (1992) OJ L72/1; European Commission, Case IV/29.971, GEMA III, (1982) OJ L94/12; CJEU, 27 Mar. 1974, Case 127/73, Belgische Radio en Televisie and Société Belge des Auteurs, Compositeurs et Éditeurs v. SV SABAM and NV Fonior, ECLI:EU:C:1974:25, para. 15; European Commission, Case IV/26.760, GEMA II, (1972) OJ L166/22; European Commission, Case IV/26.760, GEMA I, (1971) OJ L134/15.

[65] See, e.g., Richard A. Posner, Intellectual Property: The Law and Economics Approach, 19 The Journal of Economic Perspectives 57 (2005).

[66] See, e.g., Richard Gilbert & Carl Shapiro, Optimal Patent Length and Breadth, 21 The RAND Journal of Economics 106 (1990); Pankaj Tandon, Optimal Patents with Compulsory Licensing, 90 Journal of Political Economy 470 (1982); Frederic M. Scherer, Nordhaus’ Theory of Optimal Patent Life: A Geometric Reinterpretation, 62 American Economic Review 422 (1972); William D. Nordhaus, Invention, Growth, and Welfare: A Theoretical Treatment of Technological Change, Cambridge, MIT Press (1969).

[67] See, e.g., Hal R. Varian, Copying and Copyright, 19 The Journal of Economic Perspectives 121 (2005); William R. Johnson, The Economics of Copying, 93 Journal of Political Economy 158 (1985); Stephen Breyer, The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs, 84 Harvard Law Review 281 (1970).

[68] Sai Krishna Kamepalli, Raghuram Rajan, & Luigi Zingales, Kill Zone, NBER Working Paper No. 27146 (2022), http://www.nber.org/papers/w27146; Massimo Motta & Sandro Shelegia, The “Kill Zone”: Copying, Acquisition and Start-Ups’ Direction of Innovation, Barcelona GSE Working Paper Series Working Paper No. 1253 (2021), https://bse.eu/research/working-papers/kill-zone-copying-acquisition-and-start-ups-direction-innovation; U.S. House of Representatives, Subcommittee on Antitrust, Commercial, and Administrative Law, supra note 5, 164; Stigler Committee for the Study of Digital Platforms, Market Structure and Antitrust Subcommittee (2019), 54, https://research.chicagobooth.edu/stigler/events/single-events/antitrust-competition-conference/digital-platforms-committee; contra, see Geoffrey A. Manne, Samuel Bowman, & Dirk Auer, Technology Mergers and the Market for Corporate Control, 86 Missouri Law Review 1047 (2022).

[69] See also Howard A. Shelanski, Information, Innovation, and Competition Policy for the Internet, 161 University of Pennsylvania Law Review 1663 (2013), 1999 (describing as “forced free riding” the situation occurring when a platform appropriates innovation by other firms that depend on the platform for access to consumers).

[70] See Feng Zhu & Qihong Liu, Competing with Complementors: An Empirical Look at Amazon.com, 39 Strategic Management Journal 2618 (2018).

[71] Andrei Hagiu, Tat-How Teh, and Julian Wright, Should Platforms Be Allowed to Sell on Their Own Marketplaces?, 53 RAND Journal of Economics 297 (2022), (the model assumes that there is a platform that can function as a seller and/or a marketplace, a fringe of small third-party sellers that all sell an identical product, and an innovative seller that has a better product in the same category as the fringe sellers and can invest more in making its product even better; further, the model allows the different channels (on-platform or direct) and the different sellers to offer different values to consumers; therefore, third-party sellers (including the innovative seller) can choose whether to participate on the platform’s marketplace, and whenever they do, can price discriminate between consumers that come to it through the marketplace and consumers that come to it through the direct channel).

[72] See Germán Gutiérrez, The Welfare Consequences of Regulating Amazon (2022), available at http://germangutierrezg.com/Gutierrez2021_AMZ_welfare.pdf (building an equilibrium model where consumers choose products on the Amazon platform, while third-party sellers and Amazon endogenously set prices of products and platform fees).

[73] See Federico Etro, Product Selection in Online Marketplaces, 30 Journal of Economics & Management Strategy 614 (2021) (relying on a model where a marketplace such as Amazon provides a variety of products and can decide, for each product, whether to monetize sales by third-party sellers through a commission or become a seller on its platform, either by commercializing a private-label version or by purchasing from a vendor and reselling as a first-party retailer; as acknowledged by the author, a limitation of the model is that it assumes that the marketplace can set the profit-maximizing commission on each product; if this is not the case, third-party sales would be imperfectly monetized, which would increase the relative profitability of entry).

[74] Patrick Andreoli-Versbach & Joshua Gans, Interplay Between Amazon Store and Logistics, SSRN (2023), https://ssrn.com/abstract=4568024.

[75] Simon Anderson & Özlem Bedre-Defolie, Online Trade Platforms: Hosting, Selling, or Both?, 84 International Journal of Industrial Organization 102861 (2022).

[76] Chiara Farronato, Andrey Fradkin, & Alexander MacKay, Self-Preferencing at Amazon: Evidence From Search Rankings, NBER Working Paper No. 30894 (2023), http://www.nber.org/papers/w30894.

[77] See Erik Madsen & Nikhil Vellodi, Insider Imitation, SSRN (2023) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3832712 (introducing a two-stage model where the platform publicly commits to an imitation policy and the entrepreneur observes this policy and chooses whether to innovate: if she chooses not to, the game ends and both players earn profits normalized to zero; otherwise, the entrepreneur pays a fixed innovation cost to develop the product, which she then sells on a marketplace owned by the platform).

[78] Federico Etro, The Economics of Amazon, SSRN (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4307213.

[79] Jay Pil Choi, Kyungmin Kim, & Arijit Mukherjee, “Sherlocking” and Information Design by Hybrid Platforms, SSRN (2023), https://ssrn.com/abstract=4332558 (the model assumes that the platform chooses its referral fee at the beginning of the game and that the cost of entry is the same for both the seller and the platform).

[80] Radostina Shopova, Private Labels in Marketplaces, 89 International Journal of Industrial Organization 102949 (2023), (the model assumes that the market structure is given exogenously and that the quality of the seller’s product is also exogenous; therefore, the paper does not investigate how entry by a platform affects the innovation incentives of third-party sellers).

[81] Jean-Pierre Dubé, Amazon Private Brands: Self-Preferencing vs Traditional Retailing, SSRN (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4205988.

[82] Gregory S. Crawford, Matteo Courthoud, Regina Seibel, & Simon Zuzek, Amazon Entry on Amazon Marketplace, CEPR Discussion Paper No. 17531 (2022), https://cepr.org/publications/dp17531.

[83] Motta & Shelegia, supra note 68.

[84] Jingcun Cao, Avery Haviv, & Nan Li, The Spillover Effects of Copycat Apps and App Platform Governance, SSRN (2023), https://ssrn.com/abstract=4250292.

[85] Massimo Motta, Self-Preferencing and Foreclosure in Digital Markets: Theories of Harm for Abuse Cases, 90 International Journal of Industrial Organization 102974 (2023).

[86] Id.

[87] Id.

[88] See, e.g., Crawford, Courthoud, Seibel, & Zuzek, supra note 82; Etro, supra note 78; Shopova, supra note 80.

[89] Motta, supra note 85.

[90] Servizio Elettrico Nazionale, supra note 55, paras. 53-54; Post Danmark II, supra note 55, para. 65.

[91] Etro, supra note 78; see also Herbert Hovenkamp, The Looming Crisis in Antitrust Economics, 101 Boston University Law Review 489 (2021), 543, (arguing that: “Amazon’s practice of selling both its own products and those of rivals in close juxtaposition almost certainly benefits consumers by permitting close price comparisons. When Amazon introduces a product such as AmazonBasics AAA batteries in competition with Duracell, prices will go down. There is no evidence to suggest that the practice is so prone to abuse or so likely to harm consumers in other ways that it should be categorically condemned. Rather, it is an act of partial vertical integration similar to other practices that the antitrust laws have confronted and allowed in the past.”)

[92] On the more complex economic rationale of intellectual property, see, e.g., William M. Landes & Richard A. Posner, The Economic Structure of Intellectual Property Law, Cambridge, Harvard University Press (2003).

[93] See, e.g., Italian Competition Authority, 18 Jul. 2023 No. 30737, Case A538 – Sistemi di sigillatura multidiametro per cavi e tubi, (2023) Bulletin No. 31.

[94] See CJEU, 6 Apr. 1995, Joined Cases C-241/91 P and 242/91 P, RTE and ITP v. Commission, ECLI:EU:C:1995:98; 29 Apr. 2004, Case C-418/01, IMS Health GmbH & Co. OHG v. NDC Health GmbH & Co. GH, ECLI:EU:C:2004:257; General Court, 17 Sep. 2007, Case T-201/04, Microsoft v. Commission, ECLI:EU:T:2007:289; CJEU, 16 Jul. 2015, Case C-170/13, Huawei Technologies Co. Ltd v. ZTE Corp., ECLI:EU:C:2015:477.

[95] See, e.g., Dana Mattioli, How Amazon Wins: By Steamrolling Rivals and Partners, Wall Street Journal (2022), https://www.wsj.com/articles/amazon-competition-shopify-wayfair-allbirds-antitrust-11608235127; Aditya Kalra & Steve Stecklow, Amazon Copied Products and Rigged Search Results to Promote Its Own Brands, Documents Show, Reuters (2021), https://www.reuters.com/investigates/special-report/amazon-india-rigging.

[96] Williams-Sonoma, Inc. v. Amazon.Com, Inc., Case No. 18-cv-07548 (N.D. Cal., 2018). The suit was eventually dismissed, as the parties entered into a settlement agreement: Williams-Sonoma, Inc. v. Amazon.Com, Inc., Case No. 18-cv-07548-AGT (N.D. Cal., 2020).

[97] Amazon Best Sellers, https://www.amazon.com/Best-Sellers/zgbs.

[98] Hovenkamp, supra note 91, 2015-2016.

[99] Nicolas Petit, Big Tech and the Digital Economy, Oxford, Oxford University Press (2020), 224-225.

[100] For a recent analysis, see Zijun (June) Shi, Xiao Liu, Dokyun Lee, & Kannan Srinivasan, How Do Fast-Fashion Copycats Affect the Popularity of Premium Brands? Evidence from Social Media, 60 Journal of Marketing Research 1027 (2023).

[101] Lina M. Khan, Amazon’s Antitrust Paradox, 126 Yale Law Journal 710 (2017), 782.

[102] See Massimo Motta & Martin Peitz, Intervention Triggers and Underlying Theories of Harm, in Market Investigations: A New Competition Tool for Europe? (M. Motta, M. Peitz, & H. Schweitzer, eds.), Cambridge, Cambridge University Press (2022), 16, 59 (arguing that, while it is unclear to what extent products or ideas are worth protecting and/or can be protected from sherlocking and whether such cloning is really harmful to consumers, this is clearly an area where an antitrust investigation for abuse of dominant position would not help).

[103] Khan, supra note 101, 780 and 783 (arguing that Amazon’s conflicts of interest tarnish the neutrality of the competitive process and that the competitive implications are clear, as Amazon is exploiting the fact that some of its customers are also its rivals).

[104] Servizio Elettrico Nazionale, supra note 55, para. 85.

[105] Post Danmark I, supra note 55, para. 22.

[106] Ibáñez Colomo, supra note 55, 21-22.

[107] Id.

[108] See, e.g., DMA, supra note 4, Recital 5 (complaining that the scope of antitrust provisions is “limited to certain instances of market power, for example dominance on specific markets and of anti-competitive behaviour, and enforcement occurs ex post and requires an extensive investigation of often very complex facts on a case by case basis.”).

[109] U.S. Federal Trade Commission, et al. v. Amazon.com, Inc., supra note 23.

[110] Khan, supra note 101.

[111] Khan, supra note 22, 1003, referring to Amazon, Google, and Meta.
