A Competition Law & Economics Analysis of Sherlocking

ICLE White Paper

Abstract

Sherlocking refers to an online platform’s use of nonpublic third-party business data to improve its own business decisions—for instance, by mimicking the successful products and services of edge providers. Such a strategy represents a form of self-preferencing and, as with other theories about preferential access to data, it has been targeted by some policymakers and competition authorities because of the perceived competitive risks originating from the dual role played by hybrid platforms (acting as both referees governing their platforms and players competing with the businesses they host). This paper investigates the competitive implications of sherlocking, maintaining that an outright ban is unjustified. First, the paper shows that, by aiming to ensure platform neutrality, such a prohibition would cover scenarios (i.e., the use of nonpublic third-party business data to calibrate business decisions in general, rather than to adopt a pure copycat strategy) that should be analyzed separately. Indeed, in these scenarios, sherlocking may affect different forms of competition (inter-platform v. intra-platform competition). Second, the paper argues that, in either case, the practice’s anticompetitive effects are questionable and that the ban is fundamentally driven by a bias against hybrid and vertically integrated players.

I. Introduction

The dual role some large digital platforms play (as both intermediary and trader) has gained prominence among the economic arguments used to justify the recent wave of regulation hitting digital markets around the world. Many policymakers have expressed concern about potential conflicts of interest among companies that have adopted this hybrid model and that also control important gateways for business users. In other words, the argument goes, some online firms act not only as regulators who set their platforms’ rules and as referees who enforce those rules, but also as market players who compete with their business users. This raises the fear that large platforms could reserve preferential treatment for their own services and products, to the detriment of downstream rivals and consumers. That, in turn, has led to calls for platform-neutrality rules.

Toward this aim, essentially all of the legislative initiatives undertaken around the world in recent years to enhance competition in digital markets have included anti-discrimination provisions that target various forms of self-preferencing. Self-preferencing, it has been said, serves as the symbol of the current competition-policy zeitgeist in digital markets.[1] Indeed, this conduct is seen as facilitating leveraging strategies that would give gatekeepers the chance to entrench their power in core markets and extend it into associated markets.[2]

Against this background, so-called “sherlocking” has emerged as one form of self-preferencing. The term was coined roughly 20 years ago, after Apple updated its own app Sherlock (a search tool on its desktop-operating system) to mimic a third-party application called Watson, which was created by Karelia Software to complement the Apple tool’s earlier version.[3] According to critics of self-preferencing generally and sherlocking in particular, biased intermediation and related conflicts of interest allow gatekeepers to exploit their preferential access to business users’ data to compete against them by replicating successful products and services. The implied assumption is that this strategy is relevant to competition policy even where no intellectual-property rights (IPRs) are infringed and no slavish imitation sanctionable under unfair-competition laws is detected; were that not the case, sherlocking would already be prevented by the enforcement of those rules.

To tackle perceived misuse of gatekeepers’ market position, the European Union’s Digital Markets Act (DMA) introduced a ban on sherlocking.[4] Similar concerns have also motivated requests for intervention in the United States,[5] Australia,[6] and Japan.[7] In seeking to address at least two different theories of gatekeepers’ alleged conflicts of interest, these proposed bans on exploiting access to business users’ data are not necessarily limited to the risk of product imitation, but may include any business decision whatsoever that a platform may make while relying on that data.

In parallel with the regulatory initiatives, the conduct at issue has also been investigated in antitrust proceedings that appear to pursue the very same twofold goal. In particular, in November 2020, the European Commission sent a statement of objections to Amazon that argued the company had infringed antitrust rules through the systematic use of nonpublic business data from independent retailers who sell on the Amazon online marketplace in order to benefit Amazon’s own retail business, which directly competes with those retailers.[8] A similar investigation was opened by the UK Competition and Markets Authority (CMA) in July 2022.[9]

Further, as part of the investigation opened into Apple’s App Store rule requiring developers to use Apple’s in-app purchase mechanism to distribute paid apps and/or paid digital content, the European Commission also showed interest in evaluating whether Apple’s conduct might disintermediate competing developers from relevant customer data, while Apple obtained valuable data about those activities and its competitors’ offers.[10] The European Commission and UK CMA likewise launched an investigation into Facebook Marketplace, with accusations that Meta used data gathered from advertisers in order to compete with them in markets where the company is active, such as classified ads.[11]

There are two primary reasons these antitrust proceedings are relevant. First, many of the prohibitions envisaged in regulatory interventions (e.g., DMA) clearly took inspiration from the antitrust investigations, thus making it important to explore the insights that competition authorities may provide to support an outright ban. Second, given that regulatory intervention will be implemented alongside competition rules (especially in Europe) rather than displace them,[12] sherlocking can be assessed at both the EU and national level against dominant players that are not eligible for “gatekeeper” designation under the DMA. For those non-gatekeeper firms, the practice may still be investigated by antitrust authorities and assessed before courts, aside from the DMA’s per se prohibition. And, of course, investigations and assessments of sherlocking could also be made even in those jurisdictions where there isn’t an outright ban.

The former is well-illustrated by the German legislature’s decision to empower its national competition authority with a new tool to tackle abusive practices that are similar and functionally equivalent to those addressed by the DMA.[13] Indeed, as of January 2021, the Bundeskartellamt may identify positions of particular market relevance (undertakings of “paramount significance for competition across markets”) and assess their possible anticompetitive effects on competition in those areas of digital ecosystems in which individual companies may have a gatekeeper function. Both the initiative’s aims and its list of practices are similar to the DMA. They are distinguished primarily by the fact that the German list is exhaustive, and the practices at issue are not prohibited per se, but are instead subject to a reversal of the burden of proof, allowing firms to provide objective justifications. Of particular relevance for this analysis, one provision within the German list prohibits designated undertakings from “demanding terms and conditions that permit … processing data relevant for competition received from other undertakings for purposes other than those necessary for the provision of its own services to these undertakings without giving these undertakings sufficient choice as to whether, how and for what purpose such data are processed.”[14]

Unfortunately, none of the above-mentioned EU antitrust proceedings have concluded with a final decision that addresses the merits of sherlocking. This precludes evaluating whether the practice would have survived before the courts. Regarding the Apple investigation, the European Commission dropped the case over App Store rules and issued a new statement of objections that no longer mentions sherlocking.[15] Further, the European Commission and the UK CMA accepted the commitments offered by Amazon to close those investigations.[16] The CMA likewise accepted the commitments offered by Meta.[17]

Those outcomes can be explained by the DMA’s recent entry into force. Indeed, because of the need to comply with the new regulation, players designated as gatekeepers likely have lost interest in challenging antitrust investigations that target the very same conduct prohibited by the DMA.[18] After all, given that the DMA does not allow any efficiency defense against the listed prohibitions, even a successful appeal against an antitrust decision would be a pyrrhic victory. From the opposite perspective, the same applies to the European Commission, which may decide to save time, costs, and risks by dropping an ongoing case against a company designated as a gatekeeper under the DMA, knowing that the conduct under investigation will be prohibited in any case.

Nonetheless, despite the lack of any final decision on sherlocking, these antitrust assessments remain relevant. As already mentioned, the DMA does not displace competition law and, in any case, dominant platforms not designated as gatekeepers under the DMA still may face antitrust investigations over sherlocking. This applies all the more in jurisdictions, such as the United States, that are evaluating DMA-like legislative initiatives (e.g., the American Innovation and Choice Online Act, or “AICOA”).

Against this background, drawing on recent EU cases, this paper questions the alleged anticompetitive implications of sherlocking, as well as claims that the practice fails to comply with existing antitrust rules.

First, the paper illustrates that prohibitions on the use of nonpublic third-party business data would cover two different theories that should be analyzed separately. Whereas a broader case involves all the business decisions adopted by a dominant platform because of such preferential access (e.g., the launch of new products or services, the development or cessation of existing products or services, the calibration of pricing and management systems), a more specific case deals solely with the adoption of a copycat strategy. By conflating these theories in support of a blanket ban that condemns any use of nonpublic third-party business data, EU antitrust authorities are fundamentally motivated by the same policy goal pursued by the DMA—i.e., to impose a neutrality regime on large online platforms. The competitive implications differ significantly, however, as adopting copycat strategies may affect only intra-platform competition, while using such data to improve other business decisions could also affect inter-platform competition.

Second, the paper shows that, in both of these scenarios, the welfare effects of sherlocking are unclear. Notably, exploiting certain data to better understand the market could help a platform to develop new products and services, to improve existing products and services, or more generally to be more competitive with respect to both business users and other platforms. As such outcomes would benefit consumers in terms of price and quality, any competitive advantage achieved by the hybrid platform could be considered unlawful only if it is not achieved on the merits. In a similar vein, if sherlocking is used by a hybrid platform to deliver replicas of its business users’ products and services, that would likely provide short-term procompetitive effects benefitting consumers with more choice and lower prices. In this case, the only competitive harm that would justify an antitrust intervention resides in (uncertain) negative long-term effects on innovation.

As a result, in either scenario, an outright ban on sherlocking, such as that enshrined in the DMA, is economically unsound, since it would clearly harm consumers.

The paper is structured as follows. Section II describes the recent antitrust investigations of sherlocking, illustrating the various scenarios that might include the use of third-party business data. Section III investigates whether sherlocking may be considered outside the scope of competition on the merits for bringing competitive advantages to platforms solely because of their hybrid business model. Section IV analyzes sherlocking as a copycat strategy by investigating the ambiguous welfare effects of copying in digital markets and providing an antitrust assessment of the practice at issue. Section V concludes.

II. Antitrust Proceedings on Sherlocking: Platform Neutrality and Copycat Competition

Policymakers’ interest in sherlocking is part of a larger debate over potentially unfair strategies that large online platforms may deploy because of their dual role as an unavoidable trading partner for business users and a rival in complementary markets.

In this scenario, as summarized in Table 1, the DMA outlaws sherlocking, establishing that to “prevent gatekeepers from unfairly benefitting from their dual role,”[19] they are restrained from using, in competition with business users, “any data that is not publicly available that is generated or provided by those business users in the context of their use of the relevant core platform services or of the services provided together with, or in support of, the relevant core platform services, including data generated or provided by the customers of those business users.”[20] Recital 46 further clarifies that the “obligation should apply to the gatekeeper as a whole, including but not limited to its business unit that competes with the business users of a core platform service.”

A similar provision was included in the American Innovation and Choice Online Act (AICOA), which was considered, but not ultimately adopted, in the 117th U.S. Congress. AICOA, however, would limit the scope of the ban to the offer of products or services that would compete with those offered by business users.[21] Concerns about copycat strategies were also reported in the U.S. House of Representatives’ investigation of the state of competition in digital markets as supporting the request for structural-separation remedies and line-of-business restrictions to eliminate conflicts of interest where a dominant intermediary enters markets that place it in competition with dependent businesses.[22] Interestingly, however, in the recent complaint filed by the U.S. Federal Trade Commission (FTC) and 17 state attorneys general against Amazon that accuses the company of having deployed an interconnected strategy to block off every major avenue of competition (including price, product selection, quality, and innovation), there is no mention of sherlocking among the numerous unfair practices under investigation.[23]

Evaluating regulatory-reform proposals for digital markets, the Australian Competition and Consumer Commission (ACCC) also highlighted the risk of sherlocking, arguing that it could have an adverse effect on competition, notably on rivals’ ability to compete, when digital platforms exercise their strong market position to utilize nonpublic data to free ride on the innovation efforts of their rivals.[24] Therefore, the ACCC suggested adopting service-specific codes to address self-preferencing by, for instance, imposing data-separation requirements to restrain dominant app-store providers from using commercially sensitive data collected from the app-review process to develop their own apps.[25]

Finally, on a comparative note, it is also useful to mention the proposals advanced by the Japanese Fair Trade Commission (JFTC) in its recent market-study report on mobile ecosystems.[26] In order to ensure an equal footing among competitors, the JFTC specified that its suggestion to prevent Google and Apple from using nonpublic data generated by other developers’ apps pursues two purposes: such a ban would concern not only the use of that data to develop competing apps, products, and services, but also its use to develop their own apps, products, and services.

TABLE 1: Legislative Initiatives and Proposals to Ban Sherlocking

As noted above, sherlocking recently emerged as an antitrust offense in three investigations launched by the European Commission and the UK CMA.

In the first case, Amazon’s alleged reliance on marketplace sellers’ nonpublic business data has been claimed to distort fair competition on its platform and prevent effective competition. In its preliminary findings, the Commission argued that Amazon takes advantage of its hybrid business model, leveraging its access to nonpublic third-party sellers’ data (e.g., the number of ordered and shipped units of products; sellers’ revenues on the marketplace; the number of visits to sellers’ offers; data relating to shipping, to sellers’ past performance, and to other consumer claims on products, including the activated guarantees) to adjust its retail offers and strategic business decisions to the detriment of third-party sellers, which are direct competitors on the marketplace.[27] In particular, the Commission was concerned that Amazon uses such data for its decision to start and end sales of a product, for its pricing system, for its inventory-planning and management system, and to identify third-party sellers that Amazon’s vendor-recruitment teams should approach to invite them to become direct suppliers to Amazon Retail. To address the data-use concern, Amazon committed not to use nonpublic data relating to, or derived from, independent sellers’ activities on its marketplace for its retail business and not to use such data for the purposes of selling branded goods, as well as its private-label products.[28]

A parallel investigation ended with similar commitments in the UK.[29] According to the UK CMA, Amazon’s access to and use of nonpublic seller data could result in a competitive advantage for Amazon Retail arising from its operation of the marketplace, rather than from competition on the merits, and may lead to relevant adverse effects on competition. Notably, it was alleged this could result in a reduction in the scale and competitiveness of third-party sellers on the Amazon Marketplace; a reduction in the number and range of product offers from third-party sellers on the Amazon Marketplace; and/or less choice for consumers, who may be offered lower-quality goods and/or pay higher prices than would otherwise be the case.

It is also worth mentioning that, by determining that Amazon is an undertaking of paramount significance for competition across markets, the Bundeskartellamt emphasized the competitive advantage deriving from Amazon’s access to nonpublic data, such as Glance Views, sales figures, sale quantities, cost components of products, and reorder status.[30] Among other things, with particular regard to Amazon’s hybrid role, the Bundeskartellamt noted that the preferential access to competitively sensitive data “opens up the possibility for Amazon to optimize its own-brand assortment.”[31]

A second investigation involved Apple and its App Store rule.[32] According to the European Commission, the mandatory use of Apple’s own proprietary in-app purchase system (IAP) would, among other things, grant Apple full control over the relationship its competitors have with customers, thus disintermediating those competitors from customer data and allowing Apple to obtain valuable data about the activities and offers of its competitors.

Finally, Meta faced antitrust proceedings in both the EU and the UK.[33] The focus was on Facebook Marketplace—i.e., an online classified-ads service that allows users to advertise goods for sale. According to the European Commission and the CMA, Meta unilaterally imposes unfair trading conditions on competing online-classified ads services that advertise on Facebook or Instagram. These terms and conditions, which authorize Meta to use ads-related data derived from competitors for the benefit of Facebook Marketplace, are considered unjustified, as they impose an unnecessary burden on competitors and only benefit Facebook Marketplace. The suspicion is that Meta has used advertising data from Facebook Marketplace competitors for the strategic planning, product development, and launch of Facebook Marketplace, as well as for Marketplace’s operation and improvement.

Overall, these investigations share many features. The concerns about third-party business-data use, as well as about other forms of self-preferencing, revolve around the competitive advantages that accrue to a dominant platform because of its dual role. Such advantages are considered unfair, as they are not the result of the merits of a player, but derived purely and simply from its role as an important gateway to reach end users. Moreover, this access to valuable business data is not reciprocal. The feared risk is the marginalization of business users competing with gatekeepers on the gatekeepers’ platforms and, hence, the alleged harm to competition is the foreclosure of rivals in complementary markets (horizontal foreclosure).

The focus of these investigations was well-illustrated by the European Commission’s decision on Amazon’s practice.[34] The Commission’s concern was about the “data delta” that Amazon may exploit—namely, the additional data related to third-party sellers’ listings and transactions that are not available to, and cannot be replicated by, the third-party sellers themselves, but are available to and used by Amazon Retail for its own retail operations.[35] Contrary to Amazon Retail—which, according to the Commission’s allegations, would have full access to and would use such individual, real-time data of all its third-party sellers to calibrate its own retail decisions—sellers would have access only to their own individual listings and sales data. As a result, the Commission came to the (preliminary) conclusion that real-time access to and use of such a volume, variety, and granularity of non-publicly available data from its retail competitors generates a significant competitive advantage for Amazon Retail in each of the different decisional processes that drive its retail operations.[36]

On closer inspection, however, while antitrust authorities seem to target the use of nonpublic third-party business data as a single theory of harm, their allegations cover two different scenarios, along the lines of what has already been examined with reference to the international legislative initiatives and proposals. Indeed, the Facebook Marketplace case does not involve an allegation of copying, as Meta is accused of gathering data from its business users to launch and improve its classified-ads service, rather than to resell goods and services.

FIGURE 1: Sherlocking in Digital Markets

As illustrated above in Figure 1, the claim in one scenario is that preferential data use helps dominant players calibrate business decisions in general, while the other scenario involves the use of such data for a pure copycat strategy targeting an entire product or service, or some of its specific features.

In both scenarios, the aim of the investigations is to ensure platform neutrality. Accordingly, as shown by the accepted commitments, the solution envisaged by antitrust authorities is to impose data-separation requirements that restrain dominant platforms from using third-party commercially sensitive data. Putting aside the fact that these investigations concluded with commitments from the firms, however, their chances of success before a court differ significantly depending on whether they challenge a product-imitation strategy or any business decision adopted because of the “data delta.”

A. Sherlocking and Unconventional Theories of Harm for Digital Markets

Before analyzing how existing competition-law rules could be applied to the various scenarios involving the use of third-party business data, it is worth providing a brief overview of the framework in which the assessment of sherlocking is conducted. As competition in the digital economy is increasingly a competition among ecosystems,[37] a lively debate has emerged on the capacity of traditional antitrust analysis to adequately capture the peculiar features of digital markets. Indeed, the combination of strong economies of scale and scope; indirect network effects; data advantages and synergies across markets; and portfolio effects that facilitate ecosystem development all contribute to making digital markets highly concentrated, prone to tipping, and not easily contestable.[38] As a consequence, it has been suggested that addressing these distinctive features of digital markets requires an overhaul of the antitrust regime.

These discussions center on the capacity of the antitrust toolkit, and of theories of harm in particular, to illustrate whether and how a given practice, agreement, or merger is anticompetitive. At issue is whether traditional antitrust theories of harm are fit for purpose or whether novel theories of harm should be developed in response to emerging digital ecosystems. The latter requires looking at the competitive impact of expanding, protecting, or strengthening an ecosystem’s position, and particularly at whether such expansion serves to exploit a network of capabilities and to control access to key inputs and components.[39]

A significant portion of recent discussions around developing novel theories of harm to better address the characteristics of digital-business models and markets has been devoted to the topic of merger control—in part a result of the impressive number of acquisitions observed in recent years.[40] In particular, the focus has been on analyzing conglomerate mergers that involve acquiring a complementary or unrelated asset, which have traditionally been assumed to raise less-significant competition concerns.

In this regard, an ecosystem-based theory seems to have guided both the Bundeskartellamt in its assessment of Meta’s acquisition of Kustomer[41] and the CMA in Microsoft/Activision.[42] A more recent example is the European Commission’s decision to prohibit the proposed Booking/eTraveli merger, where the Commission explicitly noted that the transaction would have allowed Booking to expand its travel-services ecosystem.[43] The Commission’s concerns related primarily to a so-called “envelopment” strategy, in which a platform prominent in a specific market broadens its range of services into other markets where there is a significant overlap with customer groups the platform already serves.[44]

Against this background, putative self-preferencing harms represent one of the European Commission’s primary (albeit contentious)[45] attempts to develop new theories of harm built on conglomerate platforms’ ability to bundle services or use data from one market segment to inform product development in another.[46] Originally formulated in the Google Shopping decision,[47] the theory of harm of (leveraging through) self-preferencing has subsequently inspired the DMA, which targets different forms of preferential treatment, including sherlocking.

In particular, it is asserted that platforms may use self-preferencing to adopt a leveraging strategy with a twofold anticompetitive effect—that is, excluding or impeding rivals from competing with the platform (defensive leveraging) and extending the platform’s market power into associated markets (offensive leveraging). These goals can be pursued because of the unique role that some large digital platforms play. That is, they not only enjoy strategic market status by controlling ecosystems of integrated complementary products and services, which are crucial gateways for business users to reach end users, but they also perform a dual role as both a critical intermediary and a player active in complementors’ markets. Therefore, conflicts of interest may provide incentives for large vertically integrated platforms to favor their own products and services over those of their competitors.[48]

The Google Shopping theory of harm, while not yet validated by the Court of Justice of the European Union (CJEU),[49] has also found its way into merger analysis, as demonstrated by the European Commission’s recent assessment of Amazon/iRobot.[50] In its statement of objections, the Commission argued that the proposed acquisition of iRobot may give Amazon the ability and incentive to foreclose iRobot’s rivals by engaging in several foreclosure strategies to prevent them from selling robot vacuum cleaners (RVCs) on Amazon’s online marketplace and/or by degrading such rivals’ access to that marketplace. In particular, the Commission found that Amazon could deploy such self-preferencing strategies as delisting rival RVCs; reducing rival RVCs’ visibility in both organic and paid results displayed in Amazon’s marketplace; limiting access to certain widgets or commercially attractive labels; and/or raising the costs of iRobot’s rivals to advertise and sell their RVCs on Amazon’s marketplace.[51]

Sherlocking belongs to this framework of analysis and can be considered a form of self-preferencing, specifically because of the lack of reciprocity in accessing sensitive data.[52] Indeed, while gatekeeper platforms have access to relevant nonpublic third-party business data as a result of their role as unavoidable trading partners, they leverage this information exclusively, without sharing it with third-party sellers, thus further exacerbating an already uneven playing field.[53]

III. Sherlocking for Competitive Advantage: Hybrid Business Model, Neutrality Regimes, and Competition on the Merits

Insofar as prohibitions of sherlocking center on the competitive advantages that platforms enjoy because of their dual role—thereby allowing some players to better calibrate their business decisions due to their preferential access to business users’ data—it should be noted that competition law does not impose a general duty to ensure a level playing field.[54] Further, a competitive advantage does not, in itself, amount to anticompetitive foreclosure under antitrust rules. Rather, foreclosure must not only be proved (in terms of actual or potential effects) but also assessed against potential benefits for consumers in terms of price, quality, and choice of new goods and services.[55]

Indeed, not every exclusionary effect is necessarily detrimental to competition.[56] Competition on the merits may, by definition, lead to the departure from the market, or the marginalization, of competitors that are less efficient and therefore less attractive to consumers from the point of view of, among other things, price, choice, quality, or innovation.[57] Automatically classifying any conduct with exclusionary effects as anticompetitive could well become a means to protect less-capable, less-efficient undertakings and would in no way protect more meritorious undertakings—thereby potentially hindering a market’s competitiveness.[58]

As recently clarified by the CJEU regarding the meaning of “competition on the merits,” any practice that, in its implementation, holds no economic interest for a dominant undertaking except that of eliminating competitors must be regarded as outside the scope of competition on the merits.[59] Referring to the cases of margin squeezes and essential facilities, the CJEU added that the same applies to practices that a hypothetical equally efficient competitor is unable to adopt because that practice relies on using resources or means inherent to the holding of such a dominant position.[60]

Thus, while antitrust cases on sherlocking set out to ensure a level playing field and platform neutrality, and therefore center on the competitive advantages that a platform enjoys because of its dual role, merely implementing a hybrid business model does not automatically put such practices outside the scope of competition on the merits. The only exception, according to the interpretation provided in Bronner, is the presence of an essential facility—i.e., an input access to which is indispensable because technical, legal, or economic obstacles make it impossible, or at least unreasonably difficult, to duplicate.[61]

As a result, unless it is proved that the hybrid platform is an essential facility, sherlocking and other forms of self-preferencing cannot be considered prima facie outside the scope of competition on the merits, or otherwise unlawful. Rather, any assessment of sherlocking demands the demonstration of anticompetitive effects, which in turn requires finding an impact on efficient firms’ ability and incentive to compete. In the scenario at-issue, for instance, the access to certain data may allow a platform to deliver new products or services; to improve existing products or services; or more generally to compete more efficiently not only with respect to the platform’s business users, but also against other platforms. Such an increase in both intra-platform and inter-platform competition would benefit consumers in terms of lower prices, better quality, and a wider choice of new or improved goods and services—i.e., competition on the merits.[62]

In Facebook Marketplace, the European Commission and UK CMA challenged the terms and conditions governing the provision of display-advertising and business-tool services to which Meta required its business customers to sign up.[63] In their view, Meta abused its dominant position by imposing unfair trading conditions on its advertising customers, which authorized Meta to use ads-related data derived from the latter in a way that could afford Meta a competitive advantage on Facebook Marketplace that would not have arisen from competition on the merits. Notably, antitrust authorities argued that Meta’s terms and conditions were unjustified, disproportionate, and unnecessary to provide online display-advertising services on Meta’s platforms.

Therefore, rather than directly questioning the platform’s dual role or hybrid business model, the European Commission and UK CMA decided to rely on traditional case law that considers unfair those clauses that are unjustifiably unrelated to the purpose of the contract, unnecessarily limit the parties’ freedom, are disproportionate, or are unilaterally imposed or seriously opaque.[64] This demonstrates that, outside a theory of harm premised on the unfairness of terms and conditions, a hybrid platform’s use of nonpublic third-party business data to improve its own business decisions is generally consistent with antitrust provisions. Hence, an outright ban would be unjustified.

IV. Sherlocking to Mimic Business Users’ Products or Services

The second, and more intriguing, sherlocking scenario is illustrated by the Amazon Marketplace investigations and regards the original meaning of sherlocking—i.e., where a data advantage is used by a hybrid platform to mimic its business users’ products or services.

Where sherlocking charges assert that the practice allows some platforms to use business users’ data to compete against them by replicating their products or services, it should not be overlooked that the welfare effects of such a copying strategy are ambiguous. While the practice could benefit consumers in the short term by lowering prices and increasing choice, it may discourage innovation over the longer term if third parties anticipate being copied whenever they deliver successful products or services. Therefore, the success of an antitrust investigation essentially relies on demonstrating a harm to innovation that would induce business users to leave the market or stop developing their products and services. In other words, antitrust authorities should be able to demonstrate that, by allowing dominant platforms to free ride on their business users’ innovation efforts, sherlocking would negatively affect rivals’ ability to compete.

A. The Welfare Effects of Copying

The tradeoff between the short- and long-term welfare effects of copying has traditionally been analyzed in the context of the benefits and costs generated by intellectual-property protection.[65] In particular, the economic literature investigating the optimal life of patents[66] and copyrights[67] focuses on the efficient balance between dynamic benefits associated with innovation and the static costs of monopoly power granted by IPRs.
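To make the nature of this balance concrete, the tradeoff can be sketched in stylized form (an illustrative formulation in the spirit of the optimal-patent-life literature cited above, not drawn from any of the specific works referenced here). A social planner chooses the exclusivity term \(T\) to maximize discounted welfare, where the flow of social surplus is \(S_m\) during exclusivity and \(S_c > S_m\) after expiry, subject to the innovator finding it profitable to innovate at all:

\[
\max_{T \ge 0}\; W(T) \;=\; \int_{0}^{T} e^{-rt}\, S_m \, dt \;+\; \int_{T}^{\infty} e^{-rt}\, S_c \, dt
\qquad \text{s.t.} \qquad \int_{0}^{T} e^{-rt}\, \pi_m \, dt \;\ge\; C,
\]

where \(r\) is the discount rate, \(\pi_m\) the innovator’s flow of profit under exclusivity, and \(C\) the cost of innovation. Because \(W(T)\) is decreasing in \(T\) (each additional period of exclusivity prolongs the deadweight loss \(S_c - S_m\)), while a longer term relaxes the innovator’s participation constraint, the welfare-maximizing term \(T^{*}\) is the shortest term at which innovation remains profitable. The same access-versus-incentives logic carries over to platform copying: short-term gains from imitation must be weighed against any erosion of the incentive to innovate.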

More recently, product imitation has instead been investigated in the different setting of digital markets, where dominant platforms adopting a hybrid business model may use third-party sellers’ market data to design and promote their own products over their rivals’ offerings. Indeed, some studies report that large online platforms may attempt to protect their market position by creating “kill zones” around themselves—i.e., by acquiring, copying, or eliminating their rivals.[68] In this novel setting, the welfare effects of copying are assessed not by reference to the presence or potential enforcement of IPRs, but as part of a strategy aimed at excluding rivals by exploiting the dual role of umpire and player to gain preferential access to sensitive data and free ride on rivals’ innovative efforts.[69]

Even in this context, however, a challenging tradeoff must be considered. While in the short term consumers may benefit from the platform’s imitation strategy in terms of lower prices and higher quality, they may be harmed in the longer term if third parties are discouraged from delivering new products and services. As a result, while there is empirical evidence of hybrid platforms successfully entering third parties’ adjacent market segments,[70] the extant academic literature finds the welfare implications of such moves to be ambiguous.

A first strand of literature attempts to estimate the welfare impact of the hybrid business model. Notably, Andrei Hagiu, Tat-How Teh, and Julian Wright developed a model to address the potential implications of an outright ban on platforms’ dual mode, finding that such a structural remedy may harm consumer surplus and welfare even where the platform would otherwise engage in product imitation and self-preferencing.[71] According to the authors, banning the dual mode does not restore the third-party seller’s innovation incentives or effective price competition between products, which are the putative harms caused by imitation and self-preferencing. The authors therefore concluded that interventions specifically targeting product imitation and self-preferencing are preferable.

Germán Gutiérrez suggested that banning the dual mode would generate hardly any benefits for consumers, showing that, in the Amazon case, interventions that eliminate either the Prime program or product variety are likely to decrease welfare.[72]

Further, analyzing Amazon’s business model, Federico Etro found that the platform’s and consumers’ incentives are correctly aligned, and that Amazon’s business model of hosting sellers and charging commissions prevents the company from gaining through systematic self-preferencing for its private-label and first-party products.[73] In the same vein, looking at Amazon’s business model and monetization strategy, Patrick Andreoli-Versbach and Joshua Gans argued that the company does not have an obvious incentive to self-preference.[74] Indeed, Amazon’s profitability data show that, on average, the company’s operating margin is higher on third-party sales than on first-party retail sales.

Looking at how modeling details may yield different results with regard to the benefits and harms of the hybrid business model, Simon Anderson and Özlem Bedre-Defolie maintain that the platform’s choice to sell its own products benefits consumers by lowering prices when a monopoly platform hosts competitive fringe sellers, regardless of the platform’s position as a gatekeeper, whether sellers have an alternate channel to reach consumers, or whether alternate channels are perfect or imperfect substitutes for the platform channel.[75] On the other hand, the authors argue that platform product entry might harm consumers when a big seller with market power sells on its own channel as well as on the platform. In that case, the platform setting a seller fee before the big seller prices its differentiated products introduces double markups on the big seller’s platform-channel price and leaves some revenue to the big seller.

Studying whether Amazon engages in self-preferencing on its marketplace by favoring its own brands in search results, Chiara Farronato, Andrey Fradkin, and Alexander MacKay show empirically that Amazon brands are about 30% cheaper and have 68% more reviews than other similar products.[76] The authors acknowledge, however, that their findings do not imply that consumers are hurt by Amazon brands’ position in search results.

Another strand of literature specifically tackles the welfare effects of sherlocking. In particular, Erik Madsen and Nikhil Vellodi developed a theoretical framework to demonstrate that a ban on insider imitation can either stifle or stimulate innovation, depending on the nature of innovation.[77] Specifically, the ban could stimulate innovation for experimental product categories, while reducing innovation in incremental product markets, since the former feature products with a large chance of superstar demand and the latter generate mostly products with middling demand.

Federico Etro maintains that the tradeoffs at issue are too complex to be solved with simple interventions, such as bans on the dual mode, self-preferencing, or copycatting.[78] Indeed, it is difficult to conclude that Amazon’s entry is biased toward expropriating third-party sellers, or that bans on the dual mode, self-preferencing, or copycatting would benefit consumers, because such bans either degrade services and product variety or induce higher prices or commissions.

Similar results are provided by Jay Pil Choi, Kyungmin Kim, and Arijit Mukherjee, who developed a tractable model of a platform-run marketplace where the platform charges a referral fee to the sellers for access to the marketplace, and may also subsequently launch its own private-label product by copying a seller.[79] The authors found that a policy to either ban hybrid mode or only prohibit information use for the launch of private-label products may produce negative welfare implications.

Further, Radostina Shopova argues that, when introducing a private label, the marketplace operator has no incentive to distort competition and foreclose the outside seller, but does have an incentive to lower the fees charged to the outside seller and to vertically differentiate its own product in order to protect the seller’s channel.[80] Even when the intermediary is able to perfectly mimic the quality of the outside seller and monopolize its product space, it prefers to differentiate its offer and chooses a lower quality for the private-label product. Accordingly, as the purpose of private labels is to offer a lower-quality version of products aimed at consumers with a lower willingness to pay, a marketplace operator has no incentive to distort competition in favor of its own product and foreclose the seller of the original, higher-quality product.

In addition, according to Jean-Pierre Dubé, curbing development of private-label programs would harm consumers and Amazon’s practices amount to textbook retailing, as they follow an off-the-shelf approach to managing private-label products that is standard for many retail chains in the West.[81] As a result, singling out Amazon’s practices would set a double standard.

Interestingly, such findings about predictors and effects of Amazon’s entry in competition with third-party merchants on its own marketplace are confirmed by the only empirical study developed so far. In particular, analyzing the Home & Kitchen department of Germany’s version of Amazon Marketplace between 2016 and 2021, Gregory S. Crawford, Matteo Courthoud, Regina Seibel, and Simon Zuzek’s results suggest that Amazon’s entry strategy was more consistent with making Marketplace more attractive to consumers than expropriating third-party merchants.[82] Notably, the study showed that, comparing Amazon’s entry decisions with those of the largest third-party merchants, Amazon tends to enter low-growth and low-quality products, which is consistent with a strategy that seeks to make Marketplace more attractive by expanding variety, lessening third-party market power, and/or enhancing product availability. The authors therefore found that Amazon’s entry on Amazon Marketplace demonstrated no systematic adverse effects and caused a mild market expansion.

Massimo Motta and Sandro Shelegia explored interactions between copying and acquisitions, finding that the former (or the threat of copying) can modify the outcome of an acquisition negotiation.[83] According to their model, there could be both static and dynamic incentives for an incumbent to introduce a copycat version of a complementary product. The static rationale consists of lowering the price of the complementary product in order to capture more rents from it, while the dynamic incentive consists of harming a potential rival’s prospects of developing a substitute. The latter may, in turn, affect the direction the entrant takes toward innovation. Anticipating the incumbent’s copying strategy, the entrant may shift resources from improvements to compete with the incumbent’s primary product to developing complementary products.

Jingcun Cao, Avery Haviv, and Nan Li analyzed the opposite scenario—i.e., copycats that seek to mimic the design and user experience of incumbents’ successful products.[84] The authors find empirically that, on average, copycat apps do not have a significant effect on the demand for incumbent apps and that, as with traditional counterfeit products, they may generate a positive demand spillover toward authentic apps.

Massimo Motta also investigated the potential foreclosure effects of a copycat strategy adopted by platforms that are committed to nondiscriminatory terms of access for third parties (e.g., the Apple App Store, Google Play, and Amazon Marketplace).[85] Notably, according to Motta, when a third-party seller is particularly successful and the platform is unable to raise the fees and commissions that seller pays, the platform may prefer to copy its product or service to extract more profits from users, rather than rely solely on third-party sales. The author acknowledged, however, that even though this practice may create an incentive for self-preferencing, it does not necessarily have anticompetitive effects. Indeed, the welfare effects of the copying strategy are a priori ambiguous.[86] On the one hand, the platform’s copying of a third-party product benefits consumers by increasing variety and competition among products; on the other hand, copying might be wasteful for society, in that it entails a fixed cost and may discourage innovation if rivals anticipate that they will be systematically copied whenever they have a successful product.[87] Nonetheless, introducing a copycat version of a product offered by a firm in an adjacent market might be procompetitive.

B. Antitrust Assessment: Competition, Innovation, and Double Standards

The economic literature has demonstrated that the rationale and welfare effects of sherlocking by hybrid platforms are decidedly ambiguous. Against concerns about rivals’ foreclosure, some studies provide a different narrative, illustrating that such a strategy is more consistent with making the platform more attractive to consumers (by differentiating the quality and pricing of the offer) than with expropriating business users.[88] Furthermore, copies, imitations, and replicas undoubtedly benefit consumers with more choice and lower prices.

Therefore, the only way to consider sherlocking anticompetitive is by demonstrating that its long-term deterrent effects on innovation (i.e., reducing rivals’ incentives to invest in new products and services) outweigh consumers’ short-term advantages.[89] Moreover, deterrent effects must not be merely hypothetical, as a finding of abuse cannot be based on a mere possibility of harm.[90] In any case, such complex tradeoffs are at odds with a blanket ban.[91]
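This balancing exercise can be stated compactly. As a stylized condition (illustrative only, not drawn from the cited case law or literature), suppose consumers gain \(\Delta_s > 0\) in the short run from lower prices and wider choice, but lose an expected flow \(\Delta_l > 0\) per period in the long run from forgone innovation. Condemnation would be warranted only if

\[
\Delta_s \;<\; \sum_{t=1}^{\infty} \delta^{t}\, \Delta_l \;=\; \frac{\delta}{1-\delta}\,\Delta_l ,
\]

where \(\delta \in (0,1)\) is the discount factor. Because \(\Delta_l\) is speculative while \(\Delta_s\) is observable, the condition also illustrates why a finding of abuse cannot rest on a mere possibility of harm.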

Moreover, assessments of the potential impact of sherlocking on innovation cannot disregard the role of IPRs—which are, by definition, the primary means of promoting innovation. From this perspective, intellectual-property protection is best characterized as another form of tradeoff. Indeed, the economic rationale of IPRs (in particular, of patents and copyrights) involves, among other things, a tradeoff between access and incentives—i.e., between short-term competitive restrictions and long-term innovative benefits.[92]

According to the traditional incentive-based theory of intellectual property, free riding would represent a dangerous threat that justifies the exclusive rights granted by intellectual-property protection. As a consequence, so long as copycat expropriation does not infringe IPRs, it should be presumed legitimate and procompetitive. Indeed, such free riding is more of an intellectual-property issue than a competitive concern.

In addition, to strike a fair balance between restricting competition and providing incentives to innovation, the exclusive rights granted by IPRs are not unlimited in terms of duration, nor in terms of lawful (although not authorized) uses of the protected subject matter. Under the doctrine of fair use, for instance, reverse engineering represents a legitimate way to obtain information about a firm’s product, even if the intended result is to produce a directly competing product that may steer customers away from the initial product and the patented invention.

Outside of reverse engineering, copying is legitimately exercised once IPRs expire, when copycat competitors can reproduce previously protected elements. As a result of the competitive pressure exerted by new rivals, holders of expired IPRs may react by seeking solutions designed to block, or at least limit, the circulation of rival products. They could, for example, seek new IPRs to cover aspects or functionalities different from those previously protected. They could also bring (sometimes specious) legal action for infringement of the new IPR or for unfair competition by slavish imitation. For these reasons, there have been occasions when copycat competitors have received protection from antitrust authorities against sham litigation brought by IPR holders concerned about losing margins due to pricing pressure from copycats.[93]

Finally, within the longstanding debate on the intersection of intellectual-property protection and competition, EU antitrust authorities have traditionally been unsympathetic toward restrictions imposed by IPRs. The success of the essential-facility doctrine (EFD) is the most telling example of this attitude, as its application in the EU has been extended to IPRs. As a matter of fact, the EFD represents the main antitrust tool for overseeing intellectual property in the EU.[94]

After Microsoft, EU courts substantially dismantled one of the “exceptional circumstances” previously elaborated in Magill and specifically introduced for cases involving IPRs, with the aim of safeguarding a balance between restrictions on access and incentives to innovate. Whereas the CJEU established in Magill that a refusal to grant an IP license should be considered anticompetitive if it prevents the emergence of a new product for which there is potential consumer demand, in Microsoft, the General Court considered that requirement met even where access to an IPR is necessary merely for rivals to develop improved products with added value.

Given this background, recent competition-policy concerns about sherlocking are surprising. To briefly recap, the practice at issue increases competition in the short term, but may affect incentives to innovate in the long term. With regard to the latter, however, the practice neither involves products protected by IPRs nor constitutes a slavish imitation that may be caught under unfair-competition laws.

The case of Amazon, which has received considerable media coverage, illustrates the relevance of IP protection. Amazon has been accused of cloning batteries, power strips, wool runner shoes, everyday sling bags, camera tripods, and furniture.[95] One may wonder what kind of innovation should be safeguarded in these cases against potential copies. Admittedly, such examples appear consistent with the findings of the empirical study by Crawford et al. discussed above, indicating that Amazon tends to enter low-quality products in order to expand variety on the Marketplace and make it more attractive to consumers.

Nonetheless, if an IPR is involved, right holders are provided with proper means to protect their products against infringement. Indeed, one of the alleged targeted companies (Williams-Sonoma) did file a complaint for design and trademark infringement, claiming that Amazon had copied a chair (Orb Dining Chair) sold by its West Elm brand. According to Williams-Sonoma, the Upholstered Orb Office Chair—which Amazon began selling under its Rivet brand in 2018—was so similar that the ordinary observer would be confused by the imitation.[96] If, instead, the copycat strategy does not infringe any IPR, the potential impact on innovation might not be considered particularly worrisome—at least at first glance.

Further, neither the degree to which third-party business data are truly unavailable nor the degree to which they are relevant in facilitating copying is clear cut. For instance, in the case of Amazon, public product reviews supply a great deal of information[97] and, regardless of the fact that a third party is selling a product on the Marketplace, anyone can obtain an item for the purposes of reverse engineering.[98]

In addition, antitrust authorities are accustomed to intervening against opportunistic behavior by IPR holders. European competition authorities, in particular, have never seemed especially solicitous of the motives of inventors and creators when weighed against the need to encourage maximum market openness.

It should also be noted that cloning is a common strategy in traditional markets (e.g., food products)[99] and has been the subject of longstanding controversies between high-end fashion brands and fast-fashion brands (e.g., Zara, H&M).[100] Furthermore, brick-and-mortar retailers also introduce private labels and use other brands’ sales records in deciding what to produce.[101]

So, what makes sherlocking so different and dangerous when deployed in digital markets as to push competition authorities to contradict themselves?[102]

The double standard against sherlocking reflects the same concern, and pursues the same goal, as the various other attempts to forbid any form of self-preferencing in digital markets. Namely, antitrust investigations of sherlocking are fundamentally driven by a bias against hybrid and vertically integrated players. The investigations rely on the assumption that conflicts of interest have anticompetitive implications and that, therefore, platform neutrality should be promoted to ensure the neutrality of the competitive process.[103] Accordingly, hostility toward sherlocking may involve both of the illustrated scenarios—i.e., the use of nonpublic third-party business data either in adopting any business decision, or in copycat strategies in particular.

As a result, however, competition authorities end up challenging a specific business model, rather than the specific practice at issue, a practice that brings undisputed competitive benefits in terms of lower prices and wider consumer choice and that should therefore be balanced against potential exclusionary risks. As the CJEU has pointed out, the concept of competition on the merits:

…covers, in principle, a competitive situation in which consumers benefit from lower prices, better quality and a wider choice of new or improved goods and services. Thus, … conduct which has the effect of broadening consumer choice by putting new goods on the market or by increasing the quantity or quality of the goods already on offer must, inter alia, be considered to come within the scope of competition on the merits.[104]

Further, in light of the “as-efficient competitor” principle, competition on the merits may lead to “the departure from the market, or the marginalization of, competitors that are less efficient and so less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation.”[105]

It has been correctly noted that the “as-efficient competitor” principle is a reminder of what competition law is about and how it differs from regulation.[106] Competition law aims to protect a process, rather than engineering market structures to fulfill a particular vision of how an industry is to operate.[107] In other words, competition law does not target firms on the basis of size or status and does not infer harm from (market or bargaining) power or business model. Therefore, neither the dual role played by some large online platforms nor their preferential access to sensitive business data or their vertical integration, by themselves, create a competition problem. Competitive advantages deriving from size, status, power, or business model cannot be considered per se outside the scope of competition on the merits.

Some policymakers have sought to resolve these tensions in how competition law regards sherlocking by introducing or envisaging an outright ban. These initiatives and proposals have clearly been inspired by antitrust investigations, but they drew the wrong lessons from them. Instead of taking stock of the challenging tradeoffs between short-term benefits and long-term risks that an antitrust assessment of sherlocking requires, they blamed competition law for not providing effective tools to achieve the policy goal of platform neutrality.[108] The regulatory solution thus merely serves to bypass the traditional burden of proof of antitrust analysis and to achieve what competition-law enforcement cannot provide.

V. Conclusion

The bias against self-preferencing strikes again. Concerns about hybrid platforms’ potential conflicts of interest have led policymakers to seek prohibitions to curb different forms of self-preferencing, making the latter the symbol of the competition-policy zeitgeist in digital markets. Sherlocking shares this fate. Indeed, the DMA outlaws any use of business users’ nonpublic data, and similar proposals have been advanced in the United States, Australia, and Japan. Further, as with other forms of self-preferencing, regulatory initiatives against sherlocking have been inspired by previous antitrust proceedings.

Drawing on these antitrust investigations, the present research shows the extent to which an outright ban on sherlocking is unjustified. Notably, the practice at issue includes two different scenarios: the broad case in which a gatekeeper exploits its preferential access to business users’ data to better calibrate all of its business decisions, and the narrow case in which such data is used to adopt a copycat strategy. In either scenario, the welfare effects and competitive implications of sherlocking are unclear.

Indeed, the use of certain data by a hybrid platform to improve its business decisions generally should be classified as competition on the merits, and may yield an increase in both intra-platform (with respect to business users) and inter-platform (with respect to other platforms) competition. This would benefit consumers in terms of lower prices, better quality, and a wider choice of new or improved goods and services. In a similar vein, if sherlocking is used to deliver replicas of business users’ products or services, the anti-competitiveness of such a strategy may only emerge from a difficult tradeoff between short-term benefits (i.e., lower prices and wider choice) and negative long-term effects on innovation.

An implicit confirmation of the difficulties encountered in demonstrating the anti-competitiveness of sherlocking comes from the recent complaint issued by the FTC against Amazon.[109] Current FTC Chairwoman Lina Khan devoted a significant portion of her previous academic career to questioning Amazon’s practices (including the decision to introduce its own private labels inspired by third-party products)[110] and to supporting the adoption of structural-separation remedies to tackle platforms’ conflicts of interest that induce them to exploit their “systemic informational advantage (gleaned from competitors)” to thwart rivals and strengthen their own position by introducing replica products.[111] Despite these premises, and although the FTC’s complaint targets numerous practices belonging to what has been described as an interconnected strategy to block off every major avenue of competition, sherlocking is surprisingly off the radar.

Regulatory initiatives to ban sherlocking in order to ensure platform neutrality with respect to business users and a level playing field among rivals would sacrifice undisputed procompetitive benefits on the altar of policy goals that competition rules are not meant to pursue. Sherlocking therefore appears to be a perfect case study of the side effects of unwarranted interventions in digital markets.

[1] Giuseppe Colangelo, Antitrust Unchained: The EU’s Case Against Self-Preferencing, 72 GRUR International 538 (2023).

[2] Jacques Crémer, Yves-Alexandre de Montjoye, & Heike Schweitzer, Competition Policy for the Digital Era (2019), 7, https://op.europa.eu/en/publication-detail/-/publication/21dc175c-7b76-11e9-9f05-01aa75ed71a1/language-en (all links last accessed 3 Jan. 2024); UK Digital Competition Expert Panel, Unlocking Digital Competition (2019), 58, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[3] You’ve Been Sherlocked, The Economist (2012), https://www.economist.com/babbage/2012/07/13/youve-been-sherlocked.

[4] Regulation (EU) 2022/1925 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (2022), OJ L 265/1, Article 6(2).

[5] U.S. S. 2992, American Innovation and Choice Online Act (AICOA) (2022), Section 3(a)(6), available at https://www.klobuchar.senate.gov/public/_cache/files/b/9/b90b9806-cecf-4796-89fb-561e5322531c/B1F51354E81BEFF3EB96956A7A5E1D6A.sil22713.pdf. See also U.S. House of Representatives, Subcommittee on Antitrust, Commercial, and Administrative Law, Investigation of Competition in Digital Markets, Majority Staff Reports and Recommendations (2020), 164, 362-364, 378, available at https://democrats-judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf.

[6] Australian Competition and Consumer Commission, Digital Platform Services Inquiry Report on Regulatory Reform (2022), 125, https://www.accc.gov.au/about-us/publications/serial-publications/digital-platform-services-inquiry-2020-2025/digital-platform-services-inquiry-september-2022-interim-report-regulatory-reform.

[7] Japan Fair Trade Commission, Market Study Report on Mobile OS and Mobile App Distribution (2023), https://www.jftc.go.jp/en/pressreleases/yearly-2023/February/230209.html.

[8] European Commission, 10 Nov. 2020, Case AT.40462, Amazon Marketplace; see Press Release, Commission Sends Statement of Objections to Amazon for the Use of Non-Public Independent Seller Data and Opens Second Investigation into Its E-Commerce Business Practices, European Commission (2020), https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2077.

[9] Press Release, CMA Investigates Amazon Over Suspected Anti-Competitive Practices, UK Competition and Markets Authority (2022), https://www.gov.uk/government/news/cma-investigates-amazon-over-suspected-anti-competitive-practices.

[10] European Commission, 16 Jun. 2020, Case AT.40716, Apple – App Store Practices.

[11] Press Release, Commission Sends Statement of Objections to Meta over Abusive Practices Benefiting Facebook Marketplace, European Commission (2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7728; Press Release, CMA Investigates Facebook’s Use of Ad Data, UK Competition and Markets Authority (2021), https://www.gov.uk/government/news/cma-investigates-facebook-s-use-of-ad-data.

[12] DMA, supra note 4, Recital 10 and Article 1(6).

[13] GWB Digitalization Act, 18 Jan. 2021, Section 19a. On the risk of overlap between the DMA and competition-law enforcement, see Giuseppe Colangelo, The European Digital Markets Act and Antitrust Enforcement: A Liaison Dangereuse, 47 European Law Review 597.

[14] GWB, supra note 13, Section 19a (2)(4)(b).

[15] Press Release, Commission Sends Statement of Objections to Apple Clarifying Concerns over App Store Rules for Music Streaming Providers, European Commission (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_1217.

[16] European Commission, 20 Dec. 2022, Case AT.40462; Press Release, Commission Accepts Commitments by Amazon Barring It from Using Marketplace Seller Data, and Ensuring Equal Access to Buy Box and Prime, European Commission (2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7777; UK Competition and Markets Authority, 3 Nov. 2023, Case No. 51184, https://www.gov.uk/cma-cases/investigation-into-amazons-marketplace.

[17] UK Competition and Markets Authority, 3 Nov. 2023, Case AT.51013, https://www.gov.uk/cma-cases/investigation-into-facebooks-use-of-data.

[18] See, e.g., Gil Tono & Lewis Crofts, Amazon Data Commitments Match DMA Obligations, EU’s Vestager Says, mLex (2022), https://mlexmarketinsight.com/news/insight/amazon-data-commitments-match-dma-obligation-eu-s-vestager-says (reporting that Commissioner Vestager stated that Amazon’s data commitments definitively appear to match what would be asked within the DMA).

[19] DMA, supra note 4, Recital 46.

[20] Id., Article 6(2) (also stating that, for the purposes of the prohibition, non-publicly available data shall include any aggregated and non-aggregated data generated by business users that can be inferred from, or collected through, the commercial activities of business users or their customers, including click, search, view, and voice data, on the relevant core platform services or on services provided together with, or in support of, the relevant core platform services of the gatekeeper).

[21] AICOA, supra note 5.

[22] U.S. House of Representatives, supra note 5; see also Lina M. Khan, The Separation of Platforms and Commerce, 119 Columbia Law Review 973 (2019).

[23] U.S. Federal Trade Commission, et al. v. Amazon.com, Inc., Case No. 2:23-cv-01495 (W.D. Wash., 2023).

[24] Australian Competition and Consumer Commission, supra note 6, 125.

[25] Id., 124.

[26] Japan Fair Trade Commission, supra note 7, 144.

[27] European Commission, supra note 8. But see also Amazon, Supporting Sellers with Tools, Insights, and Data (2021), https://www.aboutamazon.eu/news/policy/supporting-sellers-with-tools-insights-and-data (claiming that the company is just using aggregate (rather than individual) data: “Just like our third-party sellers and other retailers across the world, Amazon also uses data to run our business. We use aggregated data about customers’ experience across the store to continuously improve it for everyone, such as by ensuring that the store has popular items in stock, customers are finding the products they want to purchase, or connecting customers to great new products through automated merchandising.”)

[28] European Commission, supra note 16.

[29] UK Competition and Markets Authority, supra notes 9 and 16.

[30] Bundeskartellamt, 5 Jul. 2022, Case B2-55/21, paras. 493, 504, and 518.

[31] Id., para. 536.

[32] European Commission, supra note 10.

[33] European Commission, supra note 11; UK Competition and Markets Authority, supra note 11.

[34] European Commission, supra note 16. In a similar vein, see also UK Competition and Markets Authority, supra note 16, paras. 4.2-4.7.

[35] European Commission, supra note 16, para. 111.

[36] Id., para. 123.

[37] Crémer, de Montjoye, & Schweitzer, supra note 2, 33-34.

[38] See, e.g., Marc Bourreau, Some Economics of Digital Ecosystems, OECD Hearing on Competition Economics of Digital Ecosystems (2020), https://www.oecd.org/daf/competition/competition-economics-of-digital-ecosystems.htm; Amelia Fletcher, Digital Competition Policy: Are Ecosystems Different?, OECD Hearing on Competition Economics of Digital Ecosystems (2020).

[39] See, e.g., Cristina Caffarra, Matthew Elliott, & Andrea Galeotti, ‘Ecosystem’ Theories of Harm in Digital Mergers: New Insights from Network Economics, VoxEU (2023), https://cepr.org/voxeu/columns/ecosystem-theories-harm-digital-mergers-new-insights-network-economics-part-1 (arguing that, in merger control, the implementation of an ecosystem theory of harm would require assessing how a conglomerate acquisition can change the network of capabilities (e.g., proprietary software, brand, customer base, data) in order to evaluate how easily competitors can obtain alternative assets to those being acquired); for a different view, see Geoffrey A. Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 George Mason Law Review 1281 (2021).

[40] See, e.g., Viktoria H.S.E. Robertson, Digital Merger Control: Adapting Theories of Harm, European Competition Journal (forthcoming); Caffarra, Elliott, & Galeotti, supra note 39; OECD, Theories of Harm for Digital Mergers (2023), available at www.oecd.org/daf/competition/theories-of-harm-for-digital-mergers-2023.pdf; Bundeskartellamt, Merger Control in the Digital Age – Challenges and Development Perspectives (2022), available at https://www.bundeskartellamt.de/SharedDocs/Publikation/EN/Diskussions_Hintergrundpapiere/2022/Working_Group_on_Competition_Law_2022.pdf?__blob=publicationFile&v=2; Elena Argentesi, Paolo Buccirossi, Emilio Calvano, Tomaso Duso, Alessia Marrazzo, & Salvatore Nava, Merger Policy in Digital Markets: An Ex Post Assessment, 17 Journal of Competition Law & Economics 95 (2021); Marc Bourreau & Alexandre de Streel, Digital Conglomerates and EU Competition Policy (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3350512.

[41] Bundeskartellamt, 11 Feb. 2022, Case B6-21/22, https://www.bundeskartellamt.de/SharedDocs/Entscheidung/EN/Fallberichte/Fusionskontrolle/2022/B6-21-22.html;jsessionid=C0837BD430A8C9C8E04D133B0441EB95.1_cid362?nn=4136442.

[42] UK Competition and Markets Authority, Microsoft / Activision Blizzard Merger Inquiry (2023), https://www.gov.uk/cma-cases/microsoft-slash-activision-blizzard-merger-inquiry.

[43] See European Commission, Commission Prohibits Proposed Acquisition of eTraveli by Booking (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4573 (finding that a flight product is a crucial growth avenue in Booking’s ecosystem, which revolves around its hotel online-travel-agency (OTA) business, as it would generate significant additional traffic to the platform, thus allowing Booking to benefit from existing customer inertia and making it more difficult for competitors to contest Booking’s position in the hotel OTA market).

[44] Thomas Eisenmann, Geoffrey Parker, & Marshall Van Alstyne, Platform Envelopment, 32 Strategic Management Journal 1270 (2011).

[45] See, e.g., Colangelo, supra note 1, and Pablo Ibáñez Colomo, Self-Preferencing: Yet Another Epithet in Need of Limiting Principles, 43 World Competition 417 (2020) (investigating whether and to what extent self-preferencing could be considered a new standalone offense in EU competition law); see also European Commission, Digital Markets Act – Impact Assessment Support Study (2020), 294, https://op.europa.eu/en/publication-detail/-/publication/0a9a636a-3e83-11eb-b27b-01aa75ed71a1/language-en (raising doubts about the novelty of this new theory of harm, which seems similar to the well-established leveraging theories of harm of tying and bundling, and margin squeeze).

[46] European Commission, supra note 45, 16.

[47] European Commission, 27 Jun. 2017, Case AT.39740, Google Search (Shopping).

[48] See General Court, 10 Nov. 2021, Case T-612/17, Google LLC and Alphabet Inc. v. European Commission, ECLI:EU:T:2021:763, para. 155 (stating that the general principle of equal treatment obligates vertically integrated platforms to refrain from favoring their own services as opposed to rival ones; nonetheless, the ruling framed self-preferencing as discriminatory abuse).

[49] In the meantime, however, see Opinion of the Advocate General Kokott, 11 Jan. 2024, Case C-48/22 P, Google v. European Commission, ECLI:EU:C:2024:14, paras. 90 and 95 (arguing that the self-preferencing of which Google is accused constitutes an independent form of abuse, albeit one that exhibits some proximity to cases involving margin squeezing).

[50] European Commission, Commission Sends Amazon Statement of Objections over Proposed Acquisition of iRobot (2023), https://ec.europa.eu/commission/presscorner/detail/en/IP_23_5990.

[51] The same concerns and approach have been shared by the CMA, although it reached a different conclusion, finding that the new merged entity would not have incentive to self-preference its own branded RVCs: see UK Competition and Markets Authority, Amazon / iRobot Merger Inquiry – Clearance Decision (2023), paras. 160, 188, and 231, https://www.gov.uk/cma-cases/amazon-slash-irobot-merger-inquiry.

[52] See European Commission, supra note 45, 304.

[53] Id., 313-314 (envisaging, among potential remedies, the imposition of a duty to make all data used by the platform for strategic decisions available to third parties); see also Désirée Klinger, Jonathan Bokemeyer, Benjamin Della Rocca, & Rafael Bezerra Nunes, Amazon’s Theory of Harm, Yale University Thurman Arnold Project (2020), 19, available at https://som.yale.edu/sites/default/files/2022-01/DTH-Amazon.pdf.

[54] Colangelo, supra note 1; see also Oscar Borgogno & Giuseppe Colangelo, Platform and Device Neutrality Regime: The New Competition Rulebook for App Stores?, 67 Antitrust Bulletin 451 (2022).

[55] See Court of Justice of the European Union (CJEU), 12 May 2022, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato, ECLI:EU:C:2022:379; 19 Apr. 2018, Case C-525/16, MEO v. Autoridade da Concorrência, ECLI:EU:C:2018:270; 6 Sep. 2017, Case C-413/14 P, Intel v. Commission, ECLI:EU:C:2017:632; 6 Oct. 2015, Case C-23/14, Post Danmark A/S v. Konkurrencerådet (Post Danmark II), ECLI:EU:C:2015:651; 27 Mar. 2012, Case C-209/10, Post Danmark A/S v. Konkurrencerådet (Post Danmark I), ECLI:EU:C:2012:172; for a recent overview of the EU case law, see also Pablo Ibáñez Colomo, The (Second) Modernisation of Article 102 TFEU: Reconciling Effective Enforcement, Legal Certainty and Meaningful Judicial Review, SSRN (2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4598161.

[56] CJEU, Intel, supra note 55, paras. 133-134.

[57] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 73.

[58] Opinion of Advocate General Rantos, 9 Dec. 2021, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato, ECLI:EU:C:2021:998, para. 45.

[59] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 77.

[60] Id., paras. 77, 80, and 83.

[61] CJEU, 26 Nov. 1998, Case C-7/97, Oscar Bronner GmbH & Co. KG v. Mediaprint Zeitungs- und Zeitschriftenverlag GmbH & Co. KG, Mediaprint Zeitungsvertriebsgesellschaft mbH & Co. KG and Mediaprint Anzeigengesellschaft mbH & Co. KG, ECLI:EU:C:1998:569.

[62] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 85.

[63] European Commission, supra note 11; UK Competition and Markets Authority, supra note 17, paras. 2.6, 4.3, and 4.7.

[64] See, e.g., European Commission, Case COMP D3/34493, DSD, para. 112 (2001) OJ L166/1; affirmed in GC, 24 May 2007, Case T-151/01, Der Grüne Punkt – Duales System Deutschland GmbH v. European Commission, ECLI:EU:T:2007:154 and CJEU, 16 Jul. 2009, Case C-385/07 P, ECLI:EU:C:2009:456; European Commission, Case IV/31.043, Tetra Pak II, paras. 105–08, (1992) OJ L72/1; European Commission, Case IV/29.971, GEMA III, (1982) OJ L94/12; CJEU, 27 Mar. 1974, Case 127/73, Belgische Radio en Televisie and Société Belge des Auteurs, Compositeurs et Éditeurs v. SV SABAM and NV Fonior, ECLI:EU:C:1974:25, para. 15; European Commission, Case IV/26.760, GEMA II, (1972) OJ L166/22; European Commission, Case IV/26.760, GEMA I, (1971) OJ L134/15.

[65] See, e.g., Richard A. Posner, Intellectual Property: The Law and Economics Approach, 19 The Journal of Economic Perspectives 57 (2005).

[66] See, e.g., Richard Gilbert & Carl Shapiro, Optimal Patent Length and Breadth, 21 The RAND Journal of Economics 106 (1990); Pankaj Tandon, Optimal Patents with Compulsory Licensing, 90 Journal of Political Economy 470 (1982); Frederic M. Scherer, Nordhaus’ Theory of Optimal Patent Life: A Geometric Reinterpretation, 62 American Economic Review 422 (1972); William D. Nordhaus, Invention, Growth, and Welfare: A Theoretical Treatment of Technological Change, Cambridge, MIT Press (1969).

[67] See, e.g., Hal R. Varian, Copying and Copyright, 19 The Journal of Economic Perspectives 121 (2005); William R. Johnson, The Economics of Copying, 93 Journal of Political Economy 158 (1985); Stephen Breyer, The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs, 84 Harvard Law Review 281 (1970).

[68] Sai Krishna Kamepalli, Raghuram Rajan, & Luigi Zingales, Kill Zone, NBER Working Paper No. 27146 (2022), http://www.nber.org/papers/w27146; Massimo Motta & Sandro Shelegia, The “Kill Zone”: Copying, Acquisition and Start-Ups’ Direction of Innovation, Barcelona GSE Working Paper Series Working Paper No. 1253 (2021), https://bse.eu/research/working-papers/kill-zone-copying-acquisition-and-start-ups-direction-innovation; U.S. House of Representatives, Subcommittee on Antitrust, Commercial, and Administrative Law, supra note 5, 164; Stigler Committee for the Study of Digital Platforms, Market Structure and Antitrust Subcommittee (2019), 54, https://research.chicagobooth.edu/stigler/events/single-events/antitrust-competition-conference/digital-platforms-committee; contra, see Geoffrey A. Manne, Samuel Bowman, & Dirk Auer, Technology Mergers and the Market for Corporate Control, 86 Missouri Law Review 1047 (2022).

[69] See also Howard A. Shelanski, Information, Innovation, and Competition Policy for the Internet, 161 University of Pennsylvania Law Review 1663 (2013), 1999 (describing as “forced free riding” the situation occurring when a platform appropriates innovation by other firms that depend on the platform for access to consumers).

[70] See Feng Zhu & Qihong Liu, Competing with Complementors: An Empirical Look at Amazon.com, 39 Strategic Management Journal 2618 (2018).

[71] Andrei Hagiu, Tat-How Teh, & Julian Wright, Should Platforms Be Allowed to Sell on Their Own Marketplaces?, 53 RAND Journal of Economics 297 (2022) (the model assumes that there is a platform that can function as a seller and/or a marketplace, a fringe of small third-party sellers that all sell an identical product, and an innovative seller that has a better product in the same category as the fringe sellers and can invest more in making its product even better; further, the model allows the different channels (on-platform or direct) and the different sellers to offer different values to consumers; therefore, third-party sellers (including the innovative seller) can choose whether to participate on the platform’s marketplace, and whenever they do, can price discriminate between consumers that come to it through the marketplace and consumers that come to it through the direct channel).

[72] See Germán Gutiérrez, The Welfare Consequences of Regulating Amazon (2022), available at http://germangutierrezg.com/Gutierrez2021_AMZ_welfare.pdf (building an equilibrium model where consumers choose products on the Amazon platform, while third-party sellers and Amazon endogenously set prices of products and platform fees).

[73] See Federico Etro, Product Selection in Online Marketplaces, 30 Journal of Economics & Management Strategy 614 (2021) (relying on a model where a marketplace such as Amazon provides a variety of products and can decide, for each product, whether to monetize sales by third-party sellers through a commission or become a seller on its platform, either by commercializing a private-label version or by purchasing from a vendor and reselling as a first-party retailer; as acknowledged by the author, a limitation of the model is that it assumes the marketplace can set the profit-maximizing commission on each product; if this is not the case, third-party sales would be imperfectly monetized, which would increase the relative profitability of entry).

[74] Patrick Andreoli-Versbach & Joshua Gans, Interplay Between Amazon Store and Logistics, SSRN (2023) https://ssrn.com/abstract=4568024.

[75] Simon Anderson & Özlem Bedre-Defolie, Online Trade Platforms: Hosting, Selling, or Both?, 84 International Journal of Industrial Organization 102861 (2022).

[76] Chiara Farronato, Andrey Fradkin, & Alexander MacKay, Self-Preferencing at Amazon: Evidence From Search Rankings, NBER Working Paper No. 30894 (2023), http://www.nber.org/papers/w30894.

[77] See Erik Madsen & Nikhil Vellodi, Insider Imitation, SSRN (2023) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3832712 (introducing a two-stage model where the platform publicly commits to an imitation policy and the entrepreneur observes this policy and chooses whether to innovate: if she chooses not to, the game ends and both players earn profits normalized to zero; otherwise, the entrepreneur pays a fixed innovation cost to develop the product, which she then sells on a marketplace owned by the platform).

[78] Federico Etro, The Economics of Amazon, SSRN (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4307213.

[79] Jay Pil Choi, Kyungmin Kim, & Arijit Mukherjee, “Sherlocking” and Information Design by Hybrid Platforms, SSRN (2023), https://ssrn.com/abstract=4332558 (the model assumes that the platform chooses its referral fee at the beginning of the game and that the cost of entry is the same for both the seller and the platform).

[80] Radostina Shopova, Private Labels in Marketplaces, 89 International Journal of Industrial Organization 102949 (2023), (the model assumes that the market structure is given exogenously and that the quality of the seller’s product is also exogenous; therefore, the paper does not investigate how entry by a platform affects the innovation incentives of third-party sellers).

[81] Jean-Pierre Dubé, Amazon Private Brands: Self-Preferencing vs Traditional Retailing, SSRN (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4205988.

[82] Gregory S. Crawford, Matteo Courthoud, Regina Seibel, & Simon Zuzek, Amazon Entry on Amazon Marketplace, CEPR Discussion Paper No. 17531 (2022), https://cepr.org/publications/dp17531.

[83] Motta & Shelegia, supra note 68.

[84] Jingcun Cao, Avery Haviv, & Nan Li, The Spillover Effects of Copycat Apps and App Platform Governance, SSRN (2023), https://ssrn.com/abstract=4250292.

[85] Massimo Motta, Self-Preferencing and Foreclosure in Digital Markets: Theories of Harm for Abuse Cases, 90 International Journal of Industrial Organization 102974 (2023).

[86] Id.

[87] Id.

[88] See, e.g., Crawford, Courthoud, Seibel, & Zuzek, supra note 82; Etro, supra note 78; Shopova, supra note 80.

[89] Motta, supra note 85.

[90] Servizio Elettrico Nazionale, supra note 55, paras. 53-54; Post Danmark II, supra note 55, para. 65.

[91] Etro, supra note 78; see also Herbert Hovenkamp, The Looming Crisis in Antitrust Economics, 101 Boston University Law Review 489 (2021), 543 (arguing that: “Amazon’s practice of selling both its own products and those of rivals in close juxtaposition almost certainly benefits consumers by permitting close price comparisons. When Amazon introduces a product such as AmazonBasics AAA batteries in competition with Duracell, prices will go down. There is no evidence to suggest that the practice is so prone to abuse or so likely to harm consumers in other ways that it should be categorically condemned. Rather, it is an act of partial vertical integration similar to other practices that the antitrust laws have confronted and allowed in the past.”)

[92] On the more complex economic rationale of intellectual property, see, e.g., William M. Landes & Richard A. Posner, The Economic Structure of Intellectual Property Law, Cambridge, Harvard University Press (2003).

[93] See, e.g., Italian Competition Authority, 18 Jul. 2023 No. 30737, Case A538 – Sistemi di sigillatura multidiametro per cavi e tubi, (2023) Bulletin No. 31.

[94] See CJEU, 6 Apr. 1995, Joined Cases C-241/91 P and C-242/91 P, RTE and ITP v. Commission, ECLI:EU:C:1995:98; 29 Apr. 2004, Case C-418/01, IMS Health GmbH & Co. OHG v. NDC Health GmbH & Co. KG, ECLI:EU:C:2004:257; General Court, 17 Sep. 2007, Case T-201/04, Microsoft v. Commission, ECLI:EU:T:2007:289; CJEU, 16 Jul. 2015, Case C-170/13, Huawei Technologies Co. Ltd v. ZTE Corp., ECLI:EU:C:2015:477.

[95] See, e.g., Dana Mattioli, How Amazon Wins: By Steamrolling Rivals and Partners, Wall Street Journal (2022), https://www.wsj.com/articles/amazon-competition-shopify-wayfair-allbirds-antitrust-11608235127; Aditya Kalra & Steve Stecklow, Amazon Copied Products and Rigged Search Results to Promote Its Own Brands, Documents Show, Reuters (2021), https://www.reuters.com/investigates/special-report/amazon-india-rigging.

[96] Williams-Sonoma, Inc. v. Amazon.Com, Inc., Case No. 18-cv-07548 (N.D. Cal., 2018). The suit was eventually dismissed, as the parties entered into a settlement agreement: Williams-Sonoma, Inc. v. Amazon.Com, Inc., Case No. 18-cv-07548-AGT (N.D. Cal., 2020).

[97] Amazon Best Sellers, https://www.amazon.com/Best-Sellers/zgbs.

[98] Hovenkamp, supra note 91, 2015-2016.

[99] Nicolas Petit, Big Tech and the Digital Economy, Oxford, Oxford University Press (2020), 224-225.

[100] For a recent analysis, see Zijun (June) Shi, Xiao Liu, Dokyun Lee, & Kannan Srinivasan, How Do Fast-Fashion Copycats Affect the Popularity of Premium Brands? Evidence from Social Media, 60 Journal of Marketing Research 1027 (2023).

[101] Lina M. Khan, Amazon’s Antitrust Paradox, 126 Yale Law Journal 710 (2017), 782.

[102] See Massimo Motta & Martin Peitz, Intervention Triggers and Underlying Theories of Harm, in Market Investigations: A New Competition Tool for Europe? (M. Motta, M. Peitz, & H. Schweitzer, eds.), Cambridge, Cambridge University Press (2022), 16, 59 (arguing that, while it is unclear to what extent products or ideas are worth protecting and/or can be protected from sherlocking and whether such cloning is really harmful to consumers, this is clearly an area where an antitrust investigation for abuse of dominant position would not help).

[103] Khan, supra note 101, 780 and 783 (arguing that Amazon’s conflicts of interest tarnish the neutrality of the competitive process and that the competitive implications are clear, as Amazon is exploiting the fact that some of its customers are also its rivals).

[104] Servizio Elettrico Nazionale, supra note 55, para. 85.

[105] Post Danmark I, supra note 55, para. 22.

[106] Ibáñez Colomo, supra note 55, 21-22.

[107] Id.

[108] See, e.g., DMA, supra note 4, Recital 5 (complaining that the scope of antitrust provisions is “limited to certain instances of market power, for example dominance on specific markets and of anti-competitive behaviour, and enforcement occurs ex post and requires an extensive investigation of often very complex facts on a case by case basis.”).

[109] U.S. Federal Trade Commission, et al. v. Amazon.com, Inc., supra note 23.

[110] Khan, supra note 101.

[111] Khan, supra note 22, 1003, referring to Amazon, Google, and Meta.

Continue reading
Antitrust & Consumer Protection

Scale and Antitrust: Where Is the Harm?

TL;DR tl;dr Background: In the U.S. Justice Department’s (DOJ) recent suit against Google and the Federal Trade Commission’s (FTC) latest complaint against Amazon, both antitrust agencies . . .

tl;dr

Background: In the U.S. Justice Department’s (DOJ) recent suit against Google and the Federal Trade Commission’s (FTC) latest complaint against Amazon, both antitrust agencies allege these large technology firms behave anti-competitively by preventing their rivals from reaching the “scale” needed to compete effectively.

But… achieving scale or a large customer base does not, in itself, violate antitrust law. Private companies also owe no duty to allow their competitors to reach scale. For example, Google is not required to allow Bing to gain more users so that Bing’s quality can improve. Google and Amazon’s competition for users at the expense of competitors is central to the competitive process. To make an effective antitrust case, the agencies must delineate how Amazon and Google allegedly abuse their size in ways that harm competition and consumers.

KEY TAKEAWAYS

‘SCALE’ LACKS PRECISION IN ANTITRUST

Antitrust regulators often cite “scale” in recent complaints against large tech companies. Instead of throwing that particular term around loosely, the enforcement agencies should detail precisely how firms allegedly abuse scale to harm rivals. 

Does scale unfairly raise barriers to entry? Does it impose costs on competitors? In both of the cases cited above, the alleged harm is the direct costs imposed on competitors, not the firm’s scale. After all, scale can be just another way of describing the firm that produces the highest-quality product at the lowest price. Without greater clarity, enforcement agencies would be unable to substantiate antitrust claims centered on “scale.”

To prevail in court, the agencies must articulate precise mechanisms of competitive injury from scale. Broad assertions about nebulous “scale advantages” are unlikely to demonstrate concrete anticompetitive effects. 

SCALE ALONE IS NOT AN ANTITRUST HARM

It has long been recognized that simply “achieving scale” and becoming a large firm with significant market share or production capacity does not constitute an antitrust violation. No law prohibits a company from growing large through legal competitive means. The agencies know this. The FTC argues that its complaint against Amazon is “not for being big.”

While scale can potentially be abused, it also confers significant consumer advantages. Basic economic principles demonstrate the benefits of size or scale, which may allow larger firms to reduce average costs and become more efficient. These cost savings can then be passed on to consumers through lower prices. Larger firms may also be able to make more substantial investments in innovation and product development. And network effects in technology platforms show how scale can improve service quality by attracting more users. 
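To make the cost-savings point concrete, consider a minimal numerical sketch (the figures are hypothetical, chosen only for illustration): a firm with fixed cost F and constant marginal cost c has average cost

AC(q) = F/q + c

which falls as output q grows. With F = $100 million and c = $1 per unit, average cost is $2.00 at an output of 100 million units but only $1.10 at 1 billion units. A scaled firm can therefore charge lower prices than a smaller rival while still covering its costs.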

Scale only becomes an issue if it is leveraged to restrain trade unfairly or in ways that harm consumers. The restraint is the harm, not the scale.

PREVENTING SCALE IS NOT AN ANTITRUST HARM 

Preventing a competitor from achieving greater size and scale is not inherently an antitrust violation either. Companies routinely take business from one another through price competition, product improvements, or other means that may limit rivals’ growth. This is a normal part of market competition. 

For example, if Amazon achieves sufficient scale that allows it to offer better prices or selection than smaller e-commerce websites, that may necessarily limit those competitors’ scale. But this does not constitute an antitrust harm; it is, instead, simply vigorous competition. An antitrust violation requires the firm to take specific actions to restrain trade or artificially raise rivals’ costs. Similar arguments hold for the DOJ’s case against Google over the company paying to be the default search engine on various mobile devices. 

Unless the agencies can demonstrate precisely how a company has abused its position to undermine rivals’ scale unfairly—rather than winning business through competition on the merits—their complaints will struggle to establish antitrust liability.

COMPETITION INCREASES CONCENTRATION, WHICH MAY LOOK LIKE SCALE

Regulators often assume that large scale enables anticompetitive behavior that harms smaller rivals. Economic analysis, however, demonstrates that scale can benefit consumers and simultaneously increase concentration through competition.

Firms that achieve significant scale can leverage resulting efficiencies to reduce costs and prices. Scale enables investments in R&D, specialized assets, advertising, and other drivers of innovation and productive efficiency. By passing cost savings on to consumers, scaled firms often gain share at the expense of higher-cost producers.

As search and switching costs fall, consumers flock to the lowest-cost and highest-quality offerings. Competition redirects purchases toward scaled companies with superior productivity and lower prices stemming from economies of scale. This reallocates market share to efficient large firms, raising concentration.

Greater competition and the competitive advantages of scale are thus entirely consistent with increased concentration. Size alone does not imply anticompetitive behavior. Regulators should evaluate specific evidence of abuse, rather than assume that scale harms competition simply because it leads to concentration.
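A simple arithmetic sketch (the shares are hypothetical, chosen only for illustration) shows how this dynamic registers on the Herfindahl-Hirschman Index (HHI), the standard concentration measure computed by summing the squares of firms’ percentage market shares:

HHI = s_1^2 + s_2^2 + ... + s_n^2

Two evenly matched firms (50/50) yield an HHI of 50^2 + 50^2 = 5,000. If the more efficient firm’s lower prices win it a 70 percent share, the HHI rises to 70^2 + 30^2 = 5,800. Measured concentration has increased even though nothing occurred except competition on price and quality.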

For more on this issue, see Brian Albrecht’s posts “Is Amazon’s Scale a Harm?” and “Competition Increases Concentration,” both at Truth on the Market.

Continue reading
Antitrust & Consumer Protection

A Brief History of the US Drug Approval Process, and the Birth of Accelerated Approval

TOTM This is the second post about the U.S. drug-approval process; the first post is here. It will explore how the Food and Drug Administration (FDA) arose, . . .

This is the second post about the U.S. drug-approval process; the first post is here. It will explore how the Food and Drug Administration (FDA) arose, how disasters drove its expansion and regulatory oversight, and how the epidemic of the human immunodeficiency virus (HIV) changed the approval processes.

Read the full piece here.

Continue reading
Innovation & the New Economy

Gatekeeping, the DMA, and the Future of Competition Regulation

TOTM The European Commission late last month published the full list of its “gatekeeper” designations under the Digital Markets Act (DMA). Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft—the six . . .

The European Commission late last month published the full list of its “gatekeeper” designations under the Digital Markets Act (DMA). Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft—the six designated gatekeepers—now have six months to comply with the DMA’s list of obligations and restrictions with respect to their core platform services (CPS), or they stand to face hefty fines and onerous remedies (see here and here for our initial reactions).

Read the full piece here.

Continue reading
Antitrust & Consumer Protection

Is Amazon’s Scale a Harm?

TOTM Under the leadership of its professional anti-Amazoner Chair Lina Khan, the Federal Trade Commission (FTC) has finally filed its antitrust complaint against Amazon. No, not . . .

Under the leadership of its professional anti-Amazoner Chair Lina Khan, the Federal Trade Commission (FTC) has finally filed its antitrust complaint against Amazon. No, not the complaint about how it’s unfair to take six clicks to cancel your Prime membership. This is the big one. It mostly revolves around sellers needing to use Amazon’s fulfillment services to be part of Amazon Prime, and around lowering search rankings if products are priced lower on other sites.

Instead of covering the arguments in the complaint, I want to use the complaint as an example of how I use the basics of supply and demand to sort through one of the arguments made by the FTC. Nothing about the use of price theory implies certain policy conclusions about the case. I’m just trying to be transparent, as I’ve done in the past, about how I use economics to reason about these important questions. Besides self-indulgence, the hope is that the examples help readers do the same.

Read the full piece here.

Continue reading
Antitrust & Consumer Protection

What Is a Barrier to Entry?

TOTM Why do monopolies exist? Many textbooks point to barriers to entry as a cause of monopolies. Tyler Cowen and Alex Tabarrok’s textbook says: “In addition to patents, . . .

Why do monopolies exist? Many textbooks point to barriers to entry as a cause of monopolies.

Tyler Cowen and Alex Tabarrok’s textbook says: “In addition to patents, government regulation and economies of scale, monopolies may be created whenever there is a significant barrier to entry, something that raises the cost to new firms of entering the industry.” Greg Mankiw’s textbook goes as far as to say: “The fundamental cause of monopoly is barriers to entry.”

Read the full piece here.

Continue reading
Antitrust & Consumer Protection

ICLE Response to the AI Accountability Policy Request for Comment

Regulatory Comments I. Introduction: How Do You Solve a Problem Like ‘AI’? On behalf of the International Center for Law & Economics (ICLE), we thank the National . . .

I. Introduction: How Do You Solve a Problem Like ‘AI’?

On behalf of the International Center for Law & Economics (ICLE), we thank the National Telecommunications and Information Administration (NTIA) for the opportunity to respond to this AI Accountability Policy Request for Comment (RFC).

A significant challenge that emerges in discussions concerning accountability and regulation for artificial intelligence is the broad and often ambiguous definition of “AI” itself. This is demonstrated in the RFC’s framing:

This Request for Comment uses the terms AI, algorithmic, and automated decision systems without specifying any particular technical tool or process. It incorporates NIST’s definition of an “AI system,” as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” This Request’s scope and use of the term “AI” also encompasses the broader set of technologies covered by the Blueprint: “automated systems” with “the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”[1]

As stated, the RFC’s scope could be read to cover virtually all software.[2] But it is essential to acknowledge that, for the purposes of considering potential regulation, we lack a definition of AI that is both sufficiently broad as to cover all or even most areas of concern and sufficiently focused as to be a useful lens for analysis. That is to say, what we think of as AI encompasses a significant diversity of discrete technologies that will be put to a huge number of potential uses.

One useful recent comparison is with the approach the Obama administration took in its deliberations over nanotechnology regulation in 2011.[3] Following years of consultation and debate, the administration opted for a parsimonious, context-specific approach precisely because “nanotechnology” is not really a single technology. In that proceeding, the administration ultimately recognized that it was not the general category of “nanotechnology” that was relevant, nor the fact that nanotechnologies are those that operate at very small scales, but rather the means by and degree to which certain tools grouped under the broad heading of “nanotechnology” could “alter the risks and benefits of a specific application.”[4] This calls to mind Judge Frank Easterbrook’s famous admonition that a “law of cyberspace” would be no more useful than a dedicated “law of the horse.”[5] Indeed, we believe Easterbrook’s observation applies equally to the creation of a circumscribed “law of AI.”

While there is nothing inherently wrong with creating a broad regulatory framework to address a collection of loosely related subjects, there is a danger that the very breadth of such a framework might over time serve to foreclose more fruitful and well-fitted forms of regulation.

A second concern in the matter immediately at hand is, as mentioned above, the potential for AI regulation to be formulated so broadly as to encompass essentially all software. Whether by design or accident, this latter case runs a number of risks. First, since the scope of the regulation will potentially cover a much broader subject, the narrow discussion of “AI” will miss many important aspects of broader software regulation and will, as a consequence, create an ill-fitted legal regime. Second, by sweeping a far wider range of tools into the regulation than the drafters publicly acknowledge, such an approach undermines the democratic legitimacy of the process.

A.      The Danger of Regulatory Overaggregation

The current hype surrounding AI has been driven by popular excitement, as well as incentives for media to capitalize on that excitement. While this is understandable, it arguably has led to oversimplification in public discussions about the underlying technologies. In reality, AI is an umbrella term that encompasses a diverse range of technologies, each with its own unique characteristics and applications.

For instance, relatively lower-level technologies like large language models (LLMs)[6] differ significantly from diffusion techniques.[7] At the level of applications, recommender systems can employ a wide variety of different machine-learning (or even more basic statistical) techniques.[8] All of these techniques collectively called “AI” also differ from the wide variety of algorithms employed by search engines, social media, consumer software, video games, streaming services, and so forth, although each also contains software “smarts,” so to speak, that could theoretically be grouped under the large umbrella of “AI.”
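To illustrate just how low the technical bar can be, consider the following minimal sketch (our own hypothetical example, in Python): a working “recommender system” built from nothing more than co-occurrence counting. Under a sufficiently broad definition, even this trivial statistical routine “generates recommendations,” and would therefore arguably fall within the scope of an overinclusive AI regulation.

```python
# A deliberately trivial "recommender system": pure co-occurrence counting,
# with no machine learning at all. The items and baskets are hypothetical.
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (one set of items per customer).
baskets = [
    {"camera", "tripod"},
    {"camera", "tripod", "bag"},
    {"camera", "bag"},
    {"tripod", "bag"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(item, k=2):
    """Return the k items most often co-purchased with `item`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(k)]

print(recommend("camera"))  # ['tripod', 'bag']
```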

And none of the foregoing bear much resemblance at all to what the popular imagination conjures when we speak of AI—that is, artificial general intelligence (AGI), which some experts argue may not even be achievable.[9]

Attempting to create a single AI regulatory scheme commits what we refer to as “regulatory overaggregation”—sweeping together a disparate set of more-or-less related potential regulatory subjects under a single category in a manner that overfocuses on the abstract term and obscures differences among the subjects. The domains of “privacy rights” and “privacy regulation” are illustrative of the dangers inherent in this approach. There are, indeed, many potential harms (both online and offline) that implicate the concept of “privacy,” but the differences among these recommend examining closely the various contexts that attend each.

Individuals often invoke their expectation of “privacy,” for example, in contexts where they want to avoid the public revelation of personal or financial information. This sometimes manifests as the assertion of a right to control data as a form of quasi-property, or as a form of a right to anti-publicity (that is, a right not to be embarrassed publicly). Indeed, writing in 1890 with his law partner Samuel D. Warren, future Supreme Court Justice Louis Brandeis posited a “right to privacy” as akin to a property right.[10] Warren & Brandeis argued that privacy is not merely a matter of seclusion, but extends to the individual’s control over their personal information.[11] This “right to be let alone” delineates a boundary against unwarranted intrusion, which can be seen as a form of intangible property right.[12]

This framing can be useful as an abstract description of a broad class of interests and concerns, but it fails to offer sufficient specificity to describe actionable areas of law. Brandeis & Warren were concerned primarily with publicity;[13] that is, with a property right to control one’s public identity as a public figure. This, in turn, implicates a wide range of concerns, from an individual’s interest in commercialization of their public image to their options for mitigating defamation, as well as technologies that range from photography to website logging to GPS positioning.

But there are clearly other significant public concerns that fall broadly under the heading of “privacy” that cannot be adequately captured by the notion of controlling a property right “to be let alone.” Consider, for example, the emerging issue of “revenge porn.” It is certainly a privacy harm in the Brandeisian sense that it implicates the property right not to have one’s private images distributed without consent. But that framing fails to capture the full extent of potential harms, such as emotional distress and reputational damage.[14] Similarly, cases in which an individual’s cellphone location data are sold to bounty hunters are not primarily about whether a property right has been violated, as they raise broader issues concerning potential abuses of power, stalking, and even physical safety.[15]

These examples highlight some of the ways that, in failing to take account of the distinct facts and contexts that can attend privacy harms, an overaggregated “law of privacy” may tend to produce regulations insufficiently tailored to address those diverse harms.

By contrast, the domain of intellectual property (IP) may serve as an instructive counterpoint to the overaggregated nature of privacy regulation. IP encompasses a vast array of distinct legal constructs, including copyright, patents, trade secrets, trademarks, and moral rights, among others. But in the United States—and indeed, in most jurisdictions around the world—there is no overarching “law of intellectual property” that gathers all of these distinct concerns under a singular regulatory umbrella. Instead, legislation is specific to each area, resulting in copyright-specific acts, patent-specific acts, and so forth. This approach acknowledges that, within IP law, each IP construct invokes unique rights, harms, and remedies that warrant a tailored legislative focus.

The similarity of some of these areas does lend itself to conceptual borrowing, which has tended to enrich the legislative landscape. For example, U.S. copyright law has imported doctrines from patent law.[16] Despite such cross-pollination, copyright law and patent law remain distinct. In this way, intellectual property demonstrates the advantages of focusing on specific harms and remedies. This could serve as a valuable model for AI, where the harms and remedies are equally diverse and context dependent.

If AI regulations are too broad, they may inadvertently encompass any algorithm used in commercially available software, effectively stifling innovation and hindering technological advancements. This is no less true of good-faith efforts to craft laws in any number of domains that nonetheless suffer from a host of unintended consequences.[17]

At the same time, for a regulatory regime covering such a broad array of varying technologies to be intelligible, it is likely inevitable that tradeoffs made to achieve administrative efficiency will cause at least some real harms to be missed. Indeed, NTIA acknowledges this in the RFC:

Commentators have raised concerns about the validity of certain accountability measures. Some audits and assessments, for example, may be scoped too narrowly, creating a “false sense” of assurance. Given this risk, it is imperative that those performing AI accountability tasks are sufficiently qualified to provide credible evidence that systems are trustworthy.[18]

To avoid these unintended consequences, it is crucial to develop a more precise understanding of AI and its various subdomains, and to focus any regulatory efforts toward addressing specific harms that would not otherwise be captured by existing laws. The RFC declares that its aim is “to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”[19] As we discuss below, rather than promulgate a set of recommendations about the use of AI, NTIA should focus on cataloguing AI technologies and creating useful taxonomies that regulators and courts can use when they identify tangible harms.

II. AI Accountability and Cost-Benefit Analysis

The RFC states that:

The most useful audits and assessments of these systems, therefore, should extend beyond the technical to broader questions about governance and purpose. These might include whether the people affected by AI systems are meaningfully consulted in their design and whether the choice to use the technology in the first place was well-considered.[20]

It is unlikely that consulting all of the people potentially affected by a set of technological tools could fruitfully contribute to the design of any regulatory system other than one that simply bans those tools.[21] Any intelligible accountability framework must be dedicated to evaluating the technology’s real-world impacts, rather than positing thought experiments about speculative harms. Where tangible harms can be identified, such evaluations should encompass existing laws that focus on those harms and how various AI technologies might alter how existing law would apply. Only in cases where the impact of particular AI technologies represents a new kind of harm, or raises concerns that fall outside existing legal regimes, should new regulatory controls be contemplated.

AI technologies will have diverse applications and consequences, with the potential for both beneficial and harmful outcomes. Rather than focus on how to constrain either AI developers or the technology itself, the focus should be on how best to mitigate or eliminate any potential negative consequences to individuals or society.

NTIA asks:

AI accountability measures have been proposed in connection with many different goals, including those listed below. To what extent are there tradeoffs among these goals?[22]

This question acknowledges that, fundamentally, AI accountability comes down to cost-benefit analysis. In conducting such analysis, we urge NTIA and any other agencies to account not only for potential harms, but also for the massive benefits these technologies might provide.

A.      The Law Should Identify and Address Tangible Harms, Incorporating Incremental Changes

To illustrate the challenges inherent to tailoring regulation of a new technology like AI to address the ways that it might generally create harm, it could be useful to analogize to a different existing technology: photography. If camera technology were brand new, we might imagine a vast array of harms that could arise from its use. But it should be obvious that creating an overarching accountability framework for all camera technology is absurd. Instead, laws of general applicability should address harmful uses of cameras, such as the invasion of privacy rights posed by surreptitious filming. Even where a camera is used in the commission of a crime—e.g., surveilling a location in preparation to commit a burglary—it is not typically the technology itself that is the subject of legal concern; rather, it is the acts of surveillance and burglary.

Even where we can identify a tangible harm that a new technology facilitates, the analysis is not complete. Instead, we need to balance the likelihood of harmful uses of that technology with the likelihood of nonharmful (or beneficial) uses of that technology. Copyright law provides an apt example.

Sony,[23] often referred to as the “Betamax case,” was a landmark U.S. Supreme Court case in 1984 that centered on Sony’s Betamax VCR—the first consumer device that could record television shows for later viewing, a concept now referred to as time-shifting.[24] Plaintiffs alleged that, by manufacturing and selling the Betamax VCRs, Sony was secondarily liable for copyright infringement carried out by its customers when they recorded television shows.[25] In a 5-4 decision, the Supreme Court ruled in favor of Sony, holding that the use of the Betamax VCR to record television shows for later personal viewing constituted “fair use” under U.S. copyright law.[26]

Critical for our purposes here was that the Court found that Sony could not be held liable for contributory infringement because the Betamax VCR was capable of “substantial noninfringing uses.”[27] This is to say that, faced with a new technology (recording relatively high-quality copies of television shows and movies at home), the Court recognized that, while the Betamax might facilitate some infringement, it would be inappropriate to apply a presumption against its use.

Sony and related holdings did not declare that using VCRs to infringe copyright was acceptable. Indeed, copyright enforcement for illegal reproduction has continued apace, even when using new technologies capable of noninfringing uses.[28] At the same time, the government did not create a new regulatory and licensing regime to govern the technology, despite the fact that it was a known vector for some illicit activity.

Note that the Sony case is also important for its fair-use analysis, and is widely cited for the proposition that so-called “time-shifting” is permissible. That holding is not central to our point here, particularly as there is no analogue to fair use proposed in the AI context. But it does illustrate how the law adapts by developing doctrines that excuse conduct that would otherwise be a violation. In the case of copyright, unauthorized reproduction is infringement, period.[29] Fair use is raised as an affirmative defense[30] to excuse some unauthorized reproduction because courts have long recognized that, when viewed case by case, the application of legal rules needs to be tailored to make room for unexpected fact patterns in which acts that would otherwise be considered violations yield some larger social benefit.

We are not suggesting the development of a fair-use doctrine for AI, but are instead insisting that AI accountability and regulation must be consistent with the case-by-case approach that has characterized the common law for centuries. Toward that end, it would be best for law relevant to AI to emerge through that same bottom-up, case-by-case process. To the extent that any new legislation is passed, it should be incremental and principles-based, thereby permitting the emergence of law that best fits particular circumstances and does not conflict with other principles of common law.

By contrast, there are instances where the law has recognized that certain technologies are more likely to be used for criminal purposes and should be strictly regulated. For example, many jurisdictions have made possession of certain kinds of weapons—e.g., nunchaku, shuriken “throwing stars,” and switchblade knives—per se illegal, despite possible legal uses (such as martial-arts training).[31] Similarly, although there is a strong Second Amendment protection for firearms in the United States, it is illegal for a felon to possess a firearm.[32] The reason these prohibitions developed is because it was deemed that possession of these devices in most contexts had no other possible use than the violation of the law. But these sorts of technologies are the exception, not the rule. Many chemicals that can be easily used as poisons are nonetheless available as, e.g., cleaning agents or fertilizers.

1.        The EU AI Act: An overly broad attempt to regulate AI

Nonetheless, some advocate regulating AI by placing new technologies into various broad categories of risk, each with their own attendant rules. For example, as proposed by the European Commission, the EU’s AI Act would regulate the use of AI systems that ostensibly pose risks to health, safety, and fundamental rights.[33] The proposal defines AI systems broadly to include essentially any software, and sorts them into three risk levels: unacceptable, high, and limited risk.[34] Unacceptable-risk systems are prohibited outright, while high-risk systems are subject to strict requirements, including mandatory conformity assessments.[35] Limited-risk systems face certain requirements related to adequate documentation and transparency.[36]

The AI Act defines AI so broadly that it would apply even to ordinary general-purpose software, as well as software that uses machine learning but does not pose significant risks.[37] The plain terms of the AI Act could be read to encompass common office applications, spam filters, and recommendation engines, thus potentially imposing considerable compliance burdens on businesses for their use of software that provides benefits dramatically greater than any expected costs.[38] A recently proposed amendment would “ban the use of facial recognition in public spaces, predictive policing tools, and to impose transparency measures on generative AI applications [such as] OpenAI’s ChatGPT.”[39]

This approach constitutes a hodge-podge of top-down tech policing and one-off regulations. The AI Act starts with the presumption that regulators can design an abstract, high-level set of categories that capture the risk from “AI,” and then proceeds to force arbitrary definitions of particular “AI” implementations into those categories. This approach may get some things right and some things wrong, but whatever it gets right will not be the product of principled consistency. For example, it might be the case that “predictive policing” is a problem that merits per se prohibition, but is it really an AI problem? What happens if police get exceptionally good at using publicly available data and spreadsheets to approximate 80% of what they can do with AI? Or even just 50%? Is it the use of AI that is the harm, or the practice itself?

Similarly, a requirement that firms expose the sources on which they train their algorithms might be good in some contexts, but useless or harmful in others.[40] Certainly, it can make sense when thinking about current publicly available generative tools that create images and video, and have no ability to point to a license or permission for their training data. Such cases have a high likelihood of copyright infringement. But should every firm be expected to do this? Surely there will be many cases where firms use their own internal data, or data not subject to property-rights protection at all, but where exposing those sources reveals sensitive internal information, like know-how or other trade secrets. In those cases, a transparency obligation could have a chilling effect.

By contrast, it seems hard to believe that every use of public facial recognition should be banned. For instance, what if local authorities had limited access to facial recognition to find lost children or victims of trafficking?

More broadly, a strict transparency requirement could essentially make advanced machine-learning techniques illegal. Machine-learning systems and applications that employ LLMs make inferences and predictions that are, very often, not replicable.[41] That is, they are not reviewable in a way that could easily be explained to a human in a transparency review. Strong transparency obligations could therefore make it legally untenable to employ those techniques.
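To make the point concrete, consider a minimal sketch (in Python, with purely hypothetical vocabulary and scores) of how sampled generative output works. Generative systems typically draw each output token from a probability distribution, so the same input can produce different outputs across runs, and the only complete “explanation” of any given output is the distribution itself:

```python
# Minimal sketch: why sampled generative outputs are often not replicable.
# The vocabulary and scores below are illustrative assumptions, not taken
# from any real model; production LLMs sample over ~100,000 tokens.
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # for numerical stability
    p = np.exp(z)
    return p / p.sum()

vocab = ["approve", "deny", "defer"]   # toy output vocabulary (assumption)
logits = [2.0, 1.5, 0.5]               # toy model scores (assumption)
probs = softmax(logits)

rng = np.random.default_rng()          # unseeded, as in most production use
for _ in range(3):
    # Identical input, potentially different output on every run.
    print(rng.choice(vocab, p=probs))
```

Seeding the generator would make this toy reproducible, but even then the “reason” for any particular output is a probability distribution over many alternatives, not a rule that can be recited to a reviewer.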

The broad risk-based approach taken by the AI Act faces difficult enforcement hurdles as well, as demonstrated by the EU’s proposal to essentially ban the open-source community from providing access to generative models.[42] In other words, not only do the proposed amendments seek to prohibit large companies such as OpenAI, Google, Anthropic, Amazon, Microsoft, and IBM from offering API access to generative AI models, but they would also prohibit open-source developers and distributors such as GitHub from doing the same.[43] Moreover, the prohibitions have extraterritorial effects; for example, the EU might seek to impose large fines on U.S. companies for permitting access to their models in the United States, on grounds that those models could be imported into the EU by third parties.[44] These provisions reflect not only an attempt to control the distribution of AI technology, but also the degree to which such attempts would require steering worldwide innovation down a narrow, heavily regulated path.

2.        Focus on the harm and the wrongdoers, not the innovators

None of the foregoing is to suggest that it is impossible for AI to be misused. Where it is misused, there should be actionable legal consequences. For example, if a real-estate developer intentionally used AI tools to screen prospective homebuyers on the basis of protected characteristics, that should be actionable. If a criminal found a novel way to use ChatGPT to commit fraud, that should be actionable. If generative AI is used to create “deepfakes” that further some criminal plot, that should be actionable. But in all those cases, it is not the AI itself that is the relevant unit of legal analysis, but the wrongdoer’s actions and the harms they cause.

Trying to build a regulatory framework that makes it impossible for bad actors to misuse AI would ultimately be fruitless. Bad actors will always find ways to misuse tools, and heavy-handed regulatory requirements (or even strong suggestions of such) might chill the development of useful tools that could generate an enormous amount of social welfare.

B.      Do Not Neglect the Benefits

A major complication in parsing the wisdom of potential AI regulation is that the technology remains largely in development. Indeed, this is the impetus for many of the calls to “do something” before it is “too late.”[45] The fear that some express is that, unless a wise regulator intervenes in the development process, the technology will inevitably develop in ways that yield more harm than good.[46]

But trying to regulate AI in accordance with the precautionary principle would almost certainly stifle development and dampen the tremendous, but unknowable, good that would emerge as these technologies mature and we find unique uses for them. Moreover, precautionary regulation, even in high-risk industries like nuclear power, can lead to net harms to social welfare.[47]

It is important here to distinguish two broad categories of concern about AI. First, there is the generalized concern about AGI, expressed as the fear that we are inadvertently creating a superintelligence with the power to snuff out human life at its whim. We reject this fear as a legitimate basis for new regulatory frameworks, although we concede that it is theoretically possible that this presumption may need to be revisited as AI technologies progress. None of the technologies currently under consideration are anywhere close to AGI. They are essentially just advanced prediction engines, whether the predictions concern text or pixels.[48] It seems highly unlikely that we will accidentally stumble onto AGI by plugging a few thousand prediction engines into one another.

Second, there are more realistic concerns that these very impressive technologies will be misused to further discrimination and crime, or will have such a disruptive impact on areas like employment that they quickly generate tremendous harms. When contemplating harms that could occur, however, it is also necessary to recognize the many significant benefits that could be generated. Moreover, as with earlier technologies, economic disruptions will present both challenges and opportunities. It is easy to see, for instance, the immediate threat that ChatGPT poses to the jobs of content writers, but far harder to measure the benefits that will be realized by firms that can deploy this technology to “in-source” tasks.

Firms often face what is called the “make-or-buy” decision. A firm that decides to purchase the services of an outside designer or copywriter has determined that doing so is more efficient than developing that talent in-house. But the fact that many firms employ a particular mix of outsourced and in-house talent to fulfill their business needs does not suggest a universally optimal solution to the make-or-buy problem. All we can do is describe how, under current conditions, firms solve this problem.

AI will surely alter the calculus of the make-or-buy decision. Pre-AI, it might have made sense to outsource a good deal of work that was not core to a firm’s mission. Post-AI, it may be that the firm can afford to hire additional workers who use AI tools to manage the previously outsourced work more quickly and affordably. Thus, the ability of AI tools to shift the make-or-buy decision says, in itself, nothing about the net welfare effects to society. Arguments could very well be made for either side. If history is any guide, however, it appears likely that AI tools will allow firms to do more with less, while also enabling more individuals to start new businesses with less upfront expense.

Moreover, by freeing capital from easily automated tasks, existing firms and new entrepreneurs could better focus on their core business missions. Excess investments previously made in supporting, for example, the creation of marketing content could be repurposed into R&D-intensive work. Simplistic static analyses of the substitution power of AI tools will almost surely mislead us, and make us neglect the larger social welfare that could be gained from organizations improving their efficiency with AI tools.

Economists have consistently found that dynamic competition—characterized by firms vying to deliver novel and enhanced products and services to consumers—contributes significantly more to economic growth than static competition, where technology is held constant, and firms essentially compete solely on price. As Joseph Schumpeter noted:

[I]t is not [price] competition which counts but the competition from the new commodity, the new technology, the new source of supply, the new type of organization…. This kind of competition is as much more effective than the other as a bombardment is in comparison with forcing a door, and so much more important that it becomes a matter of comparative indifference whether competition in the ordinary sense functions more or less promptly; the powerful lever that in the long run expands output and brings down prices is in any case made of other stuff.[49]

Technological advancements yield substantial welfare benefits for consumers, and there is a comprehensive body of scholarly work substantiating the contributions of technological innovation to economic growth and societal welfare.[50] There is also compelling evidence that technological progress engenders extensive spillovers not fully appropriated by the innovators.[51] Business-model innovations—such as advancements in organization, production, marketing, or distribution—can similarly result in extensive welfare gains.[52]

AI tools obviously are delivering a new kind of technological capability for firms and individuals. The disruptions they will bring will similarly spur business-model innovation as firms scramble to find innovative ways to capitalize on the technology. The potential economic dislocations can, in many cases, amount to reconstitution: a person who was a freelance content writer can be shifted to a different position that manages the output of generative AI and provides human edits to ensure that content makes sense and is based in fact. In many other cases, the dislocations will likely lead to increased opportunities for workers of all sorts.

With this in mind, policymakers need to consider how to identify those laws and regulations that are most likely to foster this innovation, while also enabling courts and regulators to adequately deal with potential harms. Although it is difficult to prescribe particular policies to boost innovation, there is strong evidence about what sorts of policies should be avoided. Most importantly, regulation of AI should avoid inadvertently destroying those technologies.[53] As Adam Thierer has argued, “if public policy is guided at every turn by the fear of hypothetical worst-case scenarios and the precautionary mindset, then innovation becomes less likely.”[54]

Thus, policymakers must be cautious to avoid unduly restricting the range of AI tools that compete for consumer acceptance. Key to fostering investment and innovation is not merely the endorsement of technological advancement, but advocacy for policies that empower innovators to execute and commercialize their technology.

By contrast, consider again the way that some EU lawmakers want to treat “high risk” algorithms under the AI Act. According to recently proposed amendments, if a “high risk” algorithm learns something beyond what its developers expect it to learn, the algorithm would need to undergo a conformity assessment.[55]

One of the prime strengths of AI tools is their capacity for unexpected discoveries, offering potential insights and solutions that might not have been anticipated by human developers. As the Royal Society has observed:

Machine learning is a branch of AI that enables computer systems to perform specific tasks intelligently. Traditional approaches to programming rely on hardcoded rules, which set out how to solve a problem, step-by-step. In contrast, machine learning systems are set a task, and given a large amount of data to use as examples (and non-examples) of how this task can be achieved, or from which to detect patterns. The system then learns how best to achieve the desired output.[56]

By labeling unexpected behavior as inherently risky and necessitating regulatory review, we risk stifling this serendipitous aspect of AI technologies, potentially curtailing their capacity for innovation. It could contribute to a climate of regulatory caution that hampers swift progress in discovering the full potential and utility of AI tools.
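The Royal Society’s distinction can be illustrated with a short, purely hypothetical Python sketch: the first function encodes a rule a human wrote down in advance, while the second system infers its own rule from labeled examples, and that inferred rule may differ from anything its developer anticipated:

```python
# Contrast between hardcoded rules and a system that learns from examples.
# The toy "spam" data and feature names below are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: a human specifies the rule, step by step.
def hardcoded_spam_rule(num_links: int, has_greeting: bool) -> bool:
    return num_links > 3 and not has_greeting

# Machine learning: the system is given examples and infers a rule itself.
X = [[0, 1], [1, 1], [5, 0], [7, 0], [2, 1], [6, 1]]  # [num_links, has_greeting]
y = [0, 0, 1, 1, 0, 1]                                # 1 = spam (toy labels)

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[4, 0]]))  # a learned decision, not a hand-written one
```

The learned decision boundary here happens to be simple, but nothing guarantees that; a regime that treats any departure from developer expectations as presumptively risky would burden precisely this learning behavior.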

C.     AI Regulation Should Follow the Model of Common Law

In a recent hearing of the U.S. Senate Judiciary Committee, OpenAI CEO Sam Altman suggested that the United States needs a central “AI regulator.”[57] As a general matter, we expect this would be unnecessarily duplicative. As we have repeatedly emphasized, the right approach to regulating AI is not the establishment of an overarching regulatory framework, but a careful examination of how AI technologies will variously interact with different parts of the existing legal system. We are not alone in this; former Special Assistant to the President for Technology and Competition Policy Tim Wu recently opined that federal agencies would be well-advised to rely on existing law and enhance that law where necessary in order to catch unexpected situations that may arise from the use of AI tools.[58]

As Judge Easterbrook famously wrote in the context of what was then called “cyberspace,” we do not need a special law for AI any more than we need a “law of the horse.”[59]

1.        An AI regulator’s potential effects on competition

More broadly, there are risks to competition that attend creating a centralized regulator for a new technology like AI. As an established player in the AI market, OpenAI might favor a strong central regulator because of the potential that such an agency could act in ways that hinder the viability of new entrants.[60] In short, an incumbent often can gain by raising its rivals’ regulatory costs, or by manipulating the relationship between its industry’s average and marginal costs. This dynamic can create strong strategic incentives for industry incumbents to promote regulation.

Economists and courts have long studied actions that generate or amplify market dominance by placing competitors at a disadvantage, especially by raising rivals’ costs.[61] There exist numerous strategies to put competitors at a disadvantage or push them out of the market without needing to compete on price. While antitrust action focuses on private actors and their ability to raise rivals’ costs, it is well-accepted that “lobbying legislatures or regulatory agencies to create regulations that disadvantage rivals” has similar effects.[62]

Suppose a new regulation imposes $1 million in annual compliance costs. Only companies that are sufficiently large and profitable will be able to cover those costs, which keeps out newcomers and smaller competitors. This effect of excluding smaller competitors by raising their costs may more than offset the regulatory burden on the incumbent. New entrants typically produce on a smaller scale, and therefore find it more difficult to spread increased costs over a large number of units. This makes it harder for them to compete with established firms like OpenAI, which can absorb these costs more easily due to their larger scale of production.
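A back-of-the-envelope sketch in Python (using the hypothetical $1 million figure above and assumed sales volumes) shows how a fixed compliance cost weighs far more heavily, per unit, on a small entrant than on a large incumbent:

```python
# Fixed compliance costs fall hardest on small-scale entrants.
# Output volumes are assumed for illustration only.
COMPLIANCE_COST = 1_000_000  # hypothetical annual compliance cost ($)

for firm, units_sold in [("incumbent", 10_000_000), ("new entrant", 100_000)]:
    per_unit_burden = COMPLIANCE_COST / units_sold
    print(f"{firm}: ${per_unit_burden:.2f} per unit")

# incumbent:   $0.10 per unit
# new entrant: $10.00 per unit -- a 100x heavier burden at smaller scale
```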

This type of cost increase can often look benign. In United Mine Workers v. Pennington,[63] a coal corporation was alleged to have conspired with the union representing its workforce to establish higher wage rates. How could higher wages be anticompetitive? This seemingly contradictory conclusion came from University of California, Berkeley economist Oliver Williamson, who interpreted the action as an effort to maximize profits by raising entry barriers.[64] Using a model with a dominant incumbent and a fringe of other competitors, he demonstrated that wage-rate increases could lead to profit maximization if they escalated the fringe’s costs more than they did the dominant firm’s costs. Intuitively, the market price is set by the marginal (fringe) producers, so the dominant firm’s price is constrained by its competitors’ costs. If a regulation raises the competitors’ per-unit costs by $2, the dominant company will be able to raise its price by as much as $2 per unit. Even if the regulation also hurts the dominant firm, so long as its price increase exceeds its additional cost, the dominant firm can profit from the regulation.
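Williamson’s logic can likewise be sketched with hypothetical figures: if the fringe sets the market price, a regulation that raises fringe costs by more than it raises the incumbent’s costs lets the incumbent raise its price by more than its own cost increase, as in this illustrative Python sketch:

```python
# Illustrative sketch of Williamson's raising-rivals'-costs logic.
# All figures are hypothetical assumptions, not empirical estimates.
fringe_cost_increase = 2.00     # regulation raises fringe per-unit cost ($)
incumbent_cost_increase = 0.50  # incumbent absorbs the same rule more cheaply
incumbent_output = 1_000_000    # units sold by the dominant firm

# Marginal (fringe) producers set the market price, so the incumbent can
# raise its own price by roughly the fringe's per-unit cost increase.
price_increase = fringe_cost_increase

profit_change = (price_increase - incumbent_cost_increase) * incumbent_output
print(f"Dominant firm's profit change: ${profit_change:,.0f}")
# => $1,500,000: the incumbent gains even though it, too, bears new costs.
```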

As a result, while regulations might increase costs for OpenAI, they also serve to protect it from potential competition by raising the barriers to entry. In this sense, regulation can be seen as a strategic tool for incumbent firms to maintain or strengthen their market position. None of this analysis rests on OpenAI explicitly wanting to raise its rivals’ costs. That is just the competitive implication of such regulations. Thus, while there may be many benign reasons for a firm like OpenAI to call for regulation in good faith, the ultimate lesson presented by the economics of regulation should counsel caution when imposing strong centralized regulations on a nascent industry.

2.        A central licensing regulator for AI would be a mistake

NTIA asks:

Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers?[65]

We are not alone in the belief that imposing a licensing regime would present just such a barrier to innovation.[66] In the recent Senate hearings, the idea of a central regulator was endorsed as a means to create and administer a licensing regime.[67] Perhaps in some narrow applications of particular AI technologies, there could be specific contexts in which licensing is appropriate (e.g., in providing military weapons), but broadly speaking, we believe this is inadvisable. Owing to the highly diverse nature of AI technologies, trying to license AI development is a fraught exercise, as NTIA itself acknowledges:

A developer training an AI tool on a customer’s data may not be able to tell how that data was collected or organized, making it difficult for the developer to assure the AI system. Alternatively, the customer may use the tool in ways the developer did not foresee or intend, creating risks for the developer wanting to manage downstream use of the tool. When responsibility along this chain of AI system development and deployment is fractured, auditors must decide whose data and which relevant models to analyze, whose decisions to examine, how nested actions fit together, and what is within the audit’s frame.[68]

Rather than design a single regulation to cover AI, ostensibly administered through a single licensing regime, NTIA should acknowledge the broad set of industries currently seeking to employ a diverse range of AI products that differ in fundamental ways. The implications of AI deployment in health care, for instance, vastly differ from those in transportation. A centralized AI regulator might struggle to comprehend the nuances and intricacies of each distinct industry, thus potentially leading to ineffective or inappropriate licensing requirements.

Analogies have been drawn between AI and sectors like railroads and nuclear power, which have dedicated regulators.[69] These sectors, however, are more homogenous and discrete than the AI industry (if such an industry even exists, apart from the software industry more generally). AI is much closer to a general-purpose tool, like chemicals or combustion engines. We do not establish central regulators to license every aspect of the development and use of chemicals, but instead allow different agencies to regulate their use as is appropriate for the context. For example, the Occupational Safety and Health Administration (OSHA) regulates employee exposure to dangerous substances encountered in the workplace, while various consumer-protection boards regulate the adulteration of goods.

The notion of licensing implies that companies would need to obtain permission prior to commercializing a particular piece of code. This could introduce undesirable latency into the process of bringing AI technologies to market (or, indeed, even of correcting errors in already-deployed products). Given the expansive potential to integrate AI technologies into diverse products and services, this delay could significantly impede technological progress and innovation. In light of the strong global interest in the subject, such delays threaten to leave the United States behind its more energetic competitors in the race for AI innovation.

As in other consumer-protection regimes, a better approach would be to eschew licensing and instead create product-centric and harm-centric frameworks that other sectoral regulators or competition authorities could incorporate into their tailored rules for goods and services.

For instance, safety standards for medical devices should be upheld, irrespective of whether AI is involved. This product-centric regulatory approach would ensure that the desired outcomes of safety, quality, and effectiveness are achieved without stymieing innovation. With their deep industry knowledge and experience, sectoral regulators will generally be better positioned to address the unique challenges and considerations posed by AI technology deployed within their spheres of influence.

NTIA alludes to one of the risks of an overaggregated regulator when it notes that:

For some trustworthy AI goals, it will be difficult to harmonize standards across jurisdictions or within a standard-setting body, particularly if the goal involves contested moral and ethical judgements. In some contexts, not deploying AI systems at all will be the means to achieve the stated goals.[70]

Indeed, the institutional incentives that drive bureaucratic decision making often converge on this solution of preventing unexpected behavior by regulated entities.[71] But at what cost? If a regulator is unable to imagine how to negotiate the complicated tradeoffs among interested parties across all AI-infused technologies, it will act to slow or prevent the technology from coming to market. This will make us all worse off, and will only strengthen the position of our competitors on the world stage.

D.      The Impossibility of Explaining Complexity

NTIA notes that:

According to NIST, ‘‘trustworthy AI’’ systems are, among other things, ‘‘valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with their harmful bias managed.’’[72]

And in the section titled “Accountability Inputs and Transparency,” NTIA asks a series of questions designed to probe what can be considered a realistic transparency obligation for developers and deployers of AI systems. We urge NTIA to resist the idea that AI systems must be “explainable,” for the reasons set forth herein.

One of the significant challenges in AI accountability is making AI systems explainable to users. It is crucial to acknowledge that providing a clear explanation of how an AI model—such as an LLM or a diffusion model—arrives at a specific output is an inherently complex task, and may not be possible at all. As the UK Royal Society has noted in its paper on AI explainability:

Much of the recent excitement about advances in AI has come as a result of advances in statistical techniques. These approaches – including machine learning – often leverage vast amounts of data and complex algorithms to identify patterns and make predictions. This complexity, coupled with the statistical nature of the relationships between inputs that the system constructs, renders them difficult to understand, even for expert users, including the system developers.[73]

These models are designed with intricate architectures and often rely on vast troves of data to arrive at outputs, which can make it nearly impossible to reverse-engineer the process. Due to these complexities, it may be unfeasible to make AI fully explainable to users. Moreover, users themselves often do not value explainability, and may be largely content with a “black box” system when it consistently provides accurate results.[74]
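A toy illustration of this point, assuming only that scikit-learn is available: even a minuscule neural network trained on a function as simple as XOR yields parameters that are fully inspectable yet carry no human-legible explanation, and production models have billions of such parameters:

```python
# Even a tiny neural network's learned parameters are inspectable
# but not human-interpretable. Toy example; real systems are vastly larger.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: a simple, fully understood target function

model = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                      max_iter=5000, random_state=0)
model.fit(X, y)

print(model.predict(X))   # the behavior is easy to observe...
for layer in model.coefs_:
    print(layer)          # ...but the "explanation" is just weight matrices
```

If four hidden units already defy plain-language explanation, a mandate to “explain” a frontier model’s outputs is not a disclosure rule so much as a prohibition.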

Instead, to the extent that regulators demand visibility into AIs, the focus should be on the transparency of the AI-development process, system inputs, and the general guidelines for AI that developers use in preparing their models. Ultimately, we suspect that, even here, such measures will do little to resolve the inherent complexity in understanding how AI tools produce their outputs.

In a more limited sense, we should consider the utility in transparency of AI-infused technology for most products and consumers. NTIA asks:

Given the likely integration of generative AI tools such as large language models (e.g., ChatGPT) or other general-purpose AI or foundational models into downstream products, how can AI accountability mechanisms inform people about how such tools are operating and/or whether the tools comply with standards for trustworthy AI?[75]

As we note above, the proper level of analysis for AI technologies is the product into which they are incorporated. But even there, we need to ask whether it matters to an end user whether a product they are using relies on ChatGPT or a different algorithm for predictively generating text. If the product malfunctions, what matters is the malfunction and the accountability for the product. Most users do not really care whether a developer writes a program using C++ or Java, and neither should they explicitly care whether the developer incorporates a generative AI algorithm to predict text, or uses some other method of statistical analysis. The presence of an AI component becomes analytically necessary when diagnosing how something went wrong, but ex ante, it is likely irrelevant from a consumer’s perspective.

Thus, it may be the case that a more fruitful avenue for NTIA to pursue would be to examine how a strict-liability or product-liability legal regime might be developed for AI. These sorts of legal frameworks put the onus on AI developers to ensure that their products behave appropriately. Such legal frameworks also provide consumers with reassurance that they have recourse if and when they are harmed by a product that contains AI technology. Indeed, it could very well be the case that overemphasizing “trust” in AI systems could end up misleading users in important contexts.[76] This would strengthen the case for a predictable liability regime.

1.        The deepfakes problem demonstrates that we do not need a new body of law

The phenomenon of generating false depictions of individuals using advanced AI techniques—commonly called “deepfakes”—is undeniably concerning, particularly when it can be used to create detrimental false public statements,[77] facilitate fraud,[78] or create nonconsensual pornography.[79] But while deepfakes use modern technological tools, they are merely the most recent iteration of the age-old problem of forgery. Importantly, existing law already equips us with the tools needed to address the challenges posed by deepfakes, rendering many recent legislative proposals at the state level both unnecessary and potentially counterproductive. Consider one of the leading proposals offered by New York State.[80]

Existing laws in New York and at the federal level provide remedies for individuals aggrieved by deepfakes, and they do so within a legal system that has already worked to incorporate the context of these harms, as well as the restrictions of the First Amendment and related defenses. For example, defamation laws can be applied where a deepfake falsely suggests an individual has posed for an explicit photograph or video.[81] New York law also acknowledges the tort of intentional infliction of emotional distress, which likely could be applied to the unauthorized use of a person’s likeness in explicit content.[82] In addition, the tort of unjust enrichment can be brought to bear where appropriate, as can the Lanham Act §43(a), which prohibits false advertising and implied false endorsements.[83] Furthermore, victims may hold copyright in the photograph or video used in a deepfake, presenting grounds for an infringement action.[84]

Thus, while advanced deepfakes are new, the harms they can cause, and the law’s ability to address those harms, are not novel. Legislation that attempts to carve out new categories of harms in these situations is, at best, reinventing the wheel and, at worst, risks creating confusing tensions in the existing legal system.

III.      The Role of NTIA in AI Accountability

NTIA asks if “the lack of a federal law focused on AI systems [is] a barrier to effective AI accountability?”[85] In short, no, this is not a barrier, so long as the legal system is allowed to evolve to incorporate the novel challenges raised by AI technologies.

As noted in the previous section, there is a need to develop standards, both legal and technical. As we are in the early days of AI technology, the exact contours of the various legal changes that might be needed to incorporate AI tools into existing law remain unclear. At this point, we would urge NTIA—to the extent that it wants to pursue regulatory, licensing, transparency, and other similar obligations—to develop a series of workshops through which leading technology and legal experts could confer on developing a vision for how such legal changes would work in practice.

By gathering stakeholders and fostering an ongoing dialogue, NTIA can help to create a collaborative environment in which organizations can share knowledge, experiences, and innovations to address AI accountability and its associated challenges. By promoting industry collaboration, NTIA could also help build a foundation of trust and cooperation among organizations involved in AI development and deployment. This, in turn, will facilitate the establishment of standards and best practices that address specific concerns, while mitigating the risk of overregulation that could stifle innovation and progress. In this capacity, NTIA should focus on encouraging the development of context-specific best practices that prioritize the containment of identifiable harms. By fostering a collaborative atmosphere, the agency can support a dynamic and adaptive AI ecosystem that is capable of addressing evolving challenges while safeguarding the societal benefits of AI advancements.

In addressing AI accountability, it is essential for NTIA to adopt a harm-focused framework that targets the negative impacts of AI systems rather than the technology itself. This approach would recognize that AI technology can have diverse applications, with consequences that will depend on the context in which they are used. By prioritizing the mitigation of specific harms, NTIA can ensure that regulations are tailored to address real-world outcomes and provide a more targeted and effective regulatory response.

A harm-focused framework also acknowledges that different AI technologies pose differing levels of risk and potential for misuse. NTIA can play a proactive role in guiding the creation of policies that reflect these nuances, striking a balance between encouraging innovation and ensuring the responsible development and use of AI. By centering the discussion on actual harms and their causes, NTIA can foster meaningful dialogue among stakeholders and facilitate the development of industry best practices designed to minimize negative consequences.

Moreover, this approach ensures that AI accountability policies are consistent with existing laws and regulations, as it emphasizes the need to assess AI-related harms within the context of the broader legal landscape. By aligning AI accountability measures with other established regulatory frameworks, the NTIA can provide clear guidance to AI developers and users, while avoiding redundancy and conflicting regulations. Ultimately, a harm-focused framework allows the NTIA to better address the unique challenges posed by AI technology and foster an assurance ecosystem that prioritizes safety, ethics, and legal compliance without stifling innovation.

IV.    Conclusion

Another risk of the current AI hysteria is that fatigue will set in, and the public will become numbed to potential harms. Overall, this may shrink the public’s appetite for the kinds of legal changes that will be needed to address those actual harms that do emerge. News headlines that push doomsday rhetoric and a community of experts all too eager to respond to the market incentives for apocalyptic projections only exacerbate the risk of that outcome. A recent one-line letter, signed by AI scientists and other notable figures, highlights the problem:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.[86]

Novel harms absolutely will emerge from products that employ AI, as has been the case for every new technology. The introduction of automobiles, for example, created new risks of high-speed traffic deaths. But rhetoric that casts AI as an existential risk on the level of a pandemic or nuclear war is irresponsible.

Perhaps one of the most important positions NTIA can assume, therefore, is that of a calm, collected expert agency that helps restrain the worst impulses to regulate AI out of existence due to blind fear.

In essence, the key challenge confronting policymakers lies in navigating the dichotomy of mitigating actual risks presented by AI, while simultaneously safeguarding the substantial benefits it offers. It is undeniable that the evolution of AI will bring about disruption and may provide a conduit for malevolent actors, just as technologies like the printing press and the internet have done in the past. This does not, however, merit taking an overly cautious stance that would suppress the potential benefits of AI.

As we formulate policy, it is crucial to eschew dystopian science-fiction narratives and instead ground our approach in realistic scenarios. The proposition that computer systems, even those as advanced as AI tools, could spell the end of humanity lacks substantial grounding.

The current state of affairs represents a geo-economic competition to harness the benefits of AI in myriad domains. Contrary to fears that AI poses an existential risk, the real danger may well lie in attempts to overly regulate and stifle the technology’s potential. The indiscriminate imposition of regulations could inadvertently thwart AI advancements, resulting in a loss of potential benefits that could be far more detrimental to social welfare.

[1] AI Accountability Policy Request for Comment, Docket No. 230407-0093, 88 FR 22433, National Telecommunications and Information Administration (Apr. 14, 2023) (“RFC”).

[2] Indeed, this approach appears to be the default position of many policymakers around the world. See, e.g., Mikolaj Barczentewicz, EU’s Compromise AI Legislation Remains Fundamentally Flawed, Truth on the Market (Feb. 8, 2022), https://truthonthemarket.com/2022/02/08/eus-compromise-ai-legislation-remains-fundamentally-flawed. The fundamental flaw of this approach is that, while AI techniques use statistics, “statistics also includes areas of study which are not concerned with creating algorithms that can learn from data to make predictions or decisions. While many core concepts in machine learning have their roots in data science and statistics, some of its advanced analytical capabilities do not naturally overlap with these disciplines.” See Explainable AI: The Basics, The Royal Society (2019) at 7, available at https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf (“Royal Society Briefing”).

[3] John P. Holdren, Cass R. Sunstein, & Islam A. Siddiqui, Memorandum for the Heads of Executive Departments and Agencies, Executive Office of the White House (Jun. 9, 2011), available at https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/for-agencies/nanotechnology-regulation-and-oversight-principles.pdf.

[4] Id.

[5] Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. L. Forum 207 (1996).

[6] LLMs are a type of artificial-intelligence model designed to parse and generate human language at a highly sophisticated level. The deployment of LLMs has driven progress in fields such as conversational AI, automated content creation, and improved language understanding across a multitude of applications, even suggesting that these models might represent an initial step toward the achievement of artificial general intelligence (AGI). See Alejandro Peña et al., Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs, arXiv (Jun. 5, 2023), https://arxiv.org/abs/2306.02864v1.

[7] Diffusion models are a type of generative AI built from a hierarchy of denoising autoencoders, which can achieve state-of-the-art results in such tasks as class-conditional image synthesis, super-resolution, inpainting, colorization, and stroke-based synthesis. Unlike other generative models, these likelihood-based models do not exhibit mode collapse and training instabilities. By leveraging parameter sharing, they can model extraordinarily complex distributions of natural images without necessitating billions of parameters, as in autoregressive models. See Robin Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, arXiv (Dec. 20, 2021), https://arxiv.org/abs/2112.10752.

[8] Recommender systems are advanced tools currently used across a wide array of applications, including web services, books, e-learning, tourism, movies, music, e-commerce, news, and television programs, where they provide personalized recommendations to users. Despite recent advancements, there is a pressing need for further improvements and research in order to offer more efficient recommendations that can be applied across a broader range of applications. See Deepjyoti Roy & Mala Dutta, A Systematic Review and Research Perspective on Recommender Systems, 9 J. Big Data 59 (2022), available at https://journalofbigdata.springeropen.com/counter/pdf/10.1186/s40537-022-00592-5.pdf.

[9] AGI refers to hypothetical future AI systems that possess the ability to understand or learn any intellectual task that a human being can do. While the realization of AGI remains uncertain, it is distinct from the more specialized AI systems currently in use. For a skeptical take on the possibility of AGI, see Roger Penrose, The Emperor’s New Mind (Oxford Univ. Press 1989).

[10] Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890).

[11] Id. at 200.

[12] Id. at 193.

[13] Id. at 196-97.

[14] Notably, courts do try to place a value on emotional distress and related harms. But because these sorts of violations are deeply personal, attempts to quantify such harms in monetary terms are rarely satisfactory to the parties involved.

[15] Martin Giles, Bounty Hunters Tracked People Secretly Using US Phone Giants’ Location Data, MIT Tech. Rev. (Feb. 7, 2019), https://www.technologyreview.com/2019/02/07/137550/bounty-hunters-tracked-people-secretly-using-us-phone-giants-location-data.

[16] See, e.g., Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 439 (1984) (The Supreme Court imported the doctrine of “substantial noninfringing uses” into copyright law from patent law).

[17] A notable example is how the Patriot Act, written to combat terrorism, was ultimately used to take down a sitting governor in a prostitution scandal. See Noam Biale, Eliot Spitzer: From Steamroller to Steamrolled, ACLU (Oct. 29, 2007), https://www.aclu.org/news/national-security/eliot-spitzer-steamroller-steamrolled.

[18] RFC at 22437.

[19] Id. at 22433.

[20] Id. at 22436.

[21] Indeed, the RFC acknowledges that, even as some groups are developing techniques to evaluate AI systems for bias or disparate impact, “It should be recognized that for some features of trustworthy AI, consensus standards may be difficult or impossible to create.” RFC at 22437. Arguably, this problem is inherent to constructing an overaggregated regulator, particularly one that will be asked to consult a broad public on standards and rulemaking.

[22] Id. at 22439.

[23] Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417.

[24] Id.

[25] Id.

[26] Id. at 456.

[27] Id.

[28] See, e.g., Defendant Indicted for Camcording Films in Movie Theaters and for Distributing the Films on Computer Networks; First Prosecution Under Newly-Enacted Family Entertainment Copyright Act, U.S. Dep’t of Justice (Aug. 4, 2005), available at https://www.justice.gov/archive/criminal/cybercrime/press-releases/2005/salisburyCharge.htm.

[29] 17 U.S.C. § 106.

[30] See 17 U.S.C. § 107; Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 590 (1994) (“Since fair use is an affirmative defense, its proponent would have difficulty carrying the burden of demonstrating fair use without favorable evidence about relevant markets.”).

[31] See, e.g., N.Y. Penal Law § 265.01; Wash. Rev. Code Ann. § 9.41.250; Mass. Gen. Laws Ann. ch. 269, § 10(b).

[32] See, e.g., 18 U.S.C.A. § 922(g).

[33] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final. The latest proposed text of the AI Act is available at https://www.europarl.europa.eu/doceo/document/A-9-2023-0188_EN.html.

[34] Id. at amendment 36 recital 14.

[35] Id.

[36] Id.

[37] See, e.g., Mikolaj Barczentewicz, supra note 2.

[38] Id.

[39] Foo Yun Chee, Martin Coulter & Supantha Mukherjee, EU Lawmakers’ Committees Agree Tougher Draft AI Rules, Reuters (May 11, 2023), https://www.reuters.com/technology/eu-lawmakers-committees-agree-tougher-draft-ai-rules-2023-05-11.

[40] See infra at notes 71-77 and accompanying text.

[41] Explainable AI: The Basics, supra note 2 at 8.

[42] See, e.g., Delos Prime, EU AI Act to Target US Open Source Software, Technomancers.ai (May 13, 2023), https://technomancers.ai/eu-ai-act-to-target-us-open-source-software.

[43] Id.

[44] To be clear, it is not certain how such an extraterritorial effect will be obtained, and this is just a proposed amendment to the law. Likely, there will need to be some form of jurisdictional hook, i.e., that this applies only to firms with an EU presence.

[45]  Eliezer Yudkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, Time (Mar. 29, 2023), https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough.

[46] See, e.g., Kiran Stacey, UK Should Play Leading Role on Global AI Guidelines, Sunak to Tell Biden, The Guardian (May 31, 2023), https://www.theguardian.com/technology/2023/may/31/uk-should-play-leading-role-in-developing-ai-global-guidelines-sunak-to-tell-biden.

[47] See, e.g., Matthew J. Neidell, Shinsuke Uchida & Marcella Veronesi, The Unintended Effects from Halting Nuclear Power Production: Evidence from Fukushima Daiichi Accident, NBER Working Paper 26395 (2022), https://www.nber.org/papers/w26395 (Japan abandoning nuclear energy in the wake of the Fukushima disaster led to decreased energy consumption, which in turn led to increased mortality).

[48] See, e.g., Will Knight, Some Glimpse AGI in ChatGPT. Others Call It a Mirage, Wired (Apr. 10, 2023), https://www.wired.com/story/chatgpt-agi-intelligence (“GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input.”)

[49] Joseph A. Schumpeter, Capitalism, Socialism And Democracy 74 (1976).

[50] See, e.g., Jerry Hausman, Valuation of New Goods Under Perfect and Imperfect Competition, in The Economics Of New Goods 209–67 (Bresnahan & Gordon eds., 1997).

[51] William D. Nordhaus, Schumpeterian Profits in the American Economy: Theory and Measurement, NBER Working Paper No. 10433 (Apr. 2004) at 1, http://www.nber.org/papers/w10433 (“We conclude that only a miniscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.”).

[52] See generally Oliver E. Williamson, Markets And Hierarchies, Analysis And Antitrust Implications: A Study In The Economics Of Internal Organization (1975).

[53] See, e.g., Nassim Nicholas Taleb, Antifragile: Things That Gain From Disorder (2012) (“In action, [via negativa] is a recipe for what to avoid, what not to do.”).

[54] Adam Thierer, Permissionless Innovation: The Continuing Case For Comprehensive Technological Freedom (2016).

[55] See, e.g., Artificial Intelligence Act, supra note 33, at amendment 112 recital 66.

[56] Explainable AI: The Basics, supra note 2 at 6.

[57] Cecilia Kang, OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing, NY Times (May 16, 2023), https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html; see also Mike Solana & Nick Russo, Regulate Me, Daddy, Pirate Wires (May 23, 2023), https://www.piratewires.com/p/regulate-me-daddy.

[58] Cristiano Lima, Biden’s Former Tech Adviser on What Washington is Missing about AI, The Washington Post (May 30, 2023), https://www.washingtonpost.com/politics/2023/05/30/biden-former-tech-adviser-what-washington-is-missing-about-ai.

[59] Frank H. Easterbrook, supra note 5.

[60]  See Lima, supra note 58 (“I’m not in favor of an approach that would create heavy compliance costs for market entry and that would sort of regulate more abstract harms.”)

[61] Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[62] Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[63] United Mine Workers of Am. v. Pennington, 381 U.S. 657, 661 (1965).

[64] Oliver E. Williamson, Wage Rates as a Barrier to Entry: The Pennington Case in Perspective, 82:1 Q. J. Econ. 85 (1968), https://doi.org/10.2307/1882246.

[65] RFC at 22439.

[66] See, e.g., Lima, supra note 58 (“Licensing regimes are the death of competition in most places they operate”).

[67] Kang, supra note 57; Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Technology, and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023) (statement of Sam Altman, at 11), available at https://www.judiciary.senate.gov/download/2023-05-16-testimony-altman.

[68] RFC at 22437.

[69] See, e.g., Transcript: Senate Judiciary Subcommittee Hearing on Oversight of AI, Tech Policy Press (May 16, 2023), https://techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai (“So what I’m trying to do is make sure that you just can’t go build a nuclear power plant. Hey Bob, what would you like to do today? Let’s go build a nuclear power plant. You have a nuclear regulatory commission that governs how you build a plant and is licensed.”)

[70] RFC at 22438.

[71] See, e.g., Raymond J. March, The FDA and the COVID-19: A Political Economy Perspective, 87(4) S. Econ. J. 1210, 1213-16 (2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8012986 (discussing the political economy that drives incentives of bureaucratic agencies in the context of the FDA’s drug-approval process).

[72] RFC at 22434.

[73] Explainable AI: The Basics, supra note 2 at 12.

[74] Id. at 20.

[75] RFC at 22439.

[76] Explainable AI: The Basics, supra note 2 at 22. (“Not only is the link between explanations and trust complex, but trust in a system may not always be a desirable outcome. There is a risk that, if a system produces convincing but misleading explanations, users might develop a false sense of confidence or understanding, mistakenly believing it is trustworthy as a result.”)

[77] Kate Conger, Hackers’ Fake Claims of Ukrainian Surrender Aren’t Fooling Anyone. So What’s Their Goal?, NY Times (Apr. 5, 2022), https://www.nytimes.com/2022/04/05/us/politics/ukraine-russia-hackers.html.

[78] Pranshu Verma, They Thought Loved Ones Were Calling for Help. It Was an AI Scam, The Washington Post (Mar. 5, 2023), https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam.

[79] Video: Deepfake Porn Booms in the Age of A.I., NBC News (Apr. 28, 2023), https://www.nbcnews.com/now/video/deepfake-porn-booms-in-the-age-of-a-i-171726917562.

[80] S5857B, NY State Senate (2018), https://www.nysenate.gov/legislation/bills/2017/s5857/amendment/b.

[81] See, e.g., Rejent v. Liberation Publications, Inc., 197 A.D.2d 240, 244–45 (1994); see also, Leser v. Penido, 62 A.D.3d 510, 510–11 (2009).

[82] See, e.g., Howell v. New York Post Co., 612 N.E.2d 699 (1993).

[83] See, e.g., Mandarin Trading Ltd. v. Wildenstein, 944 N.E.2d 1104 (2011); 15 U.S.C. §1125(a).

[84] 17 U.S.C. § 106.

[85] RFC at 22440.

[86] Statement on AI Risk, Center for AI Safety, https://www.safe.ai/statement-on-ai-risk (last visited Jun. 7, 2023).
