
Measuring the Openness of AI Foundation Models: Competition and Policy Implications

Scholarship

Abstract

This paper presents the first comprehensive evaluation of AI foundation model licenses as drivers of innovation commons. It introduces a novel methodology for assessing the openness of AI foundation models and applies this approach across prominent models such as OpenAI’s GPT-4, Meta’s Llama 3, Google’s Gemini, Mistral’s 8x7B, and MidJourney’s V6. The results yield practical policy recommendations and focal points for competition agencies.

Read at SSRN.



When Protection Becomes Overreach

Popular Media

The Federal Trade Commission has shifted focus in recent months away from its traditional goal of protecting consumers and toward one of protecting workers instead. Most recently, the agency issued a rule that would ban essentially all noncompete agreements in employment contracts. The rule would forbid new noncompetes, and in existing contracts, only noncompetes covering senior executives would remain enforceable.



How Should We Measure Competition?

TOTM

Competition is the driving force behind the success of markets. It’s hard to imagine a thriving market economy without the presence of competitive forces.

But how do we actually measure competition? I use the term all the time, but do we have a true measure of it? The question is more complex than it may seem at first glance.



ICLE Comments to UK Competition and Markets Authority on AI Partnerships

Regulatory Comments

Executive Summary

We thank the Competition and Markets Authority (CMA) for this invitation to comment (ITC) on partnerships and other arrangements involving artificial intelligence (AI).[1] The International Center for Law & Economics (ICLE) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

In our comments, we express concern that policymakers’ current worries about competition in AI industries may be unwarranted. This is particularly true of the notion that incumbent digital platforms may use strategic partnerships with AI firms to insulate themselves from competition, a concern that animates the three transactions at the centre of the current ITC:

  1. Amazon’s partnership with Anthropic;
  2. Microsoft’s partnership with Mistral AI; and,
  3. Microsoft’s hiring of former Inflection AI employees (including, notably, founder Mustafa Suleyman) and related arrangements with the company.

Indeed, publicly available information suggests these transactions may not warrant merger-control investigation, let alone the heightened scrutiny that comes with potential Phase II proceedings. At the very least, given the AI industry’s competitive landscape, there is little to suggest these transactions merit closer scrutiny than similar deals in other sectors.

Overenforcement in the field of generative AI paradoxically could engender the very harms that policymakers currently seek to avert. As we explain in greater detail below, preventing so-called “big tech” firms from competing in these markets (for example, by threatening competition intervention as soon as they build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading generative-AI firms in check. In short, competition in AI markets is important,[2] but naïvely trying to hold back tech firms that are incumbents in adjacent markets, out of misguided fears that they will come to dominate this space, is likely to do more harm than good.

At a more granular level, there are important reasons to believe these agreements will have no negative impact on competition, and that they may, in fact, benefit consumers—e.g., by enabling those startups to raise capital and deploy their services at an even larger scale. In other words, the deals do not bear any of the prima facie traits of “killer acquisitions”, or even of acquisitions of “nascent potential competitors”.[3]

Most importantly, these partnerships all involve the acquisition of minority stakes that do not entail any change of control over the target companies. Amazon, for instance, will not have “ownership control” of Anthropic. The precise number of shares acquired has not been made public, but a reported investment of $4 billion in a company valued at $18.4 billion does not give Amazon a majority stake or sufficient voting rights to control the company or its competitive strategy.[4] It has also been reported that the deal will not give Amazon any seats on the Anthropic board or special voting rights (such as the power to veto some decisions).[5] There is thus little reason to believe Amazon has acquired indirect or de facto control over Anthropic.

Microsoft’s investment in Mistral AI is even smaller, in both absolute and relative terms. Microsoft is reportedly investing only $16 million in a company valued at $2.1 billion.[6] This represents less than 1% of Mistral’s equity, making it all but impossible for Microsoft to exert any significant control or influence over Mistral AI’s competitive strategy. Likewise, there have been no reports of Microsoft acquiring seats on Mistral AI’s board or special voting rights. We can therefore be confident that the deal will not affect competition in AI markets.

Much the same applies to Microsoft’s dealings with Inflection AI. Microsoft hired two of the company’s three founders (which currently does not fall under the scope of merger laws), and also paid $620 million for nonexclusive rights to sell access to the Inflection AI model through its Azure Cloud.[7] Admittedly, the latter could entail (depending on the deal’s specifics) some limited control over Inflection AI’s competitive strategy, but there is currently no evidence to suggest this will be the case.

Finally, none of these deals entails any competitively significant behavioral commitments from the target companies. There are no reports of exclusivity agreements or other commitments that would restrict third parties’ access to these firms’ underlying AI models. Again, this means the deals are extremely unlikely to negatively impact the competitive landscape in these markets.

At a more macro level, how the CMA deals with these proposed partnerships could have important ramifications for the UK economy. On the one hand, competition authorities (including the CMA) may be tempted to avoid the mistakes they arguably made during the formative years of what have become today’s largest online platforms.[8] The argument is that tougher enforcement may have reduced the high levels of concentration we see in these markets (the counterpoint is that these markets present features that naturally lead to relatively high levels of concentration and that this concentration benefits consumers in several ways[9]).

Unfortunately, this urge to curtail false negatives may come at the expense of judicial errors that hobble the UK economy. Discussing the EU’s AI Act during a recent interview, French President Emmanuel Macron implicitly suggested the UK is in a unique position to attract AI (and other tech) investments away from the European Union. In his words:

We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea…

When I look at France, it is probably the first country in terms of artificial intelligence in continental Europe. We are neck and neck with the British. They will not have this regulation on foundational models. But above all, we are all very far behind the Chinese and the Americans. [10]

To capitalise on this opportunity, however, the UK must foster a fertile environment for startup activity. The CMA’s approach to merger review in the AI industry is a small, but important, part of this picture. Looking at AI partnerships in an even-handed manner would signal a commitment to evidence-based policymaking that creates legal certainty for startups. For instance, sound merger-review principles would assure founders that corporate acquisition will remain a viable exit strategy in all but exceptional circumstances.

Of course, none of this is to say that established competition-law principles should play second fiddle to broader geopolitical ambitions. It does, however, suggest that the cost of false positives is particularly high in key industries like AI.

In short, how the CMA approaches these AI partnerships is of pivotal importance for both UK competition policy and the country’s broader economic ambitions. The CMA should therefore look at these partnerships with an open mind, despite the significant political and reputational pressure to be seen as “doing something” in this cutting-edge industry. Generative AI is already changing the ways that many firms do business and improving employee productivity in many industries.[11] The technology is also increasingly useful in the field of scientific research, where it has enabled new complex models that expand scientists’ reach.[12] And while sensible enforcement is of vital importance to maintain competition and consumer welfare, it must be grounded in empirical evidence.

In the remainder of these comments, we will discuss the assumptions that underpin calls for heightened competition scrutiny in AI industries, and explain why they are unfounded. The big picture is that AI markets have grown rapidly, and new players are thriving. This would suggest that competition is intense. If incumbent firms could easily leverage their dominance into burgeoning generative-AI markets, we would not have seen the growth of such AI “unicorns” as OpenAI, Midjourney, and Anthropic, to name but a few. Furthermore, AI platforms developed by incumbent data collectors—such as Meta’s Llama or Google’s Bard, recently relaunched as Gemini—have struggled to gain traction. Of course, this is not to say that competition enforcers shouldn’t care about generative AI markets, but rather that there is currently no apparent need for increased competition scrutiny in these markets.

The comments proceed as follows. Section I summarises recent calls for competition intervention in generative-AI markets. Section II argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”), including in the context of mergers. Section III explains why these effects are unlikely to play a meaningful role in generative-AI markets. Section IV concludes by offering five key takeaways to help policymakers better weigh the tradeoffs inherent to competition intervention (including merger control) in generative-AI markets.

I. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position”.[13] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[14]

While some of these claims continue even today (for example, “big data” is a key component of the U.S. Justice Department’s (DOJ) Google Search and adtech antitrust suits),[15] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022, as well as the advent of AI image-generation services like Midjourney and Dall-E, have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain both in the early stages of mainstream adoption and in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that were purportedly made during the formative years of Web 2.0.[16] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[17] As Lina Khan, chair of the U.S. Federal Trade Commission (FTC), put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI”.[18]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Rather than assessing their prior assumptions based on the current technological moment, enforcers’ top priority appears to be figuring out how to rapidly and almost reflexively deploy existing competition tools to address the presumed competitive failures presented by generative AI.[19]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also to confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[20]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in an ongoing consultation, the European Commission asks: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[21] Unsurprisingly, the FTC has likewise been bullish about the risks posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[22]

Certainly, it stands to reason that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for generative-AI services. After all, it is widely recognised that data is an essential input for generative AI.[23] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind (with systems such as AlphaGo) and Meta’s NLLB-200 have routinely made headlines.[24] Apple and Amazon also have vast experience with AI assistants, and all of these firms use AI technology throughout their platforms.[25]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast troves of data to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[26] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

In these comments, we suggest that there are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have focused largely on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

II. Data-Network Effects Theory and Enforcement

Proponents of more extensive intervention by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[27] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product”.[28] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, e.g., that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition”.[29]

But it is important to note the conceptual problems these claims face. Because data can be used to improve products’ quality and/or to subsidise their use, treating the possession of data as an entry barrier suggests that any product improvement or price reduction made by an incumbent could be a problematic entry barrier to any new entrant. This is tantamount to arguing that competition itself is a cognizable barrier to entry. It would, of course, be a curious approach to antitrust to treat competition as a problem, as it would imply that firms should under-compete—i.e., should forgo consumer-welfare enhancements—in order to cultivate a greater number of firms in a given market, simply for its own sake.[30]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[31] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[32] The authors ultimately conclude that data-network effects can be of differing magnitudes and have varying effects on firms’ incumbency advantage.[33] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users”.[34]

This is echoed by other economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform”.[35] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[36]

This possibility is also implicit in Hagiu and Wright’s paper.[37] Indeed, the authors’ theoretical model rests on an important distinction between within-user data advantages (that is, having access to more data about a given user) and across-user data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have nonetheless been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[38]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents”, the “Furman Report” prepared for the UK government cited only two empirical economic studies, which offer directly contradictory conclusions on the strength of data advantages.[39] Nevertheless, the Furman Report concludes that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely”,[40] and adopts without reservation “convincing” evidence from non-economists that has no apparent empirical basis.[41]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[42]

As a result, the Commission cleared the merger on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[43] The Commission also appears likely to focus on similar issues in its ongoing investigation of Microsoft’s investment in OpenAI.[44]

Along similar lines, in its complaint to enjoin Meta’s purchase of Within Unlimited—makers of the virtual-reality (VR) fitness app Supernatural—the FTC relied on, among other things, the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions”.[45]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s adtech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry”.[46] Similarly, in its Google Search complaint, the agency argues that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[47]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s 2023 guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data”.[48] Likewise, the CMA itself warns against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[49]

In short, competition authorities around the globe have been taking an increasingly aggressive stance on data-network effects. Among the ways this has manifested is in enforcement decisions based on fears that data collected by one platform might confer a decisive competitive advantage in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

III. Data-Incumbency Advantages in Generative-AI Markets

Given the assertions detailed in the previous section, it would be reasonable to assume that firms such as Google, Meta, and Amazon should be in pole position to dominate the burgeoning market for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data, the likes of which their rivals could only have dreamed of when they launched their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools”.[50]

To date, however, this is not how things have unfolded—although it bears noting that these markets remain in flux and the competitive landscape is susceptible to change. The first significantly successful generative-AI service came from neither Meta—which had been working on chatbots for years and had access to what is arguably the world’s largest database of actual chats—nor Google. Instead, the breakthrough came from a previously little-known firm called OpenAI.

OpenAI’s ChatGPT service currently holds an estimated 60% of the market (though reliable numbers are somewhat elusive).[51] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than the previous record holder, TikTok.[52] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[53] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[54] In short, at the time of writing, ChatGPT appears to be the most popular chatbot. The entry of large players such as Google Bard or Meta AI appears to have had little effect thus far on its market position.[55]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[56] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[57]

This raises several crucial questions: how have these AI upstarts managed to be so successful, and is their success just a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either of these questions dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them”.[58]

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[59]

In other words, being the firm with the most data appears to be far less important than having enough data. Moreover, this lower bar may be accessible to far more firms than one might initially think possible. Furthermore, obtaining sufficient data could become easier still—that is, the volume of required data could become even smaller—with technological progress. For instance, synthetic data may provide an adequate substitute for real-world data,[60] or may even outperform it.[61] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[62]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, whereas other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration”.[63]

Consider, for instance, a user who wants to generate an image of a basketball. Using a model trained on an indiscriminate range and number of public photos in which a basketball appears surrounded by copious other image data, the user may end up with an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images could readily yield far superior results.[64] In one important example:

[t]he model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[65]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than maximizing training datasets.[66]

Two points stand out. First, firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[67] Second, the real challenge in creating cutting-edge AI lies not so much in collecting data as in designing innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[68]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be those held by large incumbent platforms in other markets, but rather data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[69]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data often may confer marginal benefits, there is little sense that these benefits are ultimately decisive.[70] As a result, incumbent platforms’ access to vast numbers of users and troves of data in their primary markets might only marginally affect their competitiveness in AI markets.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[71] Examples of this abound in digital markets. Google overtook Yahoo in search, despite initially having access to far fewer users and far less data; Google and Apple overcame Microsoft in the smartphone operating system market, despite having comparatively tiny ecosystems (at the time) to leverage; and TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger user bases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[72] and TikTok’s clever algorithm) appear to have played a far more significant role than initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than with what data it did or did not possess. Going forward, the ability of OpenAI and its rivals to offer and monetise compelling stores for custom versions of their generative-AI technology will arguably play a much larger role than (and contribute to) their ownership of data.[73] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data).

For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[74] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to extract and organise the right information is more important than simply owning vast troves of data.[75] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform.

Indeed, if data ownership consistently conferred a significant competitive advantage, newcomers like OpenAI would not be where they are today. This does not, of course, mean that data is worthless. Rather, it means that competition authorities should not assume that the mere possession of data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm is premature.

IV. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data network effects are not the source of barriers to entry that they are sometimes made out to be. The picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[76]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five main lessons emerge:

  1. Data can be (very) valuable but, beyond a certain threshold, its benefits tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and these effects can swamp those stemming from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, Dall-E, Midjourney, etc.

This failure suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma”,[77] something about the incumbent platforms’ existing services and capabilities was holding them back in those markets. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI market starts addressing issues of monetisation and scale.[78] But it does mean that assumptions about a firm’s market power based on its possession of data are off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative-AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that the most significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms—although nothing is certain at this stage.

Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than pressure originating from smaller firms. This is particularly relevant in the context of merger control. An acquisition (or “acqui-hire”) by a “big tech” company not only entails, in principle, a minor risk of harming competition (it is not a horizontal merger[79]), but could also create a stronger competitor to the current market leaders.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), the remedies under contemplation may do more harm than good. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative-AI markets.[80] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[81]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that leads competition authorities to believe that data-incumbency advantages are likely to harm competition in generative-AI markets—or even in the data-intensive Web 2.0 markets that preceded them. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

 

[1] CMA Seeks Views on AI Partnerships and Other Arrangements, Competition and Markets Authority (24 Apr. 2024), https://www.gov.uk/government/news/cma-seeks-views-on-ai-partnerships-and-other-arrangements.

[2] AI, of course, is not a market (at least not a relevant antitrust market). Within the realm of what is being called “AI”, companies can offer myriad products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[3] OECD, Start-ups, Killer Acquisitions and Merger Control (2020), available at https://web-archive.oecd.org/2020-10-16/566931-start-ups-killer-acquisitions-and-merger-control-2020.pdf.

[4] Kate Rooney & Hayden Field, Amazon Spends $2.75 Billion on AI Startup Anthropic in Its Largest Venture Investment Yet, CNBC (27 Mar. 2024), https://www.cnbc.com/2024/03/27/amazon-spends-2point7b-on-startup-anthropic-in-largest-venture-investment.html.

[5] Id.

[6] Tom Warren, Microsoft Partners with Mistral in Second AI Deal Beyond OpenAI, The Verge (26 Feb. 2024), https://www.theverge.com/2024/2/26/24083510/microsoft-mistral-partnership-deal-azure-ai.

[7] Mark Sullivan, Microsoft’s Inflection AI Grab Likely Cost More Than $1 Billion, Says An Insider (Exclusive), Fast Company (26 Mar. 2024), https://www.fastcompany.com/91069182/microsoft-inflection-ai-exclusive; see also, Mustafa Suleyman, DeepMind and Inflection Co-Founder, Joins Microsoft to Lead Copilot, Microsoft Corporate Blogs (19 Mar. 2024), https://blogs.microsoft.com/blog/2024/03/19/mustafa-suleyman-deepmind-and-inflection-co-founder-joins-microsoft-to-lead-copilot; Krystal Hu & Harshita Mary Varghese, Microsoft Pays Inflection $650 Mln in Licensing Deal While Poaching Top Talent, Source Says, Reuters (21 Mar. 2024), https://www.reuters.com/technology/microsoft-agreed-pay-inflection-650-mln-while-hiring-its-staff-information-2024-03-21; The New Inflection: An Important Change to How We’ll Work, Inflection (19 Mar. 2024), https://inflection.ai/the-new-inflection; Julie Bort, Here’s How Microsoft Is Providing a ‘Good Outcome’ for Inflection AI VCs, as Reid Hoffman Promised, TechCrunch (21 Mar. 2024), https://techcrunch.com/2024/03/21/microsoft-inflection-ai-investors-reid-hoffman-bill-gates.

[8] See Rana Foroohar, The Great US-Europe Antitrust Divide, Financial Times (5 Feb. 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14 (quoting FTC Chair Lina Khan “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI”).

[9] See, e.g., Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1279, 1294 (2021). (“But while these increasing returns can cause markets to become more concentrated, they also imply that it is often more efficient to have a single firm serve the entire market. For instance, to a first approximation, network effects, which are one potential source of increasing returns, imply that it is more valuable-not just to the platform, but to the users themselves-for all users to be present on the same network or platform. In other words, fragmentation—de-concentration—may be more of a problem than monopoly in markets that exhibit network effects and increasing returns to scale. Given this, it is far from clear that antitrust authorities should try to prevent consolidation in markets that exhibit such characteristics, nor is it self-evident that these markets somehow produce less consumer surplus than markets that do not exhibit such increasing returns”.)

[10] Javier Espinoza & Leila Abboud, EU’s New AI Act Risks Hampering Innovation, Warns Emmanuel Macron, Financial Times (11 Dec. 2023), https://www.ft.com/content/9339d104-7b0c-42b8-9316-72226dd4e4c0.

[11] See, e.g., Michael Chui, et al., The Economic Potential of Generative AI: The Next Productivity Frontier, McKinsey (14 Jun. 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-AI-the-next-productivity-frontier.

[12] See, e.g., Zhuoran Qiao, Weili Nie, Arash Vahdat, Thomas F. Miller III, & Animashree Anandkumar, State-Specific Protein–Ligand Complex Structure Prediction with a Multiscale Deep Generative Model, 6 Nature Machine Intelligence, 195-208 (2024); see also Jaemin Seo, Sang Kyeun Kim, Azarakhsh Jalalvand, Rory Conlin, Andrew Rothstein, Joseph Abbate, Keith Erickson, Josiah Wai, Ricardo Shousha, & Egemen Kolemen, Avoiding Fusion Plasma Tearing Instability with Deep Reinforcement Learning, 626 Nature, 746-751 (2024).

[13] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (24 Sep. 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[14] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23 (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively”.); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (1 Oct. 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors”.).

[15] See, generally, Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (6 Nov. 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010 (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry”.).

[16] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (9 Jan. 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (3 Nov. 2020), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence”.) (emphasis added).

[17] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked”.); Bertin Martins, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets”.); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (24 Feb. 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[18] See Foroohar, supra note 8.

[19] See, e.g., Press Release, European Commission, supra note 16.

[20] See infra, Section II. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (15 Jan. 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has LLaMa. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed”.).

[21] Press Release, European Commission, supra note 16.

[22] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (30 Oct. 2023), at 4, https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[23] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (15 Sep. 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI”.).

[24] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (11 Aug. 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; see also, e.g., Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (14 Dec. 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set; see also, A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (30 Nov. 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research; see also, 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation (last visited 18 Jan. 2023).

[25] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (4 Oct. 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (20 Nov. 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; see also, Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (19 July 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon.

[26] See infra Section III.

[27] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[28] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., 11 Nov. 2020) at 233, https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, http://wrap.warwick.ac.uk/134220 (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user”.); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (6 Feb. 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[29] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable”.).

[30] See also Yun, supra note 28 at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product”.).

[31] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects) see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo Mason L. Rev. 1281, 1344 (2021).

[32] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[33] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms…”.

[34] Id.

[35] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 10080, 101031 (2021).

[36] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Pol’y 143, 167 (2020).

[37] See Hagiu & Wright, supra note 32.

[38] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf; see also Manne & Auer, supra note 31, at 1330.

[39] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[40] Id. at 34.

[41] Id. at 35. To its credit, it should be noted, the Furman Report does counsel caution before mandating access to data as a remedy to promote competition. See id. at 75. That said, the Furman Report maintains that such a remedy should certainly be on the table because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market”. Id. In fact, the evidence does not show this.

[42] Case COMP/M.9660 — Google/Fitbit, Commission Decision (17 Dec. 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf at 455.

[43] Id. at 896.

[44] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (9 Jan. 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules.

[45] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[46] Amended Complaint (D.D.C), supra note 15 at ¶37.

[47] Amended Complaint (E.D. Va), supra note 15 at ¶8.

[48] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[49] Merger Assessment Guidelines, Competition and Mkts. Auth. (2021) at ¶7.19(e), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[50] Furman Report, supra note 39, at ¶4.

[51] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (16 Nov. 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c.

[52] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (2 Feb. 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01; Google: The AI Race Is On, App Economy Insights (7 Feb. 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[53] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited 12 Jan. 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited 12 Jan. 2024).

[54] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (5 Jun. 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard.

[55] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (27 Sep. 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (6 Feb. 2023), https://blog.google/technology/ai/bard-google-ai-search-updates.

[56] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (19 Aug. 2023), https://yon.fun/midjourney-statistics; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (13 Oct. 2023), https://approachableai.com/midjourney-statistics.

[57] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (12 Oct. 2023), https://blog.google/products/search/google-search-generative-ai-october-update; Imagine with Meta AI, Meta (last visited 12 Jan. 2024), https://imagine.meta.com.

[58] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[59] Manne & Auer, supra note 31, at 1345.

[60] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (3 Mar. 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models”.).

[61] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (20 Nov. 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods”.).

[62] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[63] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited 15 Jan. 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[64] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (27 Sep. 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality”.); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (28 Sep. 2023), https://arxiv.org/abs/2309.16671.

[65] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (26 Oct. 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[66] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (8 Dec. 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project.

[67] Id.; see also GSM8K, Papers with Code (last visited 18 Jan. 2023), available at https://paperswithcode.com/dataset/gsm8k; MATH Dataset, GitHub (last visited 18 Jan. 2024), available at https://github.com/hendrycks/math.

[68] Lee, supra note 66.

[69] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (26 Mar. 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (26 Aug. 2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780.).

[70] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight”.).

[71] Or, as John Yun put it, data is only a small component of digital firms’ production function. See Yun, supra note 28, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality”.).

[72] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History-Computer (8 Aug. 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly.

[73] Introducing the GPT Store, OpenAI (10 Jan. 2024), https://openai.com/blog/introducing-the-gpt-store.

[74] See Michael Schade, How ChatGPT and Our Language Models are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (1 Jan. 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021; Danny Hernandez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (8 May 2020), https://arxiv.org/abs/2005.04305.

[75] See Yun, supra note 28, at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[76] Lerner, supra note 69, at 4-5 (emphasis added).

[77] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[78] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[79] Antitrust merger enforcement has long assumed that horizontal mergers are more likely than vertical mergers to cause problems for consumers. See Geoffrey A. Manne, Dirk Auer, Brian Albrecht, Eric Fruits, Daniel J. Gilman, & Lazar Radic, Comments of the International Center for Law and Economics on the FTC & DOJ Draft Merger Guidelines (18 Sep. 2023), https://laweconcenter.org/resources/comments-of-the-international-center-for-law-and-economics-on-the-ftc-doj-draft-merger-guidelines.

[80] See Hagiu & Wright, supra note 32, at 32 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less”.); see also Lerner, supra note 69.

[81] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy”.); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive”.).

Continue reading
Antitrust & Consumer Protection

Do We Need a ‘New Strategy Paradigm’? No

Scholarship Abstract Bansal et al.’s Point piece, “Strategy’s Ecological Fallacy: How strategy scholars have contributed to the ecological crisis and what we can do about it,” . . .

Abstract

Bansal et al.’s Point piece, “Strategy’s Ecological Fallacy: How strategy scholars have contributed to the ecological crisis and what we can do about it,” calls for reforming the strategy field to focus on the natural environment, ecological cycles, and interconnections across natural and social levels, in service of value creation for ‘a defined ecosystem that comprises respect for the natural environment’. We doubt that such new foundations are necessary or useful. We argue that Bansal et al. misconstrue the evolution and content of strategy thinking; downplay the usefulness of existing tools for dealing with their issues of concern; overlook problems of measurement, collective action, government failure, and cronyism encouraged by their preferred policies; embrace an unnecessarily alarmist worldview; and underappreciate the social benefits of the market-based institutions they criticize. We suggest instead that a market system based on clearly delineated property rights, prices that freely adjust to reflect scarcities, and an institutional environment that encourages entrepreneurship and innovation remains an underappreciated instrument for protection of the natural environment, one that is superior to centralized and regulatory alternatives.

Continue reading
Financial Regulation & Corporate Governance

Google Previews the Coming Tussle Between GDPR and DMA Article 6(11)

TOTM Among the less-discussed requirements of the European Union’s Digital Markets Act (DMA) is the data-sharing obligation created by Article 6(11). This provision requires firms designated . . .

Among the less-discussed requirements of the European Union’s Digital Markets Act (DMA) is the data-sharing obligation created by Article 6(11). This provision requires firms designated under the law as “gatekeepers” to share “ranking, query, click and view data” with third-party online search engines, while ensuring that any personal data is anonymized.

Given how restrictively the notion of “anonymization” has been interpreted under the EU’s General Data Protection Regulation (GDPR), the DMA creates significant tension without pointing to a clear resolution. Sophie Stalla-Bourdillon and Bárbara da Rosa Lazarotto recently published a helpful analysis of the relevant legal questions on the European Law Blog. In this post, I will examine Google’s proposed solution.

Read the full piece here.

Continue reading
Data Security & Privacy

Against the ‘Europeanization’ of California’s Antitrust Law

Regulatory Comments We are grateful for the opportunity to respond to the California Law Revision Commission’s Study of Antitrust Law with these comments on the Single-Firm Conduct . . .

We are grateful for the opportunity to respond to the California Law Revision Commission’s Study of Antitrust Law with these comments on the Single-Firm Conduct Working Group’s report (the “Expert Report”).[1]

The International Center for Law & Economics (ICLE) is a nonprofit, nonpartisan global research and policy center based in Portland, Oregon. ICLE was founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates, and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedents, a record of evidence, and sound economic analysis.[2]

I. Introduction

The urge to treat antitrust as a legal Swiss Army knife—capable of correcting all manner of economic and social ills—is difficult to resist. Recent calls for the regulation of large businesses often conflate size with market power, and market power with political power; they are framed in antitrust terms, but rarely are rooted in cognizable legal claims or sound economic analysis.

But precisely because antitrust is such a powerful regulatory tool, we should be cautious about its scope, process, and economics, as well as its politicization. For the last 50 or so years, U.S. law has maintained a position of relative restraint in the face of novel, ambiguous conduct, while many other jurisdictions (particularly the European Union) have tended to read uncertainty as the outward expression of a lurking threat. This has led to a sharp policy divergence in the area of competition policy, with the EU passing the Digital Markets Act,[3] while the United States has, to date, continued to rely on tried-and-tested principles crafted by courts over years on a case-by-case basis.

Despite—or perhaps because of—this divergence, many advocates of more aggressive antitrust intervention assert that the United States or individual states should emulate the EU’s approach. This disposition underpins much of the California Law Revision Commission’s Report on Single Firm Conduct.[4] Despite some reassuring conclusions—such as the recognition that “protecting competing businesses, even at the expense of consumers and workers” would not “provide a good model for California”[5]—the policies that the report proposes would significantly broaden California antitrust law, bringing it much closer to the European model of competition enforcement than the U.S. one.

Unfortunately, this European-inspired approach to competition policy is unlikely to serve the interests of California consumers. As explained below, the European model of competition enforcement has at least three features that tend to chill efficient business conduct, with few competitive benefits in return (relative to the U.S. approach).

A. ‘Precautionary Principle’ vs Error-Cost Framework

Differentiating pro- from anticompetitive conduct has always been the central challenge of antitrust. When the very same conduct can either benefit or harm consumers, depending on complex and often unknowable circumstances, the potential cost of overenforcement is at least as substantial as the cost of underenforcement.

The U.S. Supreme Court has repeatedly recognized that the cost of “false positive” errors might be greater than those attributable to “false negatives” because, in the words of Judge Frank Easterbrook, “the economic system corrects monopoly more readily than it corrects judicial errors.”[6] The EU’s “precautionary principle” approach is the antithesis of this. It is rooted in a belief that markets are generally unlikely to function well, and certainly are not better at mitigating harm than technocratic regulatory intervention.

The key question is whether, given the limits of knowledge and the errors that such limits may engender, consumers are better off with a more discretionary regime or one in which enforcement is limited to causes of action that policymakers are fairly certain will serve consumer interests. This is a question about changes at the margin, but it is far from marginal in its significance. As we explain below, the U.S. approach to antitrust law performs better in this respect. Departing from it would not benefit California consumers.

B. Presumptions vs Effects-Based Analysis

EU antitrust rests heavily on presumptions of harm, while U.S. courts require plaintiffs to demonstrate that the conduct at issue actually has anticompetitive effects.

Crucially, the U.S. approach is more consistent with the lessons of modern economics, which almost universally counsel against presuming competitive harm on the basis of industry structure and, in particular, in favor of presuming benefit from vertical conduct. Indeed, the EU approach often disregards these findings and presumes the contrary. As evidenced by its recent Intel decision, even the EU’s highest court has finally recognized the paucity of the European Commission’s analysis in this area. But because judicial review of antitrust decisions in the EU is so attenuated, it is not clear if the high court’s admonition will actually affect the Commission’s approach in any substantial way.

California policymakers would be wrong to emulate the European model by introducing more presumptions to California antitrust law.

C. Extraction of Rents vs Extension of Monopoly

U.S. monopolization law prohibits only predatory or exclusionary conduct that results in harm to consumers. The EU, by contrast, also regularly punishes the mere possession of monopoly power, even where lawfully obtained. Indeed, the EU goes so far as to target companies that may lack monopoly power, but merely possess an innovative and successful business model. For example, in actions involving companies ranging from soda manufacturers to digital platforms, the EU repeatedly has required essential-facilities-style access to companies’ private property for less-successful rivals.

As we explain below, the Expert Report essentially calls on California lawmakers to replicate the European model by seeking to protect even those competitors that are less efficient, thus challenging the very existence of legitimately earned monopolies. Unfortunately, this approach would diminish the incentives to create successful businesses in the first place. Such an outcome would be particularly unfortunate for California, which is host to arguably the most vibrant startup ecosystem in the world.

D. The Danger of the European Approach

In endorsing the European approach to antitrust in order to justify high-profile cases against large firms, California would effectively be prioritizing political expediency over the rule of law and consumer well-being.

The risk of an EU-like approach in California is that it would thwart technological progress and enshrine mediocrity. This is particularly true in the digital economy, where innovative practices with positive welfare effects—such as building efficient networks or improving products and services as technologies and consumer preferences evolve—are often the subject of demagoguery, especially from inefficient firms looking for a regulatory leg up.

While advocates for a more European approach to antitrust assert that their proposals would improve economic conditions in California (and the United States, more generally), economic logic and the available evidence suggest otherwise, especially in technology markets.

Once antitrust is expanded beyond its economic constraints, it ceases to be a uniquely valuable tool to address real economic harms to consumers, and becomes instead a tool for evading legislative and judicial constraints. This is hardly the promotion of democratic ideals that proponents of a more EU-like regime claim to desire.

In the following sections, we expand upon these distinctions between EU and U.S. law and explain how elements of the Expert Report’s analysis and proposed statutory language would shift California’s antitrust law toward the EU model in problematic ways. We urge the California Law Revision Commission to consider not just whether emulating the EU approach would permit the state to reach a preconceived outcome—i.e., placing large firms under increased antitrust scrutiny—but whether doing so would ultimately benefit California and its consumers.

II. The EU ‘Precautionary Principle’ Approach vs the US Error-Cost Framework

The U.S. Supreme Court has repeatedly recognized the limitations that courts face in distinguishing between pro- and anticompetitive conduct in antitrust cases, and particularly the risk this creates of reaching costly false-positive (Type I) decisions in monopolization cases.[7] As the Court has noted with respect to the expansion of liability for single-firm conduct, in particular:

Against the slight benefits of antitrust intervention here, we must weigh a realistic assessment of its costs…. Mistaken inferences and the resulting false condemnations “are especially costly because they chill the very conduct the antitrust laws are designed to protect.” The cost of false positives counsels against an undue expansion of § 2 liability.[8]

The Court has also expressed the view—originally laid out in Judge Frank Easterbrook’s seminal article “The Limits of Antitrust”—that the costs to consumers arising from Type I errors are likely greater than those attributable to Type II errors, because “the economic system corrects monopoly more readily than it corrects judicial errors.”[9]

The EU’s more “precautionary” approach to antitrust policy is the antithesis of this.[10] It is rooted in a belief that markets do not—or, more charitably, are unlikely to—function well in general, and certainly not sufficiently to self-correct in the face of monopolization.

While the precautionary principle may generally prevent certain fat-tailed negative events,[11] these potential benefits come, almost by definition, at the expense of short-term growth.[12] Adopting a precautionary approach is thus a costly policy stance in those circumstances where it is not clearly warranted by underlying risk and uncertainty. This is an essential issue for a state like California, whose economy is so reliant on the continued growth and innovation of its vibrant startup ecosystem.

While it is impossible to connect broad macroeconomic trends conclusively to specific policy decisions, it does seem clear that Europe’s overarching precautionary approach to economic regulation has not served it well.[13] In that environment, the EU’s economic performance has fallen significantly behind that of the United States.[14] “[I]n 2010 US GDP per capita was 47 percent larger than the EU while in 2021 this gap increased to 82 percent. If the current trend of GDP per capita carries forward, in 2035, the average GDP per capita in the US will be $96,000 while the average EU GDP per capita will be $60,000.”[15]

Of course, no one believes that markets are perfect, or that antitrust enforcement can never be appropriate. The question is the marginal, comparative one: Given the realities of politics, economics, the limits of knowledge, and the errors to which they can lead, which imperfect response is preferable at the margin? Or, phrased slightly differently, should we give California antitrust enforcers and private plaintiffs more room to operate, or should we continue to cabin their operation in careful, economically grounded ways, aimed squarely at optimizing—not minimizing—the extent of antitrust enforcement?

This may be a question about changes at the margin, but it is far from marginal. It goes to the heart of the market’s role in the modern economy.

While there are many views on this subject, arguments that markets have failed us in ways that more antitrust would correct are poorly supported.[16] We should certainly continue to look for conditions where market failures of one kind or another may justify intervention, but we should not make policy on the basis of mere speculation. And we should certainly not do so without considering the likelihood and costs of regulatory failure, as well. In order to reliably adopt a sound antitrust policy that might improve upon the status quo (which has evolved over a century of judicial decisions, generally alongside the field’s copious advances in economic understanding), we need much better information about the functioning of markets and the consequences of regulatory changes than is currently available.

To achieve this, antitrust law and enforcement policy should, above all, continue to adhere to the error-cost framework, which informs antitrust decision making by considering the relative costs of mistaken intervention compared with mistaken nonintervention.[17] Specific cases should be addressed as they come, with an implicit understanding that, especially in digital markets, precious few generalizable presumptions can be inferred from previous cases. The overall stance should be one of restraint, reflecting the state of our knowledge.[18] We may well be able to identify anticompetitive harms in certain cases, and when we do, we should enforce the current laws. But we should not overestimate our ability to fine-tune market outcomes without causing more harm than benefit.

Allegations that the modern antitrust regime is insufficient take as a given that there is something wrong with antitrust doctrine or its enforcement, and cast about for policy “corrections.” The common flaw with these arguments is that they are not grounded in robust empirical or theoretical support. Indeed, as one of the influential papers that (ironically) is sometimes cited to support claims for more antitrust puts it:

An alternative perspective on the rise of [large firms and increased concentration] is that they reflect a diminution of competition, due to weaker U.S. antitrust enforcement. Our findings on the similarity of trends in the United States and Europe, where antitrust authorities have acted more aggressively on large firms, combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may be important in some industries.[19]

Rather, such claims are little more than hunches that something must be wrong, conscripted to serve a presumptively interventionist agenda. Because they are merely hypotheses about things that could go wrong, they do not determine—and rarely even ask—if heightened antitrust scrutiny and increased antitrust enforcement are actually called for in the first place. The evidence strongly contradicts the basis for these hunches.

Critics of U.S. competition policy sometimes contend that markets have become more concentrated and thus less competitive.[20] But there are good reasons to be skeptical of the national-concentration and market-power data.[21] Even more importantly, the narrative that purports to find a causal relationship between these data and reduced competition is almost certainly incorrect.

Competition rarely takes place in national markets; it takes place in local markets. Recent empirical work demonstrates that national measures of concentration do not reflect market structures at the local level.[22] To the extent that national-level firm concentration may be growing, these trends are actually driving increased competition and decreased concentration at the local level, which is typically what matters for consumers:

Put another way, large firms have materially contributed to the observed decline in local concentration. Among industries with diverging trends, large firms have become bigger but the associated geographic expansion of these firms, through the opening of more plants in new local markets, has lowered local concentration thus suggesting increased local competition.[23]

The rise in national concentration is predominantly a function of more efficient firms competing in more—and more localized—markets. Thus, rising national concentration, where it is observed, is a result of increased productivity and competition that weed out less-efficient producers. Indeed, as one influential paper notes:

[C]oncentration increases do not correlate to price hikes and correspond to increased output. This implies that oligopolies are related to an offsetting and positive force—these oligopolies are likely due to technical innovation or scale economies. My data suggest that increases in market concentration are strongly correlated with innovations in productivity.[24]

Another important paper finds that this dynamic is driven by top firms bringing productivity increases to smaller markets, to the substantial (and previously unmeasured) benefit of consumers:

US firms in service industries increasingly operate in more local markets. Employment, sales, and spending on fixed costs have increased rapidly in these industries. These changes have favored top firms, leading to increasing national concentration. Top firms in service industries have grown by expanding into new local markets, predominantly small and mid-sized US cities. Market concentration at the local level has decreased in all US cities, particularly in cities that were initially small. These facts are consistent with the availability of new fixed-cost-intensive technologies that yield lower marginal costs in service sectors. The entry of top service firms into new local markets has led to substantial unmeasured productivity growth, particularly in small markets.[25]

Similar results hold for labor-market effects. According to one recent study, while the labor-market power of firms appears to have increased:

labor market power has not contributed to the declining labor share. Despite the backdrop of stable national concentration, we… find that [local labor-market concentration] has declined over the last 35 years. Most local labor markets are more competitive than they were in the 1970s.[26]

In short, it is inappropriate to draw conclusions about the strength of competition and the efficacy of antitrust laws from national-concentration measures. This is a view shared by many economists from across the political spectrum. Indeed, one of the Expert Report’s authors, Carl Shapiro, has raised these concerns regarding the national-concentration data:

[S]imply as a matter of measurement, the Economic Census data that are being used to measure trends in concentration do not allow one to measure concentration in relevant antitrust markets, i.e., for the products and locations over which competition actually occurs. As a result, it is far from clear that the reported changes in concentration over time are informative regarding changes in competition over time.[27]

It appears that overall competition is increasing, not decreasing, whether it is accompanied by an increase in national concentration or not.

A. The Expert Report’s Treatment of Error Costs

Implicitly shunning the evidence that demonstrates markets have become more, not less, competitive, the Expert Report proposes that California adopt a firm stance in favor of false positives over false negatives—in other words, that it tolerate erroneously condemning procompetitive behavior in exchange for avoiding the risk of erroneously accepting anticompetitive conduct:

Whereas the policy of California is that the public is best served by competition and the goal of the California antitrust laws is to promote and protect competition throughout the State, in interpreting this Section courts should bear in mind that the policy of California is that the risk of under-enforcement of the antitrust laws is greater than the risk of over-enforcement.[28]

Of course, it is possible that, in some markets, there are harms being missed and for which enforcers should be better equipped. But advocates of reform have yet to adequately explain much of what we need to know to make such a determination, let alone craft the right approach to it if we did. Antitrust law should be refined based on an empirical demonstration of harms, as well as a careful weighing of those harms against the losses to social welfare that would arise if procompetitive conduct were deterred alongside anticompetitive conduct.

Dramatic new statutes to undo decades of antitrust jurisprudence or reallocate burdens of proof with the stroke of a pen are unjustified. Suggesting, as the Expert Report does, that antitrust law should simply “err on the side of enforcement when the effect of the conduct at issue on competition is uncertain”[29] is an unsupported statement of a political preference, not one rooted in sound economics or evidence.

The primary evidence adduced to support the claim that underenforcement (and thus, the risk of Type II errors) is more significant than overenforcement (and thus, the risk of Type I errors) is that there are not enough cases brought and won. But even if superficially true, this is, on its own, just as consistent with a belief that the regime is functioning well as it is with a belief that it is functioning poorly. Indeed, as one of the Expert Report’s authors has pointed out:

Antitrust law [] has a widespread effect on business conduct throughout the economy. Its principal value is found, not in the big litigated cases, but in the multitude of anticompetitive actions that do not occur because they are deterred by the antitrust laws, and in the multitude of efficiency-enhancing actions that are not deterred by an overbroad or ambiguous antitrust.[30]

At the same time, some critics (including another of the Expert Report’s authors) contend that a heightened concern for Type I errors stems from the faulty belief that “type two errors… are not really problematic because the market itself will correct the situation,” instead asserting that “it is economically naïve to assume that markets will naturally tend toward competition.”[31]

Judge Easterbrook’s famous argument for enforcement restraint is not based on the assertion that markets are perfectly self-correcting. Rather, his claim is that the (undeniable) incentive of new entrants to compete for excess profits in monopolized markets operates to limit the social costs of Type II errors more effectively than the legal system’s ability to correct or ameliorate the costs of Type I errors. The logic is quite simple, and not dependent on the strawman notion that markets are perfect:

If the court errs by condemning a beneficial practice, the benefits may be lost for good. Any other firm that uses the condemned practice faces sanctions in the name of stare decisis, no matter the benefits. If the court errs by permitting a deleterious practice, though, the welfare loss decreases over time. Monopoly is self-destructive. Monopoly prices eventually attract entry. True, this long run may be a long time coming, with loss to society in the interim. The central purpose of antitrust is to speed up the arrival of the long run. But this should not obscure the point: judicial errors that tolerate baleful practices are self-correcting while erroneous condemnations are not.[32]

Moreover, anticompetitive conduct that is erroneously excused may be subsequently corrected, either by another enforcer, a private litigant, or another jurisdiction. Ongoing anticompetitive behavior will tend to arouse someone’s ire: competitors, potential competitors, customers, input suppliers. That means such behavior will be noticed and potentially brought to the attention of enforcers. And for the same reason—identifiable harm—it may also be actionable.

By contrast, procompetitive conduct that does not occur because it is prohibited or deterred by legal action has no constituency and no visible evidence on which to base a case for revision. Nor does a firm improperly deterred from procompetitive conduct have any standing to sue the government for erroneous antitrust enforcement, or the courts for adopting an improper standard. Of course, overenforcement can sometimes be corrected, but the institutional impediments to doing so are formidable.

The claim that concern for Type I errors is overblown further rests on the assertion that “more up-to-date economic analysis” has undermined that position.[33] But that learning is, for the most part, entirely theoretical—constrained to “possibility theorems” divorced from realistic complications and the real institutional settings of decision making. Indeed, the proliferation of these theories may actually increase, rather than decrease, uncertainty by further complicating the analysis and asking generalist judges to choose from among competing theories, without any realistic means to do so.[34]

Unsurprisingly, “[f]or over thirty years, the economics profession has produced numerous models of rational predation. Despite these models and some case evidence consistent with episodes of predation, little of this Post-Chicago School learning has been incorporated into antitrust law.”[35] Nor is it likely that the courts are making an erroneous calculation in the abstract. Evidence of Type I errors is hard to come by, but for a wide swath of conduct called into question by “Post-Chicago School” and other theories, the evidence of systematic problems is virtually nonexistent.[36]

Moreover, contrary to the Expert Report’s implications,[37] U.S. antitrust law has not ignored potentially anticompetitive harm, and courts are hardly blindly deferential to conduct undertaken by large firms. It is impossible to infer from the general “state of the world” or from perceived “wrong” judicial decisions that the current antitrust regime has failed or that California, in particular, would benefit from a wholesale shifting of its antitrust error-cost presumptions.[38]

III. The Reliance on Presumptions vs the Demonstration of Anticompetitive Effects

While U.S. antitrust law generally requires a full-blown, effects-based analysis of challenged behavior—particularly in the context of unilateral conduct (monopolization or abuse of dominance) and vertical restraints—the EU continues to rely heavily on presumptions of harm or extremely truncated analysis. Even the EU’s highest court has finally recognized the paucity of the European Commission’s analysis in this area in its recent Intel decision.[39]

The degree to which the United States and EU differ with respect to their reliance on presumptions in antitrust cases is emblematic of a broader tendency of the U.S. regime to adhere to economic principles, while the EU tends to hold such principles in relative disregard. The U.S. approach is consistent with the lessons of modern economics, which almost universally counsel against presuming competitive harm on the basis of industry structure—particularly from the extent of concentration in a market. Indeed, as one of the Expert Report’s own authors has argued, “there is no well-defined ‘causal effect of concentration on price,’ but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration.”[40]

Concerns about excessive concentration are at the forefront of current efforts to expand antitrust enforcement, including through the use of presumptions. There is no reliable empirical support for claims either that concentration has been increasing, or that it necessarily leads to, or has led to, increased market power and the economic harms associated with it.[41] There is even less support for claims that concentration leads to the range of social ills ascribed to it by advocates of “populist” antitrust. Similarly, there is little evidence that the application of antitrust or related regulation to more vigorously prohibit, shrink, or break up large companies will correct these asserted problems.

Meanwhile, economic theory, empirical evidence, and experience all teach that vertical restraints—several of which would be treated more harshly under the Expert Report’s recommendations[42]—rarely harm competition. Indeed, they often benefit consumers by reducing costs, better distributing risk, better informing and optimizing R&D activities and innovation, better aligning manufacturer and distributor incentives, lowering price, increasing demand through the inducement of more promotional services, and/or creating more efficient distribution channels.

As former Federal Trade Commission (FTC) Bureau of Economics Director Francine Lafontaine explained in summarizing the body of economic evidence analyzing vertical restraints: “it appears that when manufacturers choose to impose [vertical] restraints, not only do they make themselves better off but they also typically allow consumers to benefit from higher quality products and better service provision.”[43] A host of other studies corroborate this assessment.[44] As one of these studies notes, while “some studies find evidence consistent with both pro- and anticompetitive effects… virtually no studies can claim to have identified instances where vertical practices were likely to have harmed competition.”[45] Similarly, “in most of the empirical studies reviewed, vertical practices are found to have significant pro-competitive effects.”[46]

At the very least, we remain profoundly uncertain of the effects of vertical conduct (particularly in the context of modern high-tech and platform industries), with the proviso that most of what we know suggests that this conduct is good for consumers. But even that worst-case version of our state of knowledge is inconsistent with the presumptions-based approach taken by the EU.

Adopting a presumptions-based approach without a firm economic basis is far more hostile to novel business conduct, especially in the innovative markets that distinguish California’s economy. EU competition policy errs on the side of condemning novel conduct, deterring beneficial business activity that consumers would be better served by authorities first seeking to understand. This is not something California should emulate.

A. The Expert Report’s Quantification of Anticompetitive Harm and Causation

European competition law imposes a much less strenuous burden on authorities to quantify anticompetitive harm and establish causation than does U.S. law. This makes European competition law much more prone to false positives that condemn efficiency-generating or innovative firm behavior. The main cause of these false positives is the failure of the EU’s “competitive process” standard to separate competitive from anticompetitive exclusionary conduct.

While the Expert Report rightly recognizes that adopting an abuse-of-dominance standard (similar to that which exists in Europe) would be misguided, its proposed focus on “competitive constraints,” rather than consumer welfare, would effectively bring California antitrust enforcement much closer to the EU model.[47]

At the same time, the Expert Report counsels adopting a “material-risk-of-harm” standard, which is foreign to U.S. antitrust law:

(e) Anticompetitive exclusionary conduct includes conduct that has or had a material risk of harming trading partners due to increased market power, even if those harms have not yet arisen and may not materialize.[48]

While such a standard exists in U.S. standing jurisprudence,[49] antitrust plaintiffs (and private plaintiffs, in particular) must typically meet a higher bar to prove actual antitrust injury.[50] Moreover, the focus is generally on output restriction, rather than the risk of “harm” to a trading partner:

The government must show conduct that reasonably seems capable of causing reduced output and increased prices by excluding a rival. The private plaintiff must additionally show an actual effect producing an injury in order to support a damages action or individually threatened harm to support an injunction. The required private effect could be either a higher price which it paid, or lost profits from market exclusion.[51]

Again, this is a fairly concrete application of the error-cost framework: Lowering the standard of proof required to establish liability increases the risk of false positives and decreases the risk of false negatives. But particularly in California—where so much of the state’s economic success is built on industries characterized by large companies with substantial procompetitive economies of scale and network effects, novel business models, and immense technological innovation—the risk of erroneous condemnation is substantial, and the potential costs significant.

Further, defining antitrust harm in terms of “conduct [that] tends to… diminish or create a meaningful risk of diminishing the competitive constraints imposed by the defendant’s rivals”[52] substantially opens the door to the risk that procompetitive conduct could be enjoined. For example, such an approach would seem at odds with the concept of antitrust injury for private plaintiffs established by the Supreme Court’s Brunswick case.[53] “Competitive constraints” may “tend” to be reduced, as in Brunswick, by perfectly procompetitive conduct; enshrining such a standard would not serve California’s economic interests.

Similarly, the Expert Report’s proposed statutory language includes a provision that would infer not only causation but also the existence of harm from ambiguous conduct:

5) In cases where the trading partners are customers…, it is not necessary for the plaintiff to specify the precise nature of the harm that might be experienced in the future or to quantify with specificity any particular past harm. It is sufficient for the plaintiff to establish a significant weakening of the competitive constraints facing the defendant, from which such harms to direct or indirect customers can be presumed.[54]

The Microsoft case similarly held that plaintiffs need not quantify injury with specificity because “neither plaintiffs nor the court can confidently reconstruct a product’s hypothetical technological development in a world absent the defendant’s exclusionary conduct.”[55] But Microsoft permits the inference only of causation in such circumstances, not the existence of anticompetitive conduct. Most of the decision was directed toward identifying and assessing the anticompetitiveness of the alleged conduct. Inference is permitted only with respect to causation—to the determination that such conduct was reasonably likely to lead to harm by excluding specific (potential) competitors. Establishing merely a “weakening of the competitive constraints facing the defendant,” by contrast, does not permit an inference of anticompetitiveness.

Such an approach is much closer to the European standard of maintaining a system of “undistorted competition.” European authorities generally operate under the assumption that “competitive” market structures ultimately lead to better outcomes for consumers.[56] This contrasts with American antitrust enforcement, which, by pursuing a strict consumer-welfare goal, systematically looks at the actual impact of a practice on economic parameters, such as prices and output.

In other words, European competition enforcement assumes that concentrated market structures likely lead to poor outcomes and thus sanctions them, whereas U.S. antitrust law looks systematically into the actual effects of a practice. The main consequence of this distinction is that, compared to the United States, European competition law has established a wider set of per se prohibitions (which are not discussed in the Expert Report) and sets a lower bar for plaintiffs to establish the existence of anticompetitive conduct (which the Expert Report recommends California policymakers emulate).[57] Because of this lower evidentiary threshold, EU competition decisions are also subject to less-stringent judicial review.

The EU’s competitive-process standard is similar to the structuralist analysis that was popular in the United States through the middle of the 20th century. This view of antitrust led U.S. enforcers frequently to condemn firms merely for growing larger than some arbitrary threshold, even when those firms engaged in conduct that, on net, benefited consumers. While EU enforcers often claim to be pursuing a consumer-welfare standard, and to adhere to rigorous economic analysis in their antitrust cases,[58] much of their actual practice tends to engage in little more than a window-dressed version of the outmoded structuralist analysis that U.S. scholars, courts, and enforcers roundly rejected in the latter half of the 20th century.

To take one important example, a fairly uncontroversial requirement for antitrust intervention is that a condemned practice should actually—or be substantially likely to—foster anticompetitive harm. Even in Europe, whatever other goals competition law is presumed to further, it is nominally aimed at protecting competition rather than competitors.[59] Accordingly, the mere exit of competitors from the market should be insufficient to support liability under European competition law in the absence of certain accompanying factors.[60] And yet, by pursuing a competitive-process goal, European competition authorities regularly conflate desirable and undesirable forms of exclusion precisely on the basis of their effect on competitors.

As a result, the Commission routinely sanctions exclusion that stems from an incumbent’s superior efficiency rather than from welfare-reducing strategic behavior,[61] and routinely protects inefficient competitors that would otherwise rightly be excluded from a market. As Pablo Ibanez Colomo puts it:

It is arguably more convincing to question whether the principle whereby dominant firms are under a general duty not to discriminate is in line with the logic and purpose of competition rules. The corollary to the idea that it is prima facie abusive to place rivals at a disadvantage is that competition must take place, as a rule, on a level playing field. It cannot be disputed that remedial action under EU competition law will in some instances lead to such an outcome.[62]

Unfortunately, the Expert Report’s repeated focus on diminished “competitive constraints” as the touchstone for harm may (perhaps unintentionally) even enable courts to impose liability for harm to competitors caused by procompetitive conduct. For example, the Expert Report would permit a determination that:

[C]onduct tends to… diminish or create a meaningful risk of diminishing the competitive constraints… [if it] tends to (i) increase barriers to entry or expansion by those rivals, (ii) cause rivals to lower their quality-adjusted output or raise their quality-adjusted price, or (iii) reduce rivals’ incentives to compete against the defendant.[63]

But market exit is surely an example of a reduced incentive to compete, even if it results from a rival’s intense (and consumer-welfare-enhancing) competition. Depending on how “barrier to entry” is defined, innovation, product improvement, and vertical integration by a defendant—even when they are procompetitive—all could constitute a barrier to entry by forcing rivals to incur greater costs or compete in multiple markets. Similarly, increased productivity resulting in less demand for labor or other inputs or lower wages could enable a “defendant [to] profitably make a less attractive offer to that supplier or worker… than the defendant could absent that conduct,”[64] even though the increase in market power in that case would be beneficial.[65]

It is true that the Expert Report elsewhere notes that “it is sometimes difficult for courts to distinguish between anticompetitive exclusionary conduct, which is illegal, from competition on the merits, which is legal even if it weakens rivals or drives them out of business altogether.”[66] Thus, it is perhaps unintentional that the report’s proposed language could nevertheless support liability in such circumstances. At the very least, California should not adopt the Expert Report’s proposed language without a clear disclaimer that liability will never be based on “diminished competitive constraints” resulting from consumer-welfare-enhancing conduct or vigorous competition by the defendant.

IV. Penalizing the Existence of Monopolies vs Prohibiting Only the Extension of Monopoly Power

While U.S. monopolization law prohibits only predatory or exclusionary conduct that results in both the unlawful acquisition or maintenance of monopoly power and the creation of net harm to consumers, the EU also punishes the mere exercise of monopoly power—that is, the charging of allegedly “excessive” prices by dominant firms (or the use of “exploitative” business terms). Thus, the EU is willing to punish the mere extraction of rents by a lawfully obtained dominant firm, while the United States punishes only the unlawful extension of market power.

There may be multiple reasons for this difference, including the EU’s particular history with state-sponsored monopolies and its unique efforts to integrate its internal market. Whatever the reason, the U.S. approach, unlike the EU’s, is grounded in a concern for minimizing error costs—not in order to protect monopolists or large companies, but to protect the consumers who benefit from more dynamic markets, more investment, and more innovation:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.[67]

At the same time, the U.S. approach mitigates the serious risk of simply getting it wrong. Such errors are especially likely where, for example, “excessive” prices are in the eye of the beholder and are extremely difficult to ascertain econometrically.

This unfortunate feature of EU competition enforcement would likely be, at least in part, replicated under the reforms proposed by the Expert Report. Indeed, the report’s focus on the welfare of “trading partners”—regardless of whether any perceived harm is passed on to consumers—comes dangerously close to the EU’s preoccupation with reducing the rents captured by monopolists.[68] While the Expert Report does not recommend an “excessive pricing” theory of harm—like the one that exists in the EU—it does echo the EU’s fixation on the immediate fortunes of trading partners (other than consumers) in ways that may ultimately lead to qualitatively equivalent results.

V. The Emulation of European Competition Law in the Expert Report’s Treatment of Specific Practices and Theories of Harm

Beyond the high-level differences discussed above, European and U.S. antitrust authorities also diverge significantly on numerous specific issues. These dissimilarities often result from the different policy goals that animate these two bodies of law. As noted, where U.S. case law is guided by an overarching goal of maximizing consumer welfare (notably, a practice’s effect on output), European competition law tends to favor structural presumptions and places a much heavier emphasis on distributional considerations. In addition, where the U.S. approach to many of these specific issues is deeply influenced by its overwhelming concern with the potentially chilling effects of intervention, this apprehension is very much foreign to European competition law. The result is often widely divergent approaches to complex economic matters in which the United States hews far more closely than does the EU to the humility and restraint suggested by economic learning.

Unfortunately, the recommendations put forward in the Expert Report would largely bring California antitrust law in line with the European approach for many theories of harm. Indeed, the Expert Report rejects the traditional U.S. antitrust-law concern with chilling procompetitive behavior, even proposing statutory language that would hold that “courts should bear in mind that the policy of California is that the risk of under-enforcement of the antitrust laws is greater than the risk of over-enforcement.”[69] Not only is this position unsupported, but it also entails an explicit rejection of a century of U.S. antitrust jurisprudence:

[U]sing language that mimics the Sherman Act would come with a potentially severe disadvantage: California state courts might then believe that they should apply 130 years of federal jurisprudence to cases brought under California state law. In recent decades, that jurisprudence has substantially narrowed the scope of the Sherman Act, as described above, so relying on it could well rob California law of the power it needs to protect competition.[70]

The evidence suggesting that competition has been poorly protected under Sherman Act jurisprudence is generally weak and unconvincing,[71] however, and the same is true for the specific theories of harm that the Expert Report would expand.

A. Predatory Pricing

Predatory pricing is one area where the Expert Report urges policymakers to copy specific rules in force in the EU. In its model statutory language, the Expert Report proposes that California establish that:

liability [for anticompetitive exclusionary conduct] does not require finding… that any price of the defendant for a product or service was below any measure of the costs to the defendant for providing the product or service…, [or] that in a claim of predatory pricing, the defendant is likely to recoup the losses it sustains from below-cost pricing of the products or services at issue[.][72]

U.S. antitrust law subjects allegations of predatory pricing to two strict conditions: 1) monopolists must charge prices that are below some measure of their incremental costs; and 2) there must be a realistic prospect that they will be able to recoup these first-period losses.[73] In laying out its approach to predatory pricing, the Supreme Court identified the risk of false positives and the clear cost of such errors to consumers. It therefore particularly stressed the importance of the recoupment requirement because, without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”[74]

Accordingly, in the United States, authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme or that the scheme itself would effectively foreclose rivals from entering in the first place.[75] Otherwise, competitors would undercut the predator as soon as it attempts to charge supracompetitive prices to recoup its losses. In such a situation—without, that is, the strong likelihood of recouping the lost revenue from underpricing—the overwhelming weight of economic learning (to say nothing of simple logic) makes clear that predatory pricing is not a rational business strategy.[76] Thus, apparent cases of predatory pricing in the absence of the likelihood of recoupment are most likely not, in fact, predatory, and deterring or punishing them would likely actually harm consumers.

In contrast, the legal standard applied to predatory pricing in the EU is much laxer and almost certain, as a result, to risk injuring consumers. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory.[77] Even when a firm imposes prices that are between average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of “a plan to eliminate competition.”[78] Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.[79]

[I]t does not follow from the case-law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive.[80]

By affirmatively dispensing with each of these limitations, the Expert Report effectively recommends that California legislators shift California predatory-pricing law toward the European model. Unfortunately, such a standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant, “Chicago School” understanding of predatory pricing.[81] Indeed, strategic predatory pricing still requires some form of recoupment and the refutation of any convincing business justification offered in response.[82] As Bruce Kobayashi and Tim Muris emphasize, the introduction of new possibility theorems, particularly uncorroborated by rigorous empirical reinforcement, does not necessarily alter the implementation of the error-cost analysis:

While the Post-Chicago School literature on predatory pricing may suggest that rational predatory pricing is theoretically possible, such theories do not show that predatory pricing is a more compelling explanation than the alternative hypothesis of competition on the merits. Because of this literature’s focus on theoretical possibility theorems, little evidence exists regarding the empirical relevance of these theories. Absent specific evidence regarding the plausibility of these theories, the courts… properly ignore such theories.[83]

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in U.S. antitrust law essentially differentiates aggressive pricing behavior that improves consumer welfare by leading to overall price decreases from predatory pricing that reduces welfare due to ultimately higher prices. In other words, it is entirely focused on consumer welfare.

The European approach, by contrast, reflects structuralist considerations that are far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could, by their very success in pricing aggressively—even to the benefit of consumers—engender more concentrated market structures. It is simply presumed that these less-atomistic markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the European Court of Justice’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors.[84]

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.[85]

In short, the European approach leaves much less room for analysis of a pricing scheme’s concrete effects, making it much more prone to false positives than the Brooke Group standard in the United States. It ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory-pricing standards may exert on firms that attempt to attract consumers with aggressive pricing schemes. There is no basis for enshrining such an approach in California law.

B. Refusals to Deal

Refusals to deal are another area where the Expert Report’s recommendations would bring California antitrust rules more in line with the EU model. The Expert Report proposes in its example statutory language that:

[L]iability… does not require finding (i) that the unilateral conduct of the defendant altered or terminated a prior course of dealing between the defendant and a person subject to the exclusionary conduct; [or] (ii) that the defendant treated persons subject to the exclusionary conduct differently than the defendant treated other persons[.][86]

The Expert Report further highlights “Discrimination Against Rivals, for example by refusing to provide rivals of the defendant access to a platform or product or service that the defendant provides to other third-parties” as a particular area of concern.[87]

U.S. and EU antitrust laws diverge sharply when it comes to refusals to deal. While the United States has imposed strenuous limits on enforcement authorities or rivals seeking to bring such cases, EU competition law sets a far lower threshold for liability. The U.S. approach is firmly rooted in the error-cost framework and, in particular, the conclusion that avoiding Type I (false-positive) errors is more important than avoiding Type II (false-negative) errors. As the Supreme Court held in Trinko:

[Enforced sharing] may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities. Enforced sharing also requires antitrust courts to act as central planners, identifying the proper price, quantity, and other terms of dealing—a role for which they are ill suited.[88]

In that case, the Court was unwilling to extend the reach of Section 2, cabining it to a very narrow set of circumstances:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end.[89]

This highlights two key features of American antitrust law concerning refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine—indeed, as the Court held in Trinko, “we have never recognized such a doctrine.”[90] Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market.

Moreover, as the Court observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.[91] While even this is not likely to be the economically appropriate limitation on liability,[92] its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are extremely unlikely—is appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.[93] In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal.[94] Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.[95] In practice, however, all of these conditions have been significantly relaxed by EU courts and the Commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling. As John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.[96]

Thus, EU competition law is far less concerned about its potential chilling effect on firms’ investments than is U.S. antitrust law.

The Expert Report’s wording suggests that its authors would like to see California’s antitrust rules in this area move towards the European model. This seems particularly misguided for a state that so heavily relies on continued investments in innovation.

In discussing its concerns with the state of refusal-to-deal law in the United States, the Expert Report notes that:

[E]ven a monopolist can normally choose the parties with which it will deal and [] a monopolist’s selective refusal to deal with another firm, even a competitor, violates antitrust law only in unusual circumstances…. [The Court] explained that courts are ill-equipped to determine the terms on which one firm should be required to deal with another, so a bright line is necessary to preserve the incentives of both the monopolist and the competitor to compete aggressively in the marketplace. Such a rule may have been reasonable in a setting where “dealing” often meant incurring a large fixed cost to coordinate with the other firm. In an economy containing digital “ecosystems” that connect many businesses to one another, and digital markets with standardized terms of interconnection, such as established application program interfaces (APIs), that rule may immunize much conduct that could be anticompetitive.[97]

This approach is unduly focused on the welfare of specific competitors, rather than the effects on competition and consumers. Indeed, in the Aspen Skiing case (which did find a duty to deal on the defendant’s part), the Supreme Court was clear that harm to competitors alone would be insufficient to establish that a refusal to deal was anticompetitive: “The question whether Ski Co.’s conduct may properly be characterized as exclusionary cannot be answered by simply considering its effect on Highlands. In addition, it is relevant to consider its impact on consumers and whether it has impaired competition in an unnecessarily restrictive way.”[98]

The Expert Report’s additional proposal that liability should not turn on whether the defendant treated particular parties differently in exercising exclusionary conduct (including refusal to deal)[99] is a further move away from effects-based analysis and toward the European model. As Einer Elhauge has noted, there is an important distinction between unconditional and discriminatory exclusionary conduct:

Efforts to simply improve a firm’s own efficiency and win sales by selling a better or cheaper product at above-cost prices should enjoy per se legality without any general requirement to share that greater efficiency with rivals. But exclusionary conditions that discriminate on the basis of rivalry by selectively denying property or products to rivals (or buyers who deal with rivals) are not necessary to further ex ante incentives to enhance the monopolist’s efficiency, and should be illegal when they create a marketwide foreclosure that impairs rival efficiency.[100]

By urging that liability be imposed regardless of whether conduct is exercised in a discriminatory fashion, the Expert Report would remove the general protection that U.S. antitrust law affords unconditional refusals to deal, and would instead apply the conditional standard to all exclusionary conduct.

It seems quite likely, in fact, that this provision is proposed as a rebuke to the 9th U.S. Circuit Court of Appeals’ holding in FTC v. Qualcomm, which found no duty to deal, in part, because the challenged conduct was applied to all rivals equally.[101] At least three of the Expert Report’s authors are on record as vigorously opposing the holding in Qualcomm.[102] But far from supporting a challenge to Qualcomm’s conduct on the grounds that it harmed competition by targeting threatening rivals, the Expert Report authors’ apparent preferred approach to Qualcomm’s alleged refusal to deal was to attempt to force a wholesale change in Qualcomm’s vertically integrated business model.

In other words, the authors would find liability regardless of how Qualcomm enforces its license terms, and would prefer a legal standard that does not condition that finding on exclusionary conduct against only certain rivals. In essence, they see operating at all in the relevant market as a harm.[103] Whatever the merits of this argument in the Qualcomm case, it should not be generalized to undermine the sensible limits that U.S. antitrust has imposed on the refusal-to-deal theory of harm.

C. Vertical and Platform Restraints

Finally, the Expert Report would take a leaf out of the European book when it comes to vertical restraints, including rebates, exclusive dealing, “most favored nation” (MFN) clauses, and platform conduct. Here, again, the Expert Report singles these practices out for attention:

Loyalty Rebates, which penalize a customer that conducts more business with the defendant’s rivals, as opposed to volume discounts, which are generally procompetitive;

Exclusive Dealing Provisions, which disrupt the ability of counterparties to deal with the defendant’s rivals, especially if such provisions are widely used by the defendant;

Most-Favored Nation Clauses, which prohibit counterparties from dealing with the defendant’s rivals on more favorable terms and conditions than those on which they deal with the defendant, especially if such clauses are widely used by the defendant.[104]

There are vast differences between U.S. and EU competition law with respect to vertical restraints. On the one hand, since the Supreme Court’s Leegin ruling, even price-related vertical restraints (such as resale price maintenance, or “RPM”) are assessed under the rule of reason in the United States.[105] Some commentators have gone so far as to say that, in practice, U.S. case law almost amounts to per se legality.[106] Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered restrictions of competition “by object”—the EU’s equivalent of a per se prohibition.[107] This severe treatment also applies to nonprice vertical restraints that tend to partition the European internal market.[108] Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist (and economically grounded) principle that inter-brand competition is the appropriate touchstone to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former.[109]

This especially stringent stance toward vertical restrictions flies in the face of the longstanding mainstream-economics literature addressing the subject. As Patrick Rey and Jean Tirole (hardly the most free-market of economists) saw it as long ago as 1986: “Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.”[110]

While there is theoretical literature (rooted in so-called “possibility theorems”) suggesting that firms can engage in anticompetitive vertical conduct, the empirical evidence strongly suggests that, even though firms do impose vertical restraints, it is exceedingly rare for those restraints to have net anticompetitive effects. Nor is the relative absence of such evidence for lack of looking: countless empirical papers have investigated the competitive effects of vertical integration and vertical contractual arrangements and found predominantly procompetitive benefits or, at worst, neutral effects.[111]

Unlike in the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.”[112] Further, “[the prior approach to resale price maintenance restraints] hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”[113]

By contrast, the EU’s continued per se treatment of RPM strongly reflects its precautionary-principle approach to antitrust, under which European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, unlikely (at best).[114] The U.S. approach to such vertical restraints, which rests on likelihood rather than mere possibility,[115] is far less likely to erroneously condemn beneficial conduct.

There are also significant differences between the U.S. and EU stances on the issue of rebates. This reflects the EU’s relative willingness to disregard complex economics in favor of noneconomic, formalist presumptions (at least, prior to the ECJ’s Intel ruling). Whereas U.S. antitrust has predominantly moved to an effects-based assessment of rebates,[116] this is only starting to happen in the EU. Prior to the ECJ’s Intel ruling, the EU implemented an overly simplistic approach to assessing rebates by dominant firms, where so-called “fidelity” rebates were almost per se illegal.[117] Likely recognizing the problems inherent in this formalistic assessment of rebates, the ECJ’s Intel ruling moved the European case law on rebates to a more evidence-based approach, holding that:

[T]he Commission is not only required to analyse, first, the extent of the undertaking’s dominant position on the relevant market and, secondly, the share of the market covered by the challenged practice, as well as the conditions and arrangements for granting the rebates in question, their duration and their amount; it is also required to assess the possible existence of a strategy aiming to exclude competitors that are at least as efficient as the dominant undertaking from the market.[118]

As Advocate General Nils Wahl noted in his opinion in the case, only such an evidence-based approach could ensure that the challenged conduct was actually harmful:

In this section, I shall explain why an abuse of dominance is never established in the abstract: even in the case of presumptively unlawful practices, the Court has consistently examined the legal and economic context of the impugned conduct. In that sense, the assessment of the context of the conduct scrutinised constitutes a necessary corollary to determining whether an abuse of dominance has taken place. That is not surprising. The conduct scrutinised must, at the very least, be able to foreclose competitors from the market in order to fall under the prohibition laid down in Article 102 TFEU.[119]

The Expert Report, however, contains a direct repudiation of Intel, thus “out-Europing” even Europe itself in its treatment of vertical restraints:

7) Plaintiffs need not show that the rivals whose ability to compete has been reduced are as efficient, or nearly as efficient, as the defendant. Harm to competition can arise when the competitive constraints on the defendant are weakened even when those competitive constraints come from less efficient rivals. Indeed, harm to competition can be especially great when a firm that faces limited competition further weakens its rivals.[120]

If adopted, this language would significantly weaken the requirement that California courts find actual anticompetitive harm arising from challenged vertical conduct. Similarly, the Expert Report’s rejection of the “no-economic-sense” test—“liability…does not require finding… that the conduct of the defendant makes no economic sense apart from its tendency to harm competition”[121]—removes another mechanism to ensure that vertical restraints are condemned only when they cause actual consumer harm, rather than simply injury to a competitor.

As Thom Lambert persuasively demonstrates, both the “as efficient competitor” test and the “no economic sense” test have imperfections. But these commonly applied tools at least help to ensure that courts undertake to find actual anticompetitive harm.[122] Rejecting both simultaneously is decidedly problematic, as it would leave no serious economic constraint on courts’ discretion to condemn practices solely on the ground of structural harm, i.e., harm to certain competitors.

By contrast, the alternative definition that Lambert proposes “would deem conduct to be unreasonably exclusionary if it would exclude from the defendant’s market a ‘competitive rival,’ defined as a rival that is both as determined as the defendant and capable, at minimum efficient scale, of matching the defendant’s efficiency.”[123] While this test may appear to have some traits in common with the Expert Report’s “diminishing competitive constraints” approach, it incorporates a much more robust set of principles and limitations, designed to more clearly distinguish conduct that merely excludes from exclusions that actually cause anticompetitive harm, while minimizing administrative costs.[124] The Expert Report, by contrast, explicitly removes such limitations.

A related problem concerns the Expert Report’s proposal that “when a defendant operates a multi-sided platform business, [liability does not turn on whether] the conduct of the defendant presents harm to competition on more than one side of the multi-sided platform[.]”[125] This provision is meant to reverse the Supreme Court’s holding on platform vertical restraints in Ohio v. American Express that:

Due to indirect network effects, two-sided platforms cannot raise prices on one side without risking a feedback loop of declining demand. And the fact that two-sided platforms charge one side a price that is below or above cost reflects differences in the two sides’ demand elasticity, not market power or anticompetitive pricing. Price increases on one side of the platform likewise do not suggest anticompetitive effects without some evidence that they have increased the overall cost of the platform’s services. Thus, courts must include both sides of the platform—merchants and cardholders—when defining the credit-card market….

…For all these reasons, “[i]n two-sided transaction markets, only one market should be defined.” Any other analysis would lead to “mistaken inferences” of the kind that could “chill the very conduct the antitrust laws are designed to protect.”[126]

As Greg Werden notes, “[a]lleging the relevant market in an antitrust case does not merely identify the portion of the economy most directly affected by the challenged conduct; it identifies the competitive process alleged to be harmed.”[127] Particularly where novel conduct or novel markets are involved, and thus the relevant economic relationships are poorly understood, market definition is crucial to determine “what the nature of [the relevant] products is, how they are priced and on what terms they are sold, what levers [a firm] can use to increase its profits, and what competitive constraints affect its ability to do so.”[128] This is the approach the Supreme Court employed in Amex.

The Expert Report’s proposal to overrule Amex in California is deeply misguided. The economics of two-sided markets are such that “there is no meaningful economic relationship between benefits and costs on each side of the market considered alone…. [A]ny analysis of social welfare must account for the pricing level, the pricing structure, and the feasible alternatives for getting all sides on board.”[129] Assessing anticompetitive harm with respect to only one side of a two-sided market will arbitrarily include and exclude various sets of users and transactions, and incorrectly assess the extent and consequences of market power.[130]

Indeed, evidence of a price effect on only one side of a two-sided platform can be consistent with either neutral, anticompetitive, or procompetitive conduct.[131] Only when output is defined to incorporate the two-sidedness of the product, and where price and quality are assessed on both sides of a sufficiently interrelated two-sided platform, is it even possible to distinguish between procompetitive and anticompetitive effects. In fact, “[s]eparating the two markets allows legitimate competitive activities in the market for general purposes to be penalized no matter how output-enhancing such activities may be.”[132]

Notably, while some scholars have opposed the Amex holding that both sides of a two-sided market must be included in the relevant market in order to assess anticompetitive harm, some of these critics concede that both sides must still be taken into account; their objection is only that the two sides should not be combined into a single relevant market (thus permitting a plaintiff to make out a prima facie case by showing harm to just one side).[133] The language proposed in the Expert Report, however, would go even further, seemingly permitting a finding of liability based solely on harm to one side of a multi-sided market, regardless of countervailing effects on the other side. As in the Amex case itself, such an approach would confer benefits on certain platform business users (in Amex, retailers) at the direct expense of consumers (in Amex, literal consumers of retail goods purchased by credit card).

Adopting such an approach in California—whose economy is significantly dependent on multisided digital-platform firms, including both incumbents and startups[134]—would imperil the state’s economic prospects[135] and exacerbate the incentives for such firms to take jobs, investments, and tax dollars elsewhere.[136]

[1] Antitrust Law — Study B-750, California Law Revision Commission (last revised Apr. 26, 2024), available at http://www.clrc.ca.gov/B750.html.

[2] We welcome the opportunity to comment further or to respond to questions about our comments. Please contact us at [email protected].

[3] Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on Contestable and Fair Markets in the Digital Sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828, 2022 O.J. (L 265) 1.

[4] See Aaron Edlin, Doug Melamed, Sam Miller, Fiona Scott Morton, & Carl Shapiro, Expert Report on Single Firm Conduct, 2024 Cal. L. Rev. Comm’n (hereinafter “Expert Report”), available at ExRpt-B750-Grp1.pdf.

[5] Id. at 14.

[6] Frank H. Easterbrook, The Limits of Antitrust, 63 Tex. L. Rev. 1, 15 (1984).

[7] See, especially, Pac. Bell Tel. Co. v. linkLine Commc’ns, Inc., 555 U.S. 438 (2009); Credit Suisse Sec. (USA) LLC v. Billing, 551 U.S. 264, 265 (2007); Verizon Commc’ns Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398 (2004).

[8] Trinko, 540 U.S. at 414 (quoting Matsushita Elec. Indus. Co. v. Zenith Radio Corp., 475 U.S. 574, 594 (1986)).

[9] Easterbrook, supra note 6, at 7.

[10] See, e.g., Aurelien Portuese, The Rise of Precautionary Antitrust: An Illustration with the EU Google Android Decision, CPI EU News November 2019 (2019) at 4 (“The absence of demonstrated consumer harm in order to find antitrust injury is not fortuitous, but represents a fundamental alteration of antitrust enforcement, predominantly when it comes to big tech companies. Coupled with the lack of clear knowledge, a shift in the burden of proof, and the lack of a consumer harm requirement in order to find abuse of dominance all reveal the precautionary approach that the European Commission has now embraced.”).

[11] See Nassim Nicholas Taleb, Rupert Read, Raphael Douady, Joseph Norman, & Yaneer Bar-Yam, The Precautionary Principle (With Application to the Genetic Modification of Organisms), arXiv preprint arXiv:1410.5787, 2 (2014) (“The purpose of the PP is to avoid a certain class of what, in probability and insurance, is called “ruin” problems. A ruin problem is one where outcomes of risks have a non-zero probability of resulting in unrecoverable losses.”).

[12] The precautionary principle implies that policymakers should bar certain mutually advantageous transactions due to the social costs that they might impose further down the line. Moreover, the precautionary principle has historically been associated with anti-growth positions. See, e.g., Jaap C. Hanekamp, Guillaume Vera-Navas, & S.W. Verstegen, The Historical Roots of Precautionary Thinking: The Cultural Ecological Critique and ‘The Limits to Growth’, 8 J. Risk Res. 295, 299 (2005) (“The first inklings of today’s precautionary thinking as a means of creating a sustainable society can be traced historically to ‘The Limits to Growth’…”).

[13] See, e.g., Greg Ip, Europe Regulates Its Way to Last Place, Wall St. J. (Jan. 31, 2024), https://www.wsj.com/economy/europe-regulates-its-way-to-last-place-2a03c21d (“Of course, Europe’s economy underperforms for lots of reasons, from demographics to energy costs, not just regulation. And U.S. regulators aren’t exactly hands-off. Still, they tend to act on evidence of harm, whereas Europe’s will act on the mere possibility. This precautionary principle can throttle innovation in its cradle.”) (emphasis added).

[14] See, e.g., id.; Eric Albert, Europe Trails Behind the United States in Economic Growth, Le Monde (Nov. 1, 2023), https://www.lemonde.fr/en/economy/article/2023/11/01/europe-trails-behind-the-united-states-in-economic-growth_6218259_19.html (“For the past fifteen years, Europe has been falling further and further behind…. Since 2007, per capita growth on the other side of the Atlantic has been 19.2%, compared with 7.6% in the eurozone. A gap of almost twelve points.”).

[15] Fredrik Erixon, Oscar Guinea, & Oscar du Roy, If the EU Was a State in the United States: Comparing Economic Growth Between EU and US States, ECIPE Policy Brief No. 07/2023 (2023), available at https://ecipe.org/publications/comparing-economic-growth-between-eu-and-us-states.

[16] Among other things, the Expert Report argues that antitrust should be used to address alleged policy concerns broader than protecting competition, and should accept reductions in competition to do so. See Expert Report, supra note 4, at 2 (“Nonetheless, these important values [‘broader social and political goals’] can influence the evidentiary standards that the Legislature instructs the courts to apply when handling individual antitrust cases. For example, the California Legislature could instruct the courts to err on the side of enforcement when the effect of the conduct at issue on competition is uncertain.”). But as one of the authors of the Expert Report has himself noted elsewhere: “while antitrust enforcement has a vital role to play in keeping markets competitive, antitrust law and antitrust institutions are ill suited to directly address concerns associated with the political power of large corporations or other public policy goals such as income inequality or job creation.” Carl Shapiro, Antitrust in a Time of Populism, 61 Int’l J. Indus. Org. 714, 714 (2018) (emphasis added).

[17] See generally Easterbrook, supra note 6, at 14-15. See also Geoffrey A. Manne & Joshua D. Wright, Innovation and the Limits of Antitrust, 6 J. Comp. L. & Econ. 153 (2010).

[18] See Robert W. Crandall & Clifford Winston, Does Antitrust Policy Improve Consumer Welfare? Assessing the Evidence, 17 J. Econ. Persp. 3, 4 (2003) (“[T]he economics profession should conclude that until it can provide some hard evidence that identifies where the antitrust authorities are significantly improving consumer welfare and can explain why some enforcement actions and remedies are helpful and others are not, those authorities would be well advised to prosecute only the most egregious anticompetitive violations.”).

[19] David Autor, David Dorn, Lawrence F. Katz, Christina Patterson & John Van Reenen, The Fall of the Labor Share and the Rise of Superstar Firms, 135 Q.J. Econ. 645, 651 (2020) (citations omitted) (emphasis added).

[20] See, e.g., Thomas Philippon, The Great Reversal: How America Gave Up on Free Markets (2019); Jan De Loecker, Jan Eeckhout, & Gabriel Unger, The Rise of Market Power and the Macroeconomic Implications, 135 Q. J. Econ. 561 (2020); David Wessel, Is Lack of Competition Strangling the U.S. Economy?, Harv. Bus. Rev. (Apr. 2018), https://hbr.org/2018/03/is-lack-of-competition-strangling-the-u-s-economy; Adil Abdela & Marshall Steinbaum, The United States Has a Market Concentration Problem, Roosevelt Institute Issue Brief (2018), available at https://rooseveltinstitute.org/wp-content/uploads/2020/07/RI-US-market-concentration-problem-brief-201809.pdf.

[21] A number of papers simply do not find that the accepted story—built in significant part around the famous De Loecker, Eeckhout, & Unger study, id.—regarding the vast size of markups and market power is accurate. The claimed markups due to increased concentration are likely not nearly as substantial as commonly assumed. See, e.g., James Traina, Is Aggregate Market Power Increasing? Production Trends Using Financial Statements, Stigler Center Working Paper (Feb. 2018), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3120849; see also World Economic Outlook, April 2019 Growth Slowdown, Precarious Recovery, International Monetary Fund (Apr. 2019), available at https://www.imf.org/en/Publications/WEO/Issues/2019/03/28/world-economic-outlook-april-2019. Another study finds that profits have increased, but are still within their historical range. See Loukas Karabarbounis & Brent Neiman, Accounting for Factorless Income, 33 NBER Macro. Annual 167 (2019). And still another shows decreased wages in concentrated markets, but also that local concentration has been decreasing over the relevant time period, suggesting that lack of enforcement is not a problem. See Kevin Rinz, Labor Market Concentration, Earnings, and Inequality, 57 J. Hum. Resources S251 (2022).

[22] See Esteban Rossi-Hansberg, Pierre-Daniel Sarte, & Nicholas Trachter, Diverging Trends in National and Local Concentration, 35 NBER Macro. Annual 115, 116 (2020) (“[T]he observed positive trend in market concentration at the national level has been accompanied by a corresponding negative trend in average local market concentration…. The narrower the geographic definition, the faster is the decline in local concentration. This is meaningful because the relevant definition of concentration from which to infer changes in competition is, in most sectors, local and not national.”).

[23] Id. at 117 (emphasis added).

[24] Sharat Ganapati, Growing Oligopolies, Prices, Output, and Productivity, 13 Am. Econ. J. Micro. 309, 323-24 (2021) (emphasis added).

[25] Chang-Tai Hsieh & Esteban Rossi-Hansberg, The Industrial Revolution in Services, 1 J. Pol. Econ. Macro. 3, 3 (2023) (emphasis added). See also id. at 39 (“Over the past 4 decades, the US economy has experienced a new industrial revolution that has enabled firms to scale up production over a large number of establishments dispersed across space. The adoption of these technologies has particularly favored productive firms in nontraded-service industries. The industrial revolution in services has had its largest effect in smaller and mid-sized local markets…. The gain to local consumers from access to more, better, and novel varieties of local services from the entry of top firms into local markets is not captured by the BLS. We estimate that such ‘missing growth’ is as large as 1.6% in the smallest markets and averages 0.5% per year from 1977 to 2013 across all US cities.”) (emphasis added).

[26] David Berger, Kyle Herkenhoff & Simon Mongey, Labor Market Power, 112 Am. Econ. Rev. 1147, 1148-49 (2022).

[27] Shapiro, Antitrust in a Time of Populism, supra note 16, at 727-28.

[28] Expert Report, supra note 4, at 15 (emphasis added).

[29] Id. at 2.

[30] A. Douglas Melamed, Antitrust Law and Its Critics, 83 Antitrust L.J. 269, 285 (2020).

[31] Herbert J. Hovenkamp & Fiona Scott Morton, Framing the Chicago School of Antitrust Analysis, 168 U. Penn. L. Rev. 1843, 1870-71 (2020).

[32] Easterbrook, supra note 6, at 2-3.

[33] Hovenkamp & Scott Morton, supra note 31, at 1849.

[34] See generally Geoffrey A. Manne, Error Costs in Digital Markets, in Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg eds., 2020), available at https://gaidigitalreport.com/wp-content/uploads/2020/11/Manne-Error-Costs-in-Digital-Markets.pdf.

[35] Bruce H. Kobayashi & Timothy J. Muris, Chicago, Post-Chicago, and Beyond: Time to Let Go of the 20th Century, 78 Antitrust L.J. 147, 166 (2012).

[36] See id. at 166 (“[T]here is very little empirical evidence based on in-depth industry studies that RRC is a significant antitrust problem.”); id. at 148 (“Because of [the Post-Chicago School] literature’s focus on theoretical possibility theorems, little evidence exists regarding the empirical relevance of these theories.”).

[37] See Expert Report, supra note 4, at 7 (“The history of federal antitrust enforcement of single-firm conduct illustrates that when courts are uncertain about how to assess conduct, they often find in favor of defendants even if the conduct harms competition simply because the plaintiff bears the burden of proof.”).

[38] See supra notes 19-27, and accompanying text.

[39] See Case C-413/14 P Intel v Commission, ECLI:EU:C:2017:788.

[40] See Steven Berry, Martin Gaynor, & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33 J. Econ. Persp. 48 (2019). See also Jonathan Baker & Timothy F. Bresnahan, Economic Evidence in Antitrust: Defining Markets and Measuring Market Power in Handbook of Antitrust Economics 1 (Paolo Buccirossi ed., 2008) (“The Chicago identification argument has carried the day, and structure-conduct-performance empirical methods have largely been discarded in economics.”).

[41] See, e.g., Gregory J. Werden & Luke Froeb, Don’t Panic: A Guide to Claims of Increasing Concentration, 33 Antitrust 74 (2018), https://ssrn.com/abstract=3156912, and papers cited therein. As Werden & Froeb conclude: “No evidence we have uncovered substantiates a broad upward trend in the market concentration in the United States, but market concentration undoubtedly has increased significantly in some sectors, such as wireless telephony. Such increases in concentration, however, do not warrant alarm or imply a failure of antitrust. Increases in market concentration are not a concern of competition policy when concentration remains low, yet low levels of concentration are being cited by those alarmed about increasing concentration….” Id. at 78. See also Joshua D. Wright, Elyse Dorsey, Jonathan Klick, & Jan M. Rybnicek, Requiem for a Paradox: The Dubious Rise and Inevitable Fall of Hipster Antitrust, 51 Ariz. St. L.J. 293 (2019).

[42] See, e.g., Expert Report, supra note 4, at 15.

[43] Francine Lafontaine & Margaret Slade, Exclusive Contracts and Vertical Restraints: Empirical Evidence and Public Policy, in Handbook of Antitrust Economics 391 (Paolo Buccirossi ed., 2008).

[44] See, e.g., Daniel P. O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems, in The Pros and Cons of Vertical Restraints 40, 72-76 (Swedish Competition Authority, 2008) (“[Vertical restraints] are unlikely to be anticompetitive in most cases.”); James C. Cooper, et al., Vertical Antitrust Policy as a Problem of Inference, 23 Int’l J. Indus. Org. 639 (2005) (surveying the empirical literature, concluding that although “some studies find evidence consistent with both pro- and anticompetitive effects… virtually no studies can claim to have identified instances where vertical practices were likely to have harmed competition”); Benjamin Klein, Competitive Resale Price Maintenance in the Absence of Free-Riding, 76 Antitrust L.J. 431 (2009); Bruce H. Kobayashi, Does Economics Provide a Reliable Guide to Regulating Commodity Bundling by Firms? A Survey of the Economic Literature, 1 J. Comp. L. & Econ. 707 (2005).

[45] James Cooper, Luke Froeb, Daniel O’Brien, & Michael Vita, Vertical Restrictions and Antitrust Policy: What About the Evidence?, 1 Comp. Pol’y Int’l 45 (2005).

[46] Id.

[47] Expert Report, supra note 1, at 16 (“(b) Conduct, whether by one or multiple actors, is deemed to be anticompetitive exclusionary conduct, if the conduct tends to (1) diminish or create a meaningful risk of diminishing the competitive constraints imposed by the defendant’s rivals and thereby increase or create a meaningful risk of increasing the defendant’s market power, and (2) does not provide sufficient benefits to prevent the defendant’s trading partners from being harmed by that increased market power.”).

[48] Id.

[49] See TransUnion LLC v. Ramirez, 141 S. Ct. 2190, 2210-11 (2021) (“The plaintiffs rely on language from Spokeo where the Court said that ‘the risk of real harm’ (or as the Court otherwise stated, a ‘material risk of harm’) can sometimes ‘satisfy the requirement of concreteness’…. [but] in a suit for damages, the mere risk of future harm, standing alone, cannot qualify as a concrete harm—at least unless the exposure to the risk of future harm itself causes a separate concrete harm.”) (citations omitted).

[50] In essence, for uncertain future effects, U.S. antitrust law applies something like a “reasonableness” standard. See U.S. v. Microsoft Corp., 253 F.3d 34, 79 (D.C. Cir. 2001) (enjoining “conduct that is reasonably capable of contributing significantly to a defendant’s continued monopoly power”) (emphasis added). Of course, “material risk” is undefined, so perhaps it is meant to accord with this standard. If so, it should use the same language.

[51] Herbert Hovenkamp, Antitrust Harm and Causation, 99 Wash. U. L. Rev. 787, 841 (2021). See also id. at 788 (“While a showing of actual harm can be important evidence, in most cases the public authorities need not show that harm has actually occurred, but only that the challenged conduct poses an unreasonable danger that it will occur.”) (emphasis added).

[52] Expert Report, supra note 1, at 16.

[53] See Brunswick Corp. v. Pueblo Bowl-O-Mat, Inc., 429 U.S. 477, 487-88 (1977) (“If the acquisitions here were unlawful, it is because they brought a ‘deep pocket’ parent into a market of ‘pygmies.’ Yet respondents’ injury—the loss of income that would have accrued had the acquired centers gone bankrupt—bears no relationship to the size of either the acquiring company or its competitors. Respondents would have suffered the identical ‘loss’—but no compensable injury—had the acquired centers instead obtained refinancing or been purchased by ‘shallow pocket’ parents, as the Court of Appeals itself acknowledged. Thus, respondents’ injury was not of ‘the type that the statute was intended to forestall[.]’”) (citations omitted).

[54] Expert Report, supra note 1, at 17.

[55] Microsoft, 253 F.3d at 79.

[56] Treaty on European Union, Protocol (No. 27) on the Internal Market and Competition, 2008 O.J. (C 115).

[57] See especially Expert Report, supra note 1, at 17, §§ (f)(8) & (g) through (i).

[58] See, e.g., Joaquín Almunia, Competition and Consumers: The Future of EU Competition Policy, Speech at European Competition Day, Madrid (May 12, 2010), available at http://europa.eu/rapid/press-release_SPEECH-10-233_en.pdf (“All of us here today know very well what our ultimate objective is: Competition policy is a tool at the service of consumers. Consumer welfare is at the heart of our policy and its achievement drives our priorities and guides our decisions.”). Even then, however, it must be noted that Almunia elaborated that “[o]ur objective is to ensure that consumers enjoy the benefits of competition, a wider choice of goods, of better quality and at lower prices.” Id. (emphasis added). In fact, expanded consumer choice is not necessarily the same thing as consumer welfare, and may at times be at odds with it. See Joshua D. Wright & Douglas H. Ginsburg, The Goals of Antitrust: Welfare Trumps Choice, 81 Fordham L. Rev. 2405 (2013).

[59] See Commission Guidance on the Commission’s Enforcement Priorities in Applying Article 82 of the EC Treaty to Abusive Exclusionary Conduct by Dominant Undertakings, 2009 O.J. (C 45) 7, at n.5, § 6 (“[T]he Commission is mindful that what really matters is protecting an effective competitive process and not simply protecting competitors.”).

[60] See Case C-209/10, Post Danmark A/S v Konkurrencerådet, ECLI:EU:C:2012:172, ¶ 22 (“Competition on the merits may, by definition, lead to the departure from the market or the marginalisation of competitors that are less efficient and so less attractive to consumers….”).

[61] See Pablo Ibáñez Colomo, Exclusionary Discrimination Under Article 102 TFEU, 51 Common Market L. Rev. 153 (2014).

[62] Id.

[63] Expert Report, supra note 1, at 16.

[64] Id.

[65] See Brian Albrecht, Dirk Auer, & Geoffrey A. Manne, Labor Monopsony and Antitrust Enforcement: A Cautionary Tale, ICLE White Paper No. 2024-05-01 (2024) at 21, available at https://laweconcenter.org/wp-content/uploads/2024/05/Labor-Monopsony-Antitrust-final-.pdf (“[Conduct] that creates monopsony power will necessarily reduce the prices and quantity purchased of inputs like labor and materials. But this same effect (reduced prices and quantities for inputs) would also be observed if the [conduct] is efficiency enhancing. If there are efficiency gains, the [] entity may purchase fewer of one or more inputs than [it would otherwise]. For example, if the efficiency gain arises from the elimination of redundancies in a hospital…, the hospital will buy fewer inputs, hire fewer technicians, or purchase fewer medical supplies.”). See also Ivan Kirov & James Traina, Labor Market Power and Technological Change in US Manufacturing, conference paper for Institute for Labor Economics (Oct. 2022), at 42, available at https://conference.iza.org/conference_files/Macro_2022/traina_j33031.pdf (“The labor [markdown] therefore increases because ‘productivity’ rises, and not because pay falls. This suggests that technological change plays a large role in the rise of the labor [markdown].”).

[66] Expert Report, supra note 1, at 15 (emphasis added).

[67] Trinko, 540 U.S. at 407.

[68] See Expert Report, supra note 1, at 16 (“‘Trading partners’ are parties with which the defendant deals, either as a customer or as a supplier. In [assessing anticompetitive exclusionary conduct], a trading partner is deemed to be harmed or benefited even if that trading partner passes some or all of that harm or benefit on to other parties.”).

[69] Id. at 15 (emphasis added).

[70] Id. at 13.

[71] See supra Section II.

[72] Expert Report, supra note 1, at 17. As the Expert Report acknowledges elsewhere, recoupment is a “requirement for a predatory pricing claim under federal antitrust law.” Id. at 15.

[73] See Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209, 222-27 (1993).

[74] Id. at 224.

[75] On entry deterrence, see Steven C. Salop, Strategic Entry Deterrence, 69 Am. Econ. Rev. 335 (1979).

[76] See generally John S. McGee, Predatory Pricing Revisited, 23 J.L. & Econ. 289 (1980). Some economists have more recently posed a “strategic” theory of predatory pricing that purports to expand substantially (and redirect) the scope of circumstances in which predatory pricing could be rational. See, e.g., Patrick Bolton, Joseph F. Brodley, & Michael H. Riordan, Predatory Pricing: Strategic Theory and Legal Policy, 88 Geo. L.J. 2239 (2000). While this and related theories have, indeed, likely expanded the theoretical scope of circumstances conducive to predatory pricing, they have not established that these conditions are remotely likely to occur. See Bruce H. Kobayashi, The Law and Economics of Predatory Pricing, in 4 Encyclopedia of Law and Economics (De Geest ed., 2017) (“The models showing rational predation can exist and the evidence consistent with episodes of predation do not demonstrate that predation is either ubiquitous or frequent. Moreover, many of these models do not consider the welfare effects of predation, and those that do generally find the welfare effects ambiguous.”). From a legal perspective, particularly given the risk of error in discerning the difference between predatory pricing and legitimate price cutting, it is far more important to limit cases to situations likely to cause consumer harm rather than those in which harm is a remote possibility. The cost of error, of course, is the legal imposition of artificially inflated prices for consumers.

[77] Case C-62/86, AKZO v Comm’n, EU:C:1991:286, ¶¶ 71-72.

[78] Id. at ¶ 72 (“[P]rices below average total costs, that is to say, fixed costs plus variable costs, but above average variable costs, must be regarded as abusive if they are determined as part of a plan for eliminating a competitor.”).

[79] Case C-333/94 P, Tetra Pak v Comm’n, EU:C:1996:436, ¶ 44. See also Case C-202/07 P, France Télécom v Comm’n, EU:C:2009:214, ¶ 110.

[80] Id. at ¶ 107.

[81] See, e.g., Bolton, Brodley, & Riordan, supra note 76.

[82] See id. at 2267 (“[A]nticipated recoupment is intrinsic in [strategic] theories, because without such an expectation predatory pricing is not sensible economic behavior.”). See also Kenneth G. Elzinga & David E. Mills, Predatory Pricing and Strategic Theory, 89 Geo. L.J. 2475, 2483 (2001) (“Of course, no proposed scheme of predation is credible unless it embodies a plausible means of recoupment, but this does not justify taking shortcuts in analysis. In particular, it is unwise to presume that a plausible means of recoupment exists just because facts supporting other features of a strategic theory, such as asymmetric information, are evident. Facts conducive to probable recoupment ought to be established independently.”).

[83] Kobayashi & Muris, supra note 35, at 166.

[84] Tetra Pak, supra note 79, at ¶ 44.

[85] France Télécom, supra note 79, at ¶ 112.

[86] Expert Report, supra note 1, at 17.

[87] Expert Report, supra note 1, at 15.

[88] Trinko, 540 U.S. at 408.

[89] Trinko, 540 U.S. at 409.

[90] Trinko, 540 U.S. at 411. See also Phillip Areeda, Essential Facilities: An Epithet in Need of Limiting Principles, 58 Antitrust L.J. 841 (1989).

[91] Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 U.S. 585, 610-11 (1985).

[92] See Alan J. Meese, Property, Aspen, and Refusals to Deal, 73 Antitrust L. J. 81, 112-13 (2005).

[93] See Joined Cases 6/73 & 7/73, Instituto Chemioterapico Italiano S.p.A. and Commercial Solvents Corporation v. Comm’n, 1974 E.C.R. 223, [1974] 1 C.M.L.R. 309.

[94] See Case C-7/97, Oscar Bronner GmbH & Co. KG v Mediaprint Zeitungs- und Zeitschriftenverlag GmbH & Co. KG, EU:C:1998:569, ¶ 41.

[95] See Case C-241/91 P, RTE and ITP v Comm’n, EU:C:1995:98, ¶ 54. See also Case C-418/01, IMS Health, EU:C:2004:257, ¶ 37.

[96] John Vickers, Competition Policy and Property Rights, 120 Econ. J. 390 (2010).

[97] Expert Report, supra note 1, at 7.

[98] Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 U.S. 585, 605 (1985).

[99] Expert Report, supra note 1, at 17.

[100] Einer Elhauge, Defining Better Monopolization Standards, 56 Stan. L. Rev. 253, 343 (2003).

[101] See Fed. Trade Comm’n v. Qualcomm Inc., 969 F.3d 974, 995 (9th Cir. 2020) (“Finally, unlike in Aspen Skiing, the district court found no evidence that Qualcomm singles out any specific chip supplier for anticompetitive treatment in its SEP-licensing. In Aspen Skiing, the defendant refused to sell its lift tickets to a smaller, rival ski resort even as it sold the same lift tickets to any other willing buyer (including any other ski resort)…. Qualcomm applies its OEM-level licensing policy equally with respect to all competitors in the modem chip markets and declines to enforce its patents against these rivals…. Instead, Qualcomm provides these rivals indemnifications…—the Aspen Skiing equivalent of refusing to sell a skier a lift ticket but letting them ride the chairlift anyway. Thus, while Qualcomm’s policy toward OEMs is ‘no license, no chips,’ its policy toward rival chipmakers could be characterized as ‘no license, no problem.’ Because Qualcomm applies the latter policy neutrally with respect to all competing modem chip manufacturers, the third Aspen Skiing requirement does not apply.”).

[102] Carl Shapiro was an economic expert for the FTC in the case, and Fiona Scott Morton was an economic expert for Apple in related litigation against Qualcomm. Doug Melamed was co-author of an amicus brief supporting the FTC in the 9th U.S. Circuit Court of Appeals. (In the interests of full disclosure, we authored an amicus brief, joined by 12 scholars of law & economics, supporting Qualcomm in the 9th Circuit. See Brief of Amici Curiae International Center for Law & Economics and Scholars of Law and Economics in Support of Appellant and Reversal, FTC v. Qualcomm, No. 19-16122 (9th Cir., Aug. 30, 2019), available at https://laweconcenter.org/wp-content/uploads/2019/09/ICLE-Amicus-Brief-in-FTC-v-Qualcomm-FINAL-9th-Cir-2019.pdf).

[103] For a discussion of the frailties of these arguments, see Geoffrey A. Manne & Dirk Auer, Exclusionary Pricing Without the Exclusion: Unpacking Qualcomm’s No License, No Chips Policy, Truth on the Market (Jan. 17, 2020), https://truthonthemarket.com/2020/01/17/exclusionary-pricing-without-the-exclusion-unpacking-qualcomms-no-license-no-chips-policy (“The amici are thus left with the argument that Qualcomm could structure its prices differently, so as to maximize the profits of its rivals. Why it would choose to do so, or should indeed be forced to, is a whole other matter.”). For a response by one of the Expert Report authors, see Mark A. Lemley, A. Douglas Melamed, & Steve Salop, Manne and Auer’s Defense of Qualcomm’s Licensing Policy Is Deeply Flawed, Truth on the Market (Jan. 21, 2020), https://truthonthemarket.com/2020/01/21/manne-and-auers-defense-of-qualcomms-licensing-policy-is-deeply-flawed.

[104] Expert Report, supra note 1, at 15.

[105] See Leegin Creative Leather Prods., Inc. v. PSKS, Inc., 551 U.S. 877 (2007).

[106] See, e.g., D. Daniel Sokol, The Transformation of Vertical Restraints: Per Se Illegality, The Rule of Reason, and Per Se Legality, 79 Antitrust L.J. 1003, 1004 (2014) (“[T]he shift in the antitrust rules applied to [vertical restraints] has not been from per se illegality to the rule of reason, but has been a more dramatic shift from per se illegality to presumptive legality under the rule of reason.”).

[107] See Commission Regulation (EU) No 330/2010 of 20 April 2010 on the Application of Article 101(3) of the Treaty on the Functioning of the European Union to Categories of Vertical Agreements and Concerted Practices, 2010 O.J. (L 102) art. 4(a).

[108] See, e.g., Case C-403/08, Football Association Premier League and Others, ECLI:EU:C:2011:631, ¶ 139 (“[A]greements which are aimed at partitioning national markets according to national borders or make the interpenetration of national markets more difficult must be regarded, in principle, as agreements whose object is to restrict competition within the meaning of Article 101(1) TFEU.”).

[109] Joined Cases 56/64 and 58/64, Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, ECLI:EU:C:1966:41, at 343.

[110] Patrick Rey & Jean Tirole, The Logic of Vertical Restraints, 76 Am. Econ. Rev. 921, 937 (1986) (emphasis added).

[111] These papers are collected and assessed in several literature reviews, including Lafontaine & Slade, supra note 43; O’Brien, supra note 44; Cooper et al., supra note 44; Global Antitrust Institute, Comment Letter on Federal Trade Commission’s Hearings on Competition and Consumer Protection in the 21st Century, Vertical Mergers (George Mason Law & Econ. Research Paper No. 18-27, Sep. 6, 2018). Even the reviews of such conduct that purport to be critical are only tepidly so. See, e.g., Marissa Beck & Fiona Scott Morton, Evaluating the Evidence on Vertical Mergers, 59 Rev. Indus. Org. 273 (2021) (“[M]any vertical mergers are harmless or procompetitive, but that is a far weaker statement than presuming every or even most vertical mergers benefit competition regardless of market structure.”).

[112] Leegin, 551 U.S. at 889.

[113] Id. at 902.

[114] See, e.g., Lafontaine & Slade, supra note 43.

[115] See Leegin, 551 U.S. at 886-87 (holding that the per se rule should be applied “only after courts have had considerable experience with the type of restraint at issue” and “only if courts can predict with confidence that [the restraint] would be invalidated in all or almost all instances under the rule of reason” because it “‘lack[s]… any redeeming virtue’”) (citations omitted).

[116] See Bruce Kobayashi, The Economics of Loyalty Rebates and Antitrust Law in the United States, 1 Comp. Pol’y Int’l 115, 147 (2005).

[117] See, e.g., Case C-85/76, Hoffmann-La Roche & Co. AG v Commission of the European Communities, EU:C:1979:36, at 7.

[118] See Intel, supra note 39, at ¶ 139 (emphasis added).

[119] Opinion of AG Wahl in Case C-413/14 P, Intel v Commission, ECLI:EU:C:2016:788, ¶ 73.

[120] Expert Report, supra note 1, at 17.

[121] Id.

[122] See, e.g., Thomas A. Lambert, Defining Unreasonably Exclusionary Conduct: The Exclusion of a Competitive Rival Approach, 92 N.C. L. Rev. 1175, 1175 (2014) (“This Article examines the proposed definitions or tests for identifying unreasonably exclusionary conduct (including the non-universalist approach) and, finding each lacking, suggests an alternative definition.”).

[123] Id.

[124] Id. at 1244 (“Drawing lessons from past, unsuccessful attempts to define unreasonably exclusionary conduct, this Article has set forth a definition that identifies a common thread tying together all instances of unreasonable exclusion, comports with widely accepted intuitions about what constitutes improper competitive conduct, and generates specific safe harbors and liability rules that would collectively minimize the sum of antitrust’s decision and error costs.”).

[125] Expert Report, supra note 1, at 17.

[126] Ohio v. Am. Express Co., 138 S. Ct. 2274, 2286-87 (2018).

[127] Gregory J. Werden, Why (Ever) Define Markets? An Answer to Professor Kaplow, 78 Antitrust L.J. 729, 741 (2013).

[128] Geoffrey A. Manne, In Defence of the Supreme Court’s ‘Single Market’ Definition in Ohio v. American Express, 7 J. Antitrust Enforcement 104, 106 (2019).

[129] David S. Evans, The Antitrust Economics of Multi-Sided Platform Markets, 20 Yale J. on Reg. 325, 355-56 (2003). See also Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Eur. Econ. Ass’n 990, 1018 (2003).

[130] See, e.g., Michal S. Gal & Daniel L. Rubinfeld, The Hidden Cost of Free Goods, 80 Antitrust L.J. 521, 557 (2016) (discussing the problematic French Competition Tribunal decision in Bottin Cartographes v. Google Inc., where “[d]isregarding the product’s two-sided market, and its cross-network effects, the court possibly prevented a welfare-increasing business strategy”).

[131] See, e.g., Brief of Amici Curiae Prof. David S Evans and Prof. Richard Schmalensee in Support of Respondents in Ohio, et al. v. American Express Co., No. 16-1454 (Sup. Ct. Jan. 23, 2018) at 21, available at https://www.supremecourt.gov/DocketPDF/16/16-1454/28957/20180123154205947_16-1454%20State%20of%20Ohio%20v%20American%20Express%20Brief%20for%20Amici%20Curiae%20Professors%20in%20Support%20of%20Respondents.pdf (“The first stage of the rule of reason analysis involves determining whether the conduct is anticompetitive. The economic literature on two-sided platforms shows that there is no basis for presuming one could, as a general matter, know the answer to that question without considering both sides of the platform.”).

[132] United States, et al. v. Am. Express Co., et al., 838 F.3d 179, 198 (2d Cir. 2016).

[133] See, e.g., Michael Katz & Jonathan Sallet, Multisided Platforms and Antitrust Enforcement, 127 Yale L.J. 2142, 2161 (2018) (“[I]t is essential to account for any significant feedback effects and possible changes in prices on both sides of a platform when assessing whether a particular firm has substantial market power.”).

[134] California earned 10% of its statewide GDP from the tech industry in 2021, and just over 9% in 2022. See SAGDP2N Gross Domestic Product (GDP) by State, Bureau of Economic Analysis (last visited May 1, 2024), https://tinyurl.com/ysaf6rfc.

[135] See Joseph Politano, California Is Losing Tech Jobs, Apricitas Economics (Apr. 14, 2024), https://www.apricitas.io/p/california-is-losing-tech-jobs (“[California’s] GDP fell 2.1% through 2022, the second-biggest drop of any state over that period, driven by a massive deceleration across the information sector. That allowed states like Texas to overtake California in the post-pandemic GDP recovery, creating a gap that California still hasn’t been able to close despite its economic rebound in 2023.”).

[136] See id. (“[T]he Golden State has been bleeding tech jobs over the last year and a half—since August 2022, California has lost 21k jobs in computer systems design & related, 15k in streaming & social networks, 11k in software publishing, and 7k in web search & related—while gaining less than 1k in computing infrastructure & data processing. Since the beginning of COVID, California has added a sum total of only 6k jobs in the tech industry—compared to roughly 570k across the rest of the United States.”).

Continue reading
Antitrust & Consumer Protection

Lazar Radic on the EU’s DMA

Presentations & Interviews ICLE Senior Scholar Lazar Radic was a guest on the Mobile Dev Memo podcast to discuss the EU’s Digital Markets Act and the broader competition-regulation . . .

ICLE Senior Scholar Lazar Radic was a guest on the Mobile Dev Memo podcast to discuss the EU’s Digital Markets Act and the broader competition-regulation landscape. Audio of the full episode is embedded below.

Continue reading
Antitrust & Consumer Protection

The FTC Office of Patent Invalidation

TOTM The Federal Trade Commission (FTC) announced late last month that it had “expanded its campaign against pharmaceutical manufacturers’ improper or inaccurate listing of patents in the Food . . .

The Federal Trade Commission (FTC) announced late last month that it had “expanded its campaign against pharmaceutical manufacturers’ improper or inaccurate listing of patents in the Food and Drug Administration’s (FDA) Orange Book, disputing junk patent listings for diabetes, weight loss, asthma, and COPD drugs, including Novo Nordisk Inc.’s blockbuster weight-loss drug, Ozempic.” Warning letters were sent to 10 manufacturers, including, among others, Teva (identifying 58 listings), Novo Nordisk (36 listings), and Boehringer Ingelheim (10 listings).

The commission “notified the FDA that it disputes the accuracy or relevance of more than 300 Orange Book patent listings across 20 different brand name products.” That expands on the 100-plus patents listed in the November 2023 warning letters that the FTC sent to an overlapping group of manufacturers.

That’s quite a few challenges. Reading through the FTC’s press release and warning letters, it’s not really clear what’s going on here. More frolic and detour on the part of Chair Lina Khan’s FTC, or a legitimate effort to protect competition in pharmaceutical markets?

Read the full piece here.

Continue reading
Intellectual Property & Licensing