Showing 9 of 135 Publications by Dirk Auer

A Positive Agenda for Digital-Competition Enforcement


Reasonable people may disagree about their merits, but digital-competition regulations are now the law of the land in many jurisdictions, including the EU and the UK. Policymakers in those jurisdictions will thus need to successfully navigate heretofore uncharted waters in order to implement these regulations reasonably. In recent comments that we submitted to the UK’s Competition and Markets Authority on the recently passed Digital Markets, Competition and Consumers (DMCC) bill, we tried to outline precisely that sort of “positive agenda” for digital-competition enforcement.

Read the full piece here.


ICLE Comments to DOJ on Promoting Competition in Artificial Intelligence


Executive Summary

We thank the U.S. Justice Department Antitrust Division (DOJ) for this invitation to comment (ITC) on “Promoting Competition in Artificial Intelligence.”[1] The International Center for Law & Economics (ICLE) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

In these comments, we express the view that policymakers’ current concerns about competition in AI industries may be unwarranted. This is particularly true of the notions that data-network effects shield incumbents in AI markets from competition; that Web 2.0’s most successful platforms will be able to leverage their competitive positions to dominate generative-AI markets; that these same platforms may use strategic partnerships with AI firms to insulate themselves from competition; and that generative-AI services occupy narrow markets that leave firms with significant market power.

In fact, we are still far from understanding the boundaries of antitrust-relevant markets in AI. Three considerations should be at the forefront of competition authorities’ minds when they think about market definition in AI products and services. First, the “AI market” is not unitary, but is instead composed of many distinct goods and services. Second, and relatedly, enforcers should look beyond the AI marketing hype to see how this extremely heterogeneous product landscape intersects with an equally variegated landscape of consumer demand.

In other words: AI products and services may, in many instances, be substitutable for non-AI products, which would mean that, for the purposes of antitrust law, AI and non-AI products compete in the same relevant market. Getting the relevant product-market definition right matters because an erroneous market definition can lead to erroneous inferences about market power. While either an overly broad or an overly narrow market definition could lead to both over- and underenforcement, we believe the former currently represents the bigger threat.

Third, overenforcement in the field of generative AI could paradoxically engender the very harms that policymakers are seeking to avert. As we explain in greater detail below, preventing so-called “big tech” firms from competing in AI markets (for example, by threatening competition intervention whenever they forge strategic relationships with AI startups, launch their own generative-AI services, or embed such services in their existing platforms) may thwart an important source of competition and continued innovation. In short, competition in AI markets is important,[2] but trying naïvely to hold incumbent (in adjacent markets) tech firms back, out of misguided fears they will come to dominate the AI space, is likely to do more harm than good. It is essential to acknowledge how little we know about these nascent markets and that the most important priority at the moment is simply to ask the right questions that will lead to sound competition policy.

The comments proceed as follows. Section I debunks the notion that incumbent tech platforms can use their allegedly superior datasets to overthrow competitors in markets for generative AI. Section II discusses how policymakers should approach strategic partnerships among tech incumbents and AI startups. Section III outlines some of the challenges to defining relevant product markets in AI, and suggests how enforcers could navigate the perils of market definition in the nascent, fast-moving world of AI.

I. Anticompetitive Leveraging in AI Markets

Antitrust enforcers have recently expressed concern that incumbent tech platforms may leverage their existing market positions and resources (particularly their vast datasets) to stifle competitive pressure from AI startups. As this section explains, however, these fears appear overblown, as well as underpinned by assumptions about data-network effects that are unlikely to play a meaningful role in generative AI. Instead, the competition interventions that policymakers are contemplating would, paradoxically, remove an important competitive threat to today’s most successful AI providers, thereby reducing overall competition in generative-AI markets.

Subsection A summarizes recent calls for competition intervention in generative-AI markets. Subsection B argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”), including in the context of mergers. Subsection C explains why these effects are unlikely to play a meaningful role in generative-AI markets. Subsection D offers five key takeaways to help policymakers better weigh the tradeoffs inherent to competition-enforcement interventions in generative-AI markets.

A. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position.”[3] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[4]

While some of these claims continue even today (for example, “big data” is a key component of the DOJ Google Search and adtech antitrust suits),[5] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022, as well as the advent of AI image-generation services like Midjourney and Dall-E, have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain both in the early stages of mainstream adoption and in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that purportedly were made during the formative years of Web 2.0.[6] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[7] As Federal Trade Commission (FTC) Chair Lina Khan has put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI.”[8]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Rather than assessing their prior assumptions based on the current technological moment, enforcers’ top priority appears to be figuring out how to rapidly and almost reflexively deploy existing competition tools to address the presumed competitive failures presented by generative AI.[9]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also to confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[10]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in a recent consultation, the European Commission asked: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[11] Unsurprisingly, the FTC has likewise been hypervigilant about the risks ostensibly posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[12]

Recently, at the conference that prompted these comments, Jonathan Kanter, U.S. assistant attorney general for antitrust, claimed that:

We also see structures and trends in AI that should give us pause. AI relies on massive amounts of data and computing power, which can give already dominant firms a substantial advantage. Powerful networks and feedback effects may enable dominant firms to control these new markets, and existing power in the digital economy may create a powerful incentive to control emerging innovations that will not only impact our economy, but the health and well-being of our society and free expression itself.[13]

On an even more hyperbolic note, Andreas Mundt, the head of Germany’s Federal Cartel Office, called AI a “first-class fire accelerator” for anticompetitive behavior and argued it “will make all the problems only worse.”[14] He further argued that “there’s a great danger that we will get an even deeper concentration of digital markets and power increase at various levels, from chips to the front end.”[15] In short, Mundt is one of many policymakers who believes that AI markets will enable incumbent tech firms to further entrench their market positions.

Certainly, it makes sense that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for generative-AI services. After all, it is widely recognized that data is an essential input for generative AI.[16] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google DeepMind’s AlphaGo and Meta’s NLLB-200 have routinely made headlines.[17] Apple and Amazon also have vast experience with AI assistants, and all of these firms deploy AI technologies throughout their platforms.[18]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast troves of data to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[19] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

Moreover, it is important not to neglect the role that open-source models currently play in fostering innovation and competition. As former DOJ Chief Antitrust Economist Susan Athey pointed out in a recent interview, “[the AI industry] may be very concentrated, but if you have two or three high quality — and we have to find out what that means, but high enough quality — open models, then that could be enough to constrain the for-profit LLMs.”[20] Open-source models are important because they allow innovative startups to build upon models already trained on large datasets, thereby entering the market without incurring that initial cost. Nor is there any apparent shortage of open-source models: companies like xAI, Meta, and Google offer their AI models for free.[21]

There are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have focused largely on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

B. Data-Network Effects Theory and Enforcement

Proponents of more extensive intervention by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[22] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[23] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, e.g., that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[24]

But it is important to note the conceptual problems these claims face. Because data can be used to improve products’ quality and/or to subsidize their use, if possessing data constitutes an entry barrier, then any product improvement or price reduction made by an incumbent could be problematic. This is tantamount to arguing that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if competition were treated as a problem, as it would imply that firms should under-compete—i.e., should forgo consumer-welfare enhancements—in order to sustain a greater number of firms in a given market, simply for its own sake.[25]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[26] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[27] The authors ultimately conclude that data-network effects can be of differing magnitudes and have varying effects on firms’ incumbency advantage.[28] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[29]

This is echoed by economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[30] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[31]

This possibility is also implicit in Hagiu and Wright’s paper.[32] Indeed, the authors’ theoretical model rests on an important distinction between “within-user” data advantages (that is, having access to more data about a given user) and “across-user” data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have nonetheless been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[33]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents,” the “Furman Report” created for the UK government cited only two empirical economic studies, and they offer directly contradictory conclusions with respect to the question of the strength of data advantages.[34] The report nevertheless concluded that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[35] and it adopted without reservation what it deemed “convincing” evidence from non-economists that has no apparent empirical basis.[36]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[37]

As a result, the Commission cleared the merger only on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[38] The Commission also appears likely to focus on similar issues in its ongoing investigation of Microsoft’s investment in OpenAI.[39]

Along similar lines, in its complaint to enjoin Meta’s purchase of Within Unlimited—makers of the virtual-reality (VR) fitness app Supernatural—the FTC relied on, among other things, the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[40]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s adtech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[41] Similarly, in its Google Search complaint, the agency argued that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[42]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s 2023 guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[43] Likewise, the UK Competition and Markets Authority warned against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[44]

In short, competition authorities around the globe have taken an increasingly aggressive stance on data-network effects. Among the ways this has manifested is in enforcement decisions based on fears that data collected by one platform might confer decisive competitive advantages in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

C. Data-Incumbency Advantages in Generative AI

Given the assertions detailed in the previous section, it would be reasonable to assume that firms such as Google, Meta, and Amazon should be in pole position to meet the burgeoning demand for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data of which their rivals could only dream when they launched their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[45]

To date, however, this is not how things have unfolded (although it bears noting that these technologies remain in flux and the competitive landscape is susceptible to change). The first significantly successful generative-AI service came neither from Meta—which had been working on chatbots for years and had access to perhaps the world’s largest database of actual chats—nor from Google. Instead, the breakthrough came from a then-little-known firm called OpenAI.

OpenAI’s ChatGPT service currently accounts for an estimated 60% of visits to online AI tools (though reliable numbers are somewhat elusive).[46] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than TikTok, the previous record holder.[47] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[48] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[49] In short, at the time of this writing, ChatGPT appears to be the most popular chatbot. The entry of large players such as Google Bard or Meta AI appears to have had little effect thus far on its leading position.[50]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[51] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[52]

This raises two crucial questions: how have these AI upstarts managed to be so successful, and is their success merely a flash in the pan before the Web 2.0 giants catch up and overtake them? While we cannot answer either of these questions dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it, following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[53]
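The intuition behind diminishing marginal returns to data can be illustrated with a toy simulation of our own (it does not come from the studies cited here, and the two-Gaussian data and nearest-centroid classifier are simplifying assumptions): a simple model is fit on progressively larger training samples, and its held-out accuracy plateaus long before the data runs out.

```python
# Illustrative sketch (ours, not from the cited literature): diminishing
# marginal returns to training data. A nearest-centroid classifier is fit on
# progressively larger synthetic samples; held-out accuracy quickly plateaus.
import numpy as np

rng = np.random.default_rng(0)
DIM = 20
MEAN_1 = np.full(DIM, 0.4)  # class-1 mean; class 0 is centered at the origin

def sample(n):
    """Draw n labeled points from a two-Gaussian mixture."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, DIM)) + y[:, None] * MEAN_1
    return X, y

X_test, y_test = sample(5_000)

accs = []
for n in (50, 500, 5_000, 50_000):
    X, y = sample(n)
    # Estimate each class centroid from the training slice.
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # Classify test points by the nearer centroid.
    pred = (np.linalg.norm(X_test - c1, axis=1)
            < np.linalg.norm(X_test - c0, axis=1)).astype(int)
    acc = (pred == y_test).mean()
    accs.append(acc)
    print(f"{n:>6} training examples -> held-out accuracy {acc:.3f}")
```

In runs of this sketch, the jump from 50 to 500 examples is visible, while the jump from 5,000 to 50,000 is negligible: past a modest threshold, "enough" data performs about as well as vastly more, which is the pattern the empirical literature describes.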

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[54]

In other words, being the firm with the most data appears to be far less important than having enough data. Moreover, this lower bar may be accessible to far more firms than one might initially think possible. Furthermore, obtaining sufficient data could become easier still—that is, the volume of required data could become even smaller—with technological progress. For instance, synthetic data may provide an adequate substitute for real-world data,[55] or may even outperform real-world data.[56] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[57]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, where other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[58]

Consider, for instance, a user who wants to generate an image of a basketball. Using a model trained on an indiscriminate range and number of public photos in which a basketball appears surrounded by copious other image data, the user may end up with an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images could readily yield far superior results.[59] In one important example:

The model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[60]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than maximizing training datasets.[61] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[62] Second, the real challenge to creating innovative AI lies not so much in collecting data, but in creating innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[63]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be those held by large incumbent platforms in other markets. They might instead be data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[64]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data may often confer marginal benefits, there is little evidence that these benefits are ultimately decisive.[65] As a result, incumbent platforms’ access to vast numbers of users and troves of data in their primary markets might only marginally affect their competitiveness in AI markets.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[66] Examples of this abound in digital markets. Google overthrew Yahoo in search, despite initially having access to far fewer users and far less data. Google and Apple overcame Microsoft in the smartphone operating-system market, despite having comparatively tiny ecosystems (at the time) to leverage. TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger user bases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[67] and TikTok’s clever algorithm) appear to have played far more significant roles than the firms’ initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than with what data it did or did not possess. Going forward, OpenAI and its rivals’ relative abilities to offer and monetize compelling use cases by offering custom versions of their generative-AI technologies will arguably play a much larger role than (and contribute to) their ownership of data.[68] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data). For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[69] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to gather and organize the right information is more important than simply owning vast troves of data.[70] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform. Indeed, if data ownership consistently conferred a significant competitive advantage, these new AI firms would not be where they are today.

This does not, of course, mean that data is worthless. Rather, it means that competition authorities should not assume that the mere possession of data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm are premature.

D. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data network effects are not the source of barriers to entry that they are sometimes made out to be. The picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[71]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five primary lessons emerge:

  1. Data can be (very) valuable, but beyond a certain threshold, those benefits tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps those that stem from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, DALL-E, Midjourney, etc.

This suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma,”[72] something about the incumbent platforms’ existing services and capabilities might have been holding them back in this emerging industry. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI industry starts addressing issues of monetization and scale.[73] But it does mean that assumptions about a firm’s market power based primarily on its possession of data are likely to be off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative-AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse those desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms—although nothing is certain at this stage.

Policymakers should beware not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms. This is particularly relevant in the context of merger control. An acquisition (or an “acqui-hire”) by a “big tech” company not only entails, in principle, little risk of harming competition (it is not a horizontal merger),[74] but could also create a stronger competitor to the current market leaders.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), the remedies under contemplation may do more harm than good. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative AI.[75] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[76]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that has led competition authorities to believe data-incumbency advantages are likely to harm competition in generative AI—or even in the data-intensive Web 2.0 markets that preceded it. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

II. Merger Policy and AI

Policymakers have expressed particular concern about the anticompetitive potential of deals wherein AI startups obtain funding from incumbent tech firms, even in cases where these strategic partnerships cannot be considered mergers in the antitrust sense (because there is no control exercised by one firm over the other). To date, there is no evidence to support differentiated scrutiny for mergers involving AI firms or, in general, firms working with information technology. The view that so-called “killer acquisitions,” for instance, pose a significant competition risk in AI markets is not supported by solid evidence.[77] To the contrary, there is reason to believe these acquisitions bolster competition by allowing larger firms to acquire capabilities relevant to innovation, and by increasing incentives to invest for startup founders.[78]

Companies with “deep pockets” that invest in AI startups may provide those firms the resources to compete with prevailing market leaders. Firms like Amazon, Google, Meta, and Microsoft, for instance, have been investing to create their own microchips capable of building AI systems, aiming to be less dependent on Nvidia.[79] The tributaries of this flow of funds could serve to enhance competition at all levels of the AI industry.[80]

A. Existing AI Partnerships Are Unlikely to Be Anticompetitive

Some jurisdictions have also raised concerns regarding recent partnerships between big tech firms and AI “unicorns,”[81] in particular, Amazon’s partnership with Anthropic; Microsoft’s partnership with Mistral AI; and Microsoft’s hiring of former Inflection AI employees (including, notably, founder Mustafa Suleyman) and related arrangements with the company. Publicly available information, however, suggests that these transactions may not warrant merger-control investigation, let alone the heightened scrutiny that comes with potential Phase II proceedings. At the very least, given the AI industry’s competitive landscape, there is little to suggest these transactions merit closer scrutiny than similar deals in other sectors.

Overenforcement in the field of generative AI could paradoxically engender the very harms that policymakers are seeking to avert. Preventing big tech firms from competing in these markets (for example, by threatening competition intervention as soon as they build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading generative-AI firms in check. In short, while competition in AI markets is important,[82] trying naïvely to hold incumbent (in adjacent markets) tech firms back, out of misguided fears they will come to dominate this space, is likely to do more harm than good.

At a more granular level, there are important reasons to believe these kinds of agreements will have no negative impact on competition and may, in fact, benefit consumers—e.g., by enabling those startups to raise capital and deploy their services at an even larger scale. In other words, they do not bear any of the prima facie traits of “killer acquisitions,” or even of the acquisition of “nascent potential competitors.”[83]

Most importantly, these partnerships all involve the acquisition of minority stakes and do not entail any change of control over the target companies. Amazon, for instance, will not have “ownership control” of Anthropic. The precise number of shares acquired has not been made public, but a reported investment of $4 billion in a company valued at $18.4 billion does not give Amazon a majority stake or sufficient voting rights to control the company or its competitive strategy.[84] It has also been reported that the deal will not give Amazon any seats on the Anthropic board or special voting rights (such as the power to veto some decisions).[85] There is thus little reason to believe Amazon has acquired indirect or de facto control over Anthropic.

Microsoft’s investment in Mistral AI is even smaller, in both absolute and relative terms. Microsoft is reportedly investing just $16 million in a company valued at $2.1 billion.[86] This represents less than 1% of Mistral’s equity, making it all but impossible for Microsoft to exert any significant control or influence over Mistral AI’s competitive strategy. There have similarly been no reports of Microsoft acquiring seats on Mistral AI’s board or any special voting rights. We can therefore be confident that the deal will not affect competition in AI markets.

Much the same applies to Microsoft’s dealings with Inflection AI. Microsoft hired two of the company’s three founders (which currently does not fall under the scope of merger laws), and also paid $620 million for nonexclusive rights to sell access to the Inflection AI model through its Azure Cloud.[87] Admittedly, the latter could entail (depending on the deal’s specifics) some limited control over Inflection AI’s competitive strategy, but there is currently no evidence to suggest this will be the case.

Finally, none of these deals entail any competitively significant behavioral commitments from the target companies. There are no reports of exclusivity agreements or other commitments that would restrict third parties’ access to these firms’ underlying AI models. Again, this means the deals are extremely unlikely to negatively impact the competitive landscape in these markets.

B. AI Partnerships Increase Competition

As discussed in the previous section, the AI partnerships that have recently grabbed antitrust headlines are unlikely to harm competition. They do, however, have significant potential to bolster competition in generative-AI markets by enabling new players to scale up rapidly and to challenge more established players by leveraging the resources of incumbent tech platforms.

The fact that AI startups willingly agree to the aforementioned AI partnerships suggests this source of funding presents unique advantages for them, or they would have pursued capital through other avenues. The question for antitrust policymakers is whether this advantage is merely an anticompetitive premium, paid by big tech platforms to secure monopoly rents, or whether the investing firms are bringing something else to the table. As we discussed in the previous section, there is little reason to believe these partnerships are driven by anticompetitive motives. More importantly, however, these deals may present important advantages for AI startups that, in turn, are likely to boost competition in these burgeoning markets.

To start, partnerships with so-called big tech firms are likely a way for AI startups to rapidly obtain equity financing. While this lies beyond our area of expertise, there is ample economic literature to suggest that debt and equity financing are not equivalent for firms.[88] Interestingly for competition policy, there is evidence to suggest firms tend to favor equity over debt financing when they operate in highly competitive product markets.[89]

Furthermore, there may be reasons for AI startups to turn to incumbent big tech platforms to obtain financing, rather than to other partners (though there is evidence these firms are also raising significant amounts of money from other sources).[90] In short, big tech platforms have a longstanding reputation for deep pockets, as well as a healthy appetite for risk. Because of the relatively small amounts at stake—at least, relative to the platforms’ market capitalizations—these firms may be able to move faster than rivals, for whom investments of this sort may present more significant risks. This may be a key advantage in the fast-paced world of generative AI, where obtaining funding and scaling rapidly could be the difference between becoming the next GAFAM or an also-ran.

Partnerships with incumbent tech platforms may also create valuable synergies that enable startups to extract better terms than would otherwise be the case (because the deal creates more surplus for the parties to distribute among themselves). Potential synergies include better integration of generative-AI services into existing platforms. Several big tech platforms appear to see the inevitable integration of AI into their services as a challenge similar to the shift from desktop to mobile internet, which saw several firms thrive while others fell by the wayside.[91]

Conversely, incumbent tech platforms may have existing infrastructure that AI startups can use to scale up faster and more cheaply than would otherwise be the case. Running startups’ generative-AI services on top of this infrastructure may enable much faster deployment of generative-AI technology.[92] Importantly, if these joint strategies entail relationship-specific investments on the part of one or both partners, then big tech platforms taking equity positions in AI startups may be an important facilitator to prevent holdup.[93] Both of these possibilities are perfectly summed up by Swami Sivasubramanian, Amazon’s vice president of Data and AI, when commenting on Amazon’s partnership with Anthropic:

Anthropic’s visionary work with generative AI, most recently the introduction of its state-of-the-art Claude 3 family of models, combined with Amazon’s best-in-class infrastructure like AWS Trainium and managed services like Amazon Bedrock, further unlocks exciting opportunities for customers to quickly, securely, and responsibly innovate with generative AI. Generative AI is poised to be the most transformational technology of our time, and we believe our strategic collaboration with Anthropic will further improve our customers’ experiences, and look forward to what’s next.[94]

All of this can be expected to have a knock-on effect on innovation and competition in generative-AI markets. To put it simply, a leading firm like OpenAI might welcome the prospect of competition authorities blocking the potential funding of one of its rivals. It may also stand to benefit if incumbent tech firms are prevented from rapidly upping their generative-AI game via partnerships with other AI startups. In short, preventing AI startups from obtaining funding from big tech platforms could not only arrest those startups’ growth, but also harm long-term competition in the burgeoning AI industry.

III. Market Definition in AI

The question of market definition, long a cornerstone of antitrust analysis, is of particular importance and complexity in the context of AI. The difficulty in defining relevant markets accurately stems not only from the novelty of AI technologies, but from their inherent heterogeneity and the myriad ways they intersect with existing markets and business models. In short, it is not yet clear how to determine the boundaries of markets for AI-powered products. Traditional approaches to market definition will ultimately provide the correct tools to accomplish this task, but, as we discuss below, we do not yet know the right questions to ask.

Regulators and policymakers must develop a nuanced understanding of AI markets, one that moves beyond broad generalizations and marketing hyperbole to examine the specific characteristics of these emerging technologies and their impacts on various product and service markets.

There are three main things that need to be at the forefront of competition authorities’ minds when they think about market definition in AI products and services. First, they must understand that AI is not a single thing, but a composite category comprising many distinct goods and services. Second, looking beyond the AI marketing hype, they must recognize how the extremely heterogeneous product landscape of “AI” intersects with an equally variegated consumer-demand landscape. Finally, they must acknowledge how little we know about these nascent markets, and that the most important priority at the moment is simply to ask the right questions that will lead to sound competition policy.

A. AI Is Difficult to Define and Not Monolithic

The task of defining AI for the purposes of antitrust analysis is fraught with complexity, stemming from the multifaceted nature of AI technologies and their diverse applications across industries. It is imperative to recognize that AI does not constitute a monolithic entity or a singular market, but rather encompasses a heterogeneous array of technologies, techniques, and applications that defy simplistic categorization.[95]

At its core, the “AI Stack” comprises multiple layers of interrelated yet distinct technological components. At the foundational level, we find specialized hardware such as semiconductors, graphics processing units (GPUs), and tensor processing units (TPUs), as well as other specialized chipsets designed to accelerate the computationally intensive tasks associated with AI. These hardware components, while critical to AI functionality, also serve broader markets beyond AI applications (e.g., crypto and gaming), complicating efforts to delineate clear market boundaries.

The data layer presents another dimension of complexity. AI systems rely on vast quantities of both structured and unstructured data for training and operation.[96] The sourcing, curation, and preparation of this data constitute distinct markets within the AI ecosystem, each with its own competitive dynamics and potential barriers to entry.

Moving up the stack, we encounter the algorithmic layer, where a diverse array of machine-learning techniques—including, but not limited to, supervised learning, unsupervised learning, and reinforcement learning[97]—are employed. These algorithmic approaches, while fundamental to AI functionality, are not uniform in their application or market impact. Different AI applications may utilize distinct combinations of these techniques,[98] potentially serving disparate markets and consumer needs.

At the application level, the heterogeneity of AI becomes most apparent. From natural-language processing and computer vision to predictive analytics and autonomous vehicles, AI technologies manifest in a multitude of forms, each potentially constituting a distinct relevant market for antitrust purposes. Moreover, these AI applications can intersect with and compete against non-AI solutions, further blurring the boundaries of what might be considered an “AI market.”

The deployment models for AI technologies add yet another layer of complexity to the task of defining antitrust-relevant markets. Cloud-based AI services, edge-computing solutions, and on-premises AI deployments may each serve different market segments and face distinct competitive pressures. The ability of firms to make “build or buy” decisions regarding AI capabilities further complicates the delineation of clear market boundaries.[99]

B. Look Beyond the Marketing Hype

The application of antitrust principles to AI markets necessitates a rigorous analytical approach that transcends superficial categorizations and marketing rhetoric. It is imperative for enforcement authorities to eschew preconceived notions and popular narratives surrounding AI, and to focus instead on empirical evidence and careful economic analysis, in order to accurately assess competitive dynamics in AI-adjacent markets.

The allure of AI as a revolutionary technology has led to a proliferation of marketing claims and industry hype[100] that often may obscure the true nature and capabilities of AI systems. This obfuscation presents a significant challenge for antitrust authorities, who must disentangle factual competitive realities from speculative or exaggerated assertions about AI’s market impact. This task is further complicated by the rapid pace of technological advancement in the field, which can render even recent market analyses obsolete.

A particularly pernicious misconception that must be addressed is the notion that AI technologies operate in a competitive vacuum, distinct from and impervious to competition from non-AI alternatives. This perspective risks leading antitrust authorities to define markets too narrowly, potentially overlooking significant competitive constraints from traditional technologies or human-driven services.

Consider, for instance, the domain of natural-language processing. While AI-powered language models have made significant strides in recent years, they often compete directly with human translators, content creators, and customer-service representatives. Similarly, in the realm of data analysis, AI systems may vie for market share not only with other AI solutions, but also with traditional statistical methods and human analysts. Failing to account for these non-AI competitors in market-definition exercises could result in a distorted view of market power and competitive dynamics.

Moreover, the tendency to treat AI as a monolithic entity obscures the reality that many AI-powered products and services are, in fact, hybrid solutions that combine AI components with traditional software and human oversight.[101] This hybridization further complicates market-definition efforts, as it becomes necessary to assess the degree to which the AI element of a product or service contributes to its market position and substitutability.

C. Current Lack of Knowledge About Relevant Markets

It is crucial to acknowledge at this juncture the profound limitations in our current understanding of how AI technologies will ultimately shape competitive landscapes across various industries. This recognition of our informational constraints should inform a cautious and empirically grounded approach to market definition in the context of AI.

The dynamic nature of AI development renders many traditional metrics for market definition potentially unreliable or prematurely restrictive. Market share, often a cornerstone of antitrust analysis, may prove particularly volatile in AI markets, where technological breakthroughs can rapidly alter competitive positions. Moreover, the boundaries between distinct AI applications and markets remain fluid, with innovations in one domain frequently finding unexpected applications in others, and thereby further complicating efforts to delineate stable market boundaries.

In this context, Jonathan Barnett’s observations regarding the dangers of preemptive antitrust approaches in nascent markets are particularly salient.[102] Barnett argues persuasively that, at the early stages of a market’s development, uncertainty concerning the competitive effects of certain business practices is likely to be especially high.[103] This uncertainty engenders a significant risk of false-positive error costs, whereby preemptive intervention may inadvertently suppress practices that are either competitively neutral or potentially procompetitive.[104]

The risk of regulatory overreach is particularly acute in the realm of AI, where the full spectrum of potential applications and competitive dynamics remains largely speculative. Premature market definition and subsequent enforcement actions based on such definitions could stifle innovation and impede the natural evolution of AI technologies and business models.

Further complicating matters is the fact that what constitutes a relevant product in AI markets is often ambiguous and subject to rapid change. The modular nature of many AI systems, where components can be combined and reconfigured to serve diverse functions, challenges traditional notions of product markets. For instance, a foundational language model might serve as a critical input for a wide array of downstream applications, from chatbots to content-generation tools, each potentially constituting a distinct product market. The boundaries between these markets, and the extent to which they overlap or remain distinct, are likely to remain in flux in the near future.

Given these uncertainties, antitrust authorities must adopt a posture of epistemic humility when approaching market definition in the context of AI. This approach of acknowledged uncertainty and adaptive analysis does not imply regulatory paralysis. Rather, it calls for a more nuanced and dynamic form of antitrust oversight, one that remains vigilant to potential competitive harms while avoiding premature or overly rigid market definitions that could impede innovation.

Market definition should reflect our best understanding of both AI and AI markets. Since this understanding is still very much in an incipient phase, antitrust authorities should view their current efforts not as definitive pronouncements on the structure of AI markets, but as iterative steps in an ongoing process of learning and adaptation. By maintaining this perspective, regulators can hope to strike a balance between addressing legitimate competitive concerns and fostering an environment conducive to continued innovation and dynamic competition in the AI sector.

D. Key Questions to Ask

Finally, the most important function for enforcement authorities to play at the moment is to ask the right questions that will help to optimally develop an analytical framework of relevant markets in subsequent competition analyses. This framework should be predicated on a series of inquiries designed to elucidate the true nature of competitive dynamics in AI-adjacent markets. While the specific contours of relevant markets may remain elusive, the process of rigorous questioning can provide valuable insights and guide enforcement decisions.

Two fundamental questions emerge as critical starting points for any attempt to define relevant markets in AI contexts.

First, “Who are the consumers, and what is the product or service?” This seemingly straightforward inquiry belies a complex web of considerations in AI markets. The consumers of AI technologies and services are often not end-users, but rather, intermediaries that participate in complex value chains. For instance, the market for AI chips encompasses not only direct purchasers like cloud-service providers, but also downstream consumers of AI-powered applications. Similarly, the product or service in question may not be a discrete AI technology, but rather a bundle of AI and non-AI components, or even a service powered by AI but indistinguishable to the end user from non-AI alternatives.

The heterogeneity of AI consumers and products necessitates a granular approach to market definition. Antitrust authorities must carefully delineate between different levels of the AI value chain, considering the distinct competitive dynamics at each level. This may involve separate analyses for markets in AI inputs (such as specialized hardware or training data), AI development tools, and AI-powered end-user applications.

Second, and perhaps more crucially, “Does AI fundamentally transform the product or service in a way that creates a distinct market?” This question is at the heart of the challenge in defining AI markets. It requires a nuanced assessment of the degree to which AI capabilities alter the nature of a product or service from the perspective of consumers.

In some cases, AI’s integration into products or services may represent merely an incremental improvement, not warranting the delineation of a separate market. For example, AI-enhanced spell-checking in word-processing software might not constitute a distinct market from traditional spell-checkers if consumers do not perceive a significant functional difference.

Conversely, in other cases, AI may enable entirely new functionalities or levels of performance that create distinct markets. Large language models capable of generating human-like text, for instance, might be considered to operate in a market separate from traditional writing aids or information-retrieval tools (or not, depending on the total costs and benefits of the option).

The analysis must also consider the potential for AI to blur the boundaries between previously distinct markets. As AI systems become more versatile, they may compete across multiple traditional product categories, challenging conventional market definitions.

In addressing these questions, antitrust authorities should consider several additional factors:

  1. The degree of substitutability between AI and non-AI solutions, from the perspective of both direct purchasers and end-users.
  2. The extent to which AI capabilities are perceived as essential or differentiating factors by consumers in the relevant market.
  3. The potential for rapid evolution in AI capabilities and consumer preferences, which may necessitate dynamic market definitions.
  4. The presence of switching costs or lock-in effects, which could influence market boundaries.
  5. The geographic scope of AI markets, which may transcend traditional national or regional boundaries.

It is crucial to note that these questions do not yield simple or static answers. Rather, they serve as analytical tools to guide ongoing assessment of AI markets. Antitrust authorities must be prepared to revisit and refine their market definitions as technological capabilities evolve and market dynamics shift.

Moreover, the process of defining relevant markets in the context of AI should not be viewed as an end in itself, but as a means to understand competitive dynamics and to inform enforcement decisions. In some cases, traditional market-definition exercises may prove insufficient, necessitating alternative analytical approaches that focus on competitive effects or innovation harms.

By embracing this questioning approach, antitrust authorities can develop a more nuanced and adaptable framework for market definition in AI contexts. This approach would acknowledge the complexities and uncertainties inherent in AI markets, while providing a structured methodology to assess competitive dynamics. As our understanding of AI markets deepens, this framework will need to evolve further, ensuring that antitrust enforcement remains responsive to the unique challenges posed by artificial-intelligence technologies.

[1] Press Release, Justice Department and Stanford University to Cohost Workshop “Promoting Competition in Artificial Intelligence”, U.S. Justice Department (May 21, 2024), https://www.justice.gov/opa/pr/justice-department-and-stanford-university-cohost-workshop-promoting-competition-artificial.

[2] Artificial intelligence is, of course, not a market (at least not a relevant antitrust market). Within the realm of what is called “AI,” companies offer myriad products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[3] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[4] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23. (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors.”).

[5] See, generally, Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010- (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[6] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (Jan. 9, 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (Nov. 3, 2023), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[7] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”); Bertin Martins, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[8] See Rana Foroohar, The Great US-Europe Antitrust Divide, Financial Times (Feb. 5, 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14.

[9] See, e.g., Press Release, European Commission, supra note 6.

[10] See infra, Section I.B. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has LLaMa. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[11] Press Release, European Commission, supra note 6.

[12] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023), at 4, https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[13] Jonathan Kanter, Remarks at the Promoting Competition in AI Conference (May 30, 2024), https://youtu.be/yh--1AGf3aU?t=424.

[14] Karin Matussek, AI Will Fuel Antitrust Fires, Big Tech’s German Nemesis Warns, Bloomberg (Jun. 26, 2024), https://www.bloomberg.com/news/articles/2024-06-26/ai-will-fuel-antitrust-fires-big-tech-s-german-nemesis-warns?srnd=technology-vp.

[15] Id.

[16] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (Sep. 15, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[17] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; see, e.g., Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (Dec. 14, 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set; see also, A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research; see also, 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation (last visited Jan. 18, 2024).

[18] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (Oct. 4, 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (Nov. 20, 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; see also, Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (Jul. 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon.

[19] See infra Section I.C.

[20] Josh Sisco, POLITICO PRO Q&A: Exit interview with DOJ Chief Antitrust Economist Susan Athey, Politico Pro (Jul. 2, 2024), https://subscriber.politicopro.com/article/2024/07/politico-pro-q-a-exit-interview-with-doj-chief-antitrust-economist-susan-athey-00166281.

[21] Belle Lin, Open-Source Companies Are Sharing Their AI Free. Can They Crack OpenAI’s Dominance?, Wall St. J. (Mar. 21, 2024), https://www.wsj.com/articles/open-source-companies-are-sharing-their-ai-free-can-they-crack-openais-dominance-26149e9c.

[22] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[23] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233, https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, http://wrap.warwick.ac.uk/134220 (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[24] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[25] See also Yun, supra note 23 at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[26] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects), see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1281, 1344 (2021).

[27] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[28] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms… .”

[29] Id.

[30] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 10080, 101031 (2021).

[31] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Econ. 143, 167 (2020).

[32] See Hagiu & Wright, supra note 27.

[33] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf; see also Manne & Auer, supra note 26, at 1330.

[34] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[35] Id. at 34.

[36] Id. at 35. To its credit, it should be noted, the Furman Report does counsel caution before mandating access to data as a remedy to promote competition. See id. at 75. That said, the Furman Report maintains that such a remedy should remain on the table because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. The evidence, however, does not show this.

[37] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf, at 455.

[38] Id. at 896.

[39] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules.

[40] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[41] Amended Complaint (D.D.C), supra note 5 at ¶37.

[42] Amended Complaint (E.D. Va), supra note 5 at ¶8.

[43] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[44] Merger Assessment Guidelines, Competition and Mkts. Auth (2021) at ¶7.19(e), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[45] Furman Report, supra note 34, at ¶4.

[46] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c; Sujan Sarkar, AI Industry Analysis: 50 Most Visited AI Tools and Their 24B+ Traffic Behavior, Writerbuddy (last visited Jul. 15, 2024), https://writerbuddy.ai/blog/ai-industry-analysis.

[47] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[48] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[49] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (Jun. 5, 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard.

[50] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (Sep. 27, 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023), https://blog.google/technology/ai/bard-google-ai-search-updates.

[51] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023), https://yon.fun/midjourney-statistics; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023), https://approachableai.com/midjourney-statistics.

[52] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023), https://blog.google/products/search/google-search-generative-ai-october-update; Imagine with Meta AI, Meta (last visited Jan. 12, 2024), https://imagine.meta.com.

[53] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[54] Manne & Auer, supra note 26, at 1345.

[55] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[56] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[57] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[58] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan. 15, 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[59] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023), https://arxiv.org/abs/2309.16671.

[60] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[61] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project.

[62] Id.; see also GSM8K, Papers with Code (last visited Jan. 18, 2024), https://paperswithcode.com/dataset/gsm8k; MATH Dataset, GitHub (last visited Jan. 18, 2024), https://github.com/hendrycks/math.

[63] Lee, supra note 61.

[64] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780).

[65] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[66] Or, as John Yun put it, data is only a small component of digital firms’ production function. See Yun, supra note 23, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[67] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History-Computer (Aug. 8, 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly.

[68] Introducing the GPT Store, Open AI (Jan. 10, 2024), https://openai.com/blog/introducing-the-gpt-store.

[69] See Michael Schade, How ChatGPT and Our Language Models Are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (Jan. 1, 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021; Danny Hernandez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020), https://arxiv.org/abs/2005.04305.

[70] See Yun, supra note 23 at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[71] Lerner, supra note 64, at 4-5 (emphasis added).

[72] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[73] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[74] Antitrust merger enforcement has long assumed that horizontal mergers are more likely to cause problems for consumers than vertical mergers. See Geoffrey A. Manne, Dirk Auer, Brian Albrecht, Eric Fruits, Daniel J. Gilman, & Lazar Radic, Comments of the International Center for Law and Economics on the FTC & DOJ Draft Merger Guidelines (Sep. 18, 2023), https://laweconcenter.org/resources/comments-of-the-international-center-for-law-and-economics-on-the-ftc-doj-draft-merger-guidelines.

[75] See Hagiu & Wright, supra note 27, at 27 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”); see also Lerner, supra note 64.

[76] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).

[77] See Jonathan M. Barnett, “Killer Acquisitions” Reexamined: Economic Hyperbole in the Age of Populist Antitrust, 3 U. Chi. Bus. L. Rev. 39 (2023).

[78] Id. at 85 (“At the same time, these transactions enhance competitive conditions by supporting the profit expectations that elicit VC investment in the startups that deliver the most transformative types of innovation to the biopharmaceutical ecosystem (and, in some cases, mature into larger firms that can challenge incumbents).”).

[79] Cade Metz, Karen Weise, & Mike Isaac, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, N.Y. Times (Jan. 29, 2024), https://www.nytimes.com/2024/01/29/technology/ai-chips-nvidia-amazon-google-microsoft-meta.html.

[80] See, e.g., Chris Metinko, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, CrunchBase (Jun. 12, 2024), https://news.crunchbase.com/ai/msft-nvda-lead-big-tech-startup-investment.

[81] CMA Seeks Views on AI Partnerships and Other Arrangements, Competition and Mkts. Auth. (Apr. 24, 2024), https://www.gov.uk/government/news/cma-seeks-views-on-ai-partnerships-and-other-arrangements.

[82] As noted infra, companies offer myriad “AI” products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[83] Start-ups, Killer Acquisitions and Merger Control, OECD (2020), available at https://web-archive.oecd.org/2020-10-16/566931-start-ups-killer-acquisitions-and-merger-control-2020.pdf.

[84] Kate Rooney & Hayden Field, Amazon Spends $2.75 Billion on AI Startup Anthropic in Its Largest Venture Investment Yet, CNBC (Mar. 27, 2024), https://www.cnbc.com/2024/03/27/amazon-spends-2point7b-on-startup-anthropic-in-largest-venture-investment.html.

[85] Id.

[86] Tom Warren, Microsoft Partners with Mistral in Second AI Deal Beyond OpenAI, The Verge (Feb. 26, 2024), https://www.theverge.com/2024/2/26/24083510/microsoft-mistral-partnership-deal-azure-ai.

[87] Mark Sullivan, Microsoft’s Inflection AI Grab Likely Cost More Than $1 Billion, Says An Insider (Exclusive), Fast Company (Mar. 26, 2024), https://www.fastcompany.com/91069182/microsoft-inflection-ai-exclusive; see also, Mustafa Suleyman, DeepMind and Inflection Co-Founder, Joins Microsoft to Lead Copilot, Microsoft Corporate Blogs (Mar. 19, 2024), https://blogs.microsoft.com/blog/2024/03/19/mustafa-suleyman-deepmind-and-inflection-co-founder-joins-microsoft-to-lead-copilot; Krystal Hu & Harshita Mary Varghese, Microsoft Pays Inflection $650 Mln in Licensing Deal While Poaching Top Talent, Source Says, Reuters (Mar. 21, 2024), https://www.reuters.com/technology/microsoft-agreed-pay-inflection-650-mln-while-hiring-its-staff-information-2024-03-21; The New Inflection: An Important Change to How We’ll Work, Inflection (Mar. 19, 2024), https://inflection.ai/the-new-inflection; Julie Bort, Here’s How Microsoft Is Providing a ‘Good Outcome’ for Inflection AI VCs, as Reid Hoffman Promised, Tech Crunch (Mar. 21, 2024), https://techcrunch.com/2024/03/21/microsoft-inflection-ai-investors-reid-hoffman-bill-gates.

[88] See, e.g., Paul Marsh, The Choice Between Equity and Debt: An Empirical Study, 37 The J. of Finance 121, 142 (1982) (“First, it demonstrates that companies are heavily influenced by market conditions and the past history of security prices in choosing between equity and debt. Indeed, these factors appeared to be far more significant in our model than, for example, other variables such as the company’s existing financial structure. Second, this study provides evidence that companies do appear to make their choice of financing instrument as though they had target levels in mind for both the long term debt ratio, and the ratio of short term to total debt. Finally, the results are consistent with the notion that these target levels are themselves functions of company size, bankruptcy risk, and asset composition.”); see also, Armen Hovakimian, Tim Opler, & Sheridan Titman, The Debt-Equity Choice, 36 J. of Financial and Quantitative Analysis 1, 3 (2001) (“Our results suggest that, although pecking order considerations affect corporate debt ratios in the short-run, firms tend to make financing choices that move them toward target debt ratios that are consistent with tradeoff models of capital structure choice. For example, our findings confirm that more profitable firms have, on average, lower leverage ratios. But we also find that more profitable firms are more likely to issue debt rather than equity and are more likely to repurchase equity rather than retire debt. Such behavior is consistent with our conjecture that the most profitable firms become under-levered and that firms’ financing choices tend to offset these earnings-driven changes in their capital structures.”); see also, Sabri Boubaker, Wael Rouatbi, & Walid Saffar, The Role of Multiple Large Shareholders in the Choice of Debt Source, 46 Financial Management 241, 267 (2017) (“Our analysis shows that firms controlled by more than one large shareholder tend to rely more heavily on bank debt financing. Moreover, we find that the proportion of bank debt in total debt is significantly higher for firms with higher contestability of the largest controlling owner’s power.”).

[89] Sabri Boubaker, Walid Saffar, & Syrine Sassi, Product Market Competition and Debt Choice, 49 J. of Corp. Finance 204, 208 (2018) (“Our findings that firms substitute away from bank debt when faced with intense market pressure echo the intuition in previous studies that the disciplinary force of competition substitutes for the need to discipline firms through other forms of governance.”).

[90] See, e.g., George Hammond, Andreessen Horowitz Raises $7.2bn and Sets Sights on AI Start-ups, Financial Times (Apr. 16, 2024), https://www.ft.com/content/fdef2f53-f8f7-4553-866b-1c9bfdbeea42; Elon Musk’s xAI Says It Raised $6 Billion to Develop Artificial Intelligence, Moneywatch (May 27, 2024), https://www.cbsnews.com/news/elon-musk-xai-6-billion; Krystal Hu, AI Search Startup Genspark Raises $60 Million in Seed Round to Challenge Google, Reuters (Jun. 18, 2024), https://www.reuters.com/technology/artificial-intelligence/ai-search-startup-genspark-raises-60-million-seed-round-challenge-google-2024-06-18; Visa to Invest $100 Million in Generative AI for Commerce and Payments, PYMNTS (Oct. 2, 2023), https://www.pymnts.com/artificial-intelligence-2/2023/visa-to-invest-100-million-in-generative-ai-for-commerce-and-payments.

[91] See, e.g., Eze Vidra, Is Generative AI the Biggest Platform Shift Since Cloud and Mobile?, VC Cafe (Mar. 6, 2023), https://www.vccafe.com/2023/03/06/is-generative-ai-the-biggest-platform-shift-since-cloud-and-mobile. See also, OpenAI and Apple Announce Partnership to Integrate ChatGPT into Apple Experiences, OpenAI (Jun. 10, 2024), https://openai.com/index/openai-and-apple-announce-partnership (“Apple is integrating ChatGPT into experiences within iOS, iPadOS, and macOS, allowing users to access ChatGPT’s capabilities—including image and document understanding—without needing to jump between tools.”). See also, Yusuf Mehdi, Reinventing Search With a new AI-powered Microsoft Bing and Edge, Your Copilot for the Web, Microsoft Official Blog (Feb. 7, 2023), https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web (“‘AI will fundamentally change every software category, starting with the largest category of all – search,’ said Satya Nadella, Chairman and CEO, Microsoft. ‘Today, we’re launching Bing and Edge powered by AI copilot and chat, to help people get more from search and the web.’”).

[92] See, e.g., Amazon and Anthropic Deepen Their Shared Commitment to Advancing Generative AI, Amazon (Mar. 27, 2024), https://www.aboutamazon.com/news/company-news/amazon-anthropic-ai-investment (“Global organizations of all sizes, across virtually every industry, are already using Amazon Bedrock to build their generative AI applications with Anthropic’s Claude AI. They include ADP, Amdocs, Bridgewater Associates, Broadridge, CelcomDigi, Clariant, Cloudera, Dana-Farber Cancer Institute, Degas Ltd., Delta Air Lines, Druva, Enverus, Genesys, Genomics England, GoDaddy, HappyFox, Intuit, KT, LivTech, Lonely Planet, LexisNexis Legal & Professional, M1 Finance, Netsmart, Nexxiot, Parsyl, Perplexity AI, Pfizer, the PGA TOUR, Proto Hologram, Ricoh USA, Rocket Companies, and Siemens.”).

[93] Ownership of another firm’s assets is widely seen as a solution to contractual incompleteness. See, e.g., Sanford J. Grossman & Oliver D. Hart, The Costs and Benefits of Ownership: A Theory of Vertical and Lateral Integration, 94 J. Polit. Econ. 691, 716 (1986) (“When it is too costly for one party to specify a long list of the particular rights it desires over another party’s assets, then it may be optimal for the first party to purchase all rights except those specifically mentioned in the contract. Ownership is the purchase of these residual rights of control.”).

[94] See Amazon, supra note 92.

[95] As the National Security Commission on Artificial Intelligence has observed: “AI is not a single technology breakthrough… The race for AI supremacy is not like the space race to the moon. AI is not even comparable to a general-purpose technology like electricity. However, what Thomas Edison said of electricity encapsulates the AI future: ‘It is a field of fields … it holds the secrets which will reorganize the life of the world.’ Edison’s astounding assessment came from humility. All that he discovered was ‘very little in comparison with the possibilities that appear.’” National Security Commission on Artificial Intelligence, Final Report, 7 (2021), available at https://www.dwt.com/-/media/files/blogs/artificial-intelligence-law-advisor/2021/03/nscai-final-report–2021.pdf.

[96] See, e.g., Structured vs Unstructured Data, IBM Cloud Education (Jun. 29, 2021), https://www.ibm.com/think/topics/structured-vs-unstructured-data; Dongdong Zhang, et al., Combining Structured and Unstructured Data for Predictive Models: A Deep Learning Approach, BMC Medical Informatics and Decision Making (Oct. 29, 2020), https://link.springer.com/article/10.1186/s12911-020-01297-6 (describing generally the use of both structured and unstructured data in predictive models for health care).

[97] For a somewhat technical discussion of all three methods, see generally Eric Benhamou, Similarities Between Policy Gradient Methods (PGM) in Reinforcement Learning (RL) and Supervised Learning (SL), SSRN (2019), https://ssrn.com/abstract=3391216.

[98] Id.

[99] For a discussion of the “buy vs build” decisions firms employing AI undertake, see Jonathan M. Barnett, The Case Against Preemptive Antitrust in the Generative Artificial Intelligence Ecosystem, in Artificial Intelligence and Competition Policy (Alden Abbott and Thibault Schrepel eds., 2024), at 3-6.

[100] See, e.g., Melissa Heikkilä & Will Douglas Heaven, What’s Next for AI in 2024, MIT Tech. Rev. (Jan. 4, 2024), https://www.technologyreview.com/2024/01/04/1086046/whats-next-for-ai-in-2024 (noting Runway’s hyping of Gen-2 as a major film-production tool that, to date, still demonstrates serious limitations). LLMs, impressive as they are, have been touted as impending replacements for humans across many job categories, but still demonstrate many serious limitations that may ultimately limit their use cases. See, e.g., Melissa Malec, Large Language Models: Capabilities, Advancements, And Limitations, HatchWorksAI (Jun. 14, 2024), https://hatchworks.com/blog/gen-ai/large-language-models-guide.

[101] See, e.g., Hybrid AI: A Comprehensive Guide to Applications and Use Cases, SoluLab, https://www.solulab.com/hybrid-ai (last visited Jul. 12, 2024); Why Hybrid Intelligence Is the Future of Artificial Intelligence at McKinsey, McKinsey & Co. (Apr. 29, 2022), https://www.mckinsey.com/about-us/new-at-mckinsey-blog/hybrid-intelligence-the-future-of-artificial-intelligence; Vahe Andonians, Harnessing Hybrid Intelligence: Balancing AI Models and Human Expertise for Optimal Performance, Cognaize (Apr. 11, 2023), https://blog.cognaize.com/harnessing-hybrid-intelligence-balancing-ai-models-and-human-expertise-for-optimal-performance; Salesforce Artificial Intelligence, Salesforce, https://www.salesforce.com/artificial-intelligence (last visited Jul. 12, 2024) (combines traditional CRM and algorithms with AI modules); AI Overview, Adobe, https://www.adobe.com/ai/overview.html (last visited Jul. 12, 2024) (Adobe packages generative AI tools into its general graphic-design tools).

[102] Barnett, supra note 99.

[103] Id. at 7-8.

[104] Id.


ICLE Comments on the CMA’s Draft Guidance for the UK’s Digital Markets Competition Regime


I. Introduction: Some Guiding Principles for Reasonable Enforcement of Digital Competition Regulation

We thank the Competition and Markets Authority (CMA) for this invitation to comment on its draft guidance for the digital-markets competition regime.[1] The International Center for Law & Economics (ICLE) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

Reasonable people may disagree about their merits, but digital competition regulations are now the law of the land in many jurisdictions, including the UK. Policymakers in those jurisdictions will thus need to successfully navigate heretofore uncharted territory in order to implement these regulations.

Most digital competition regulations, including the Digital Markets, Competition and Consumers (DMCC) Act, give competition authorities new and far-reaching powers. Ultimately, this affords them far greater discretion to shape digital markets according to what they perceive to be consumers’ best interests. But as a famous pop-culture quote has it, “with great power comes great responsibility”.[2]

The CMA’s acquisition of vast new powers does not mean it should wield them indiscriminately. Because these new powers are so broad, they also have the potential to degrade market conditions for consumers. Thus, the CMA and other enforcers should consider carefully how best to deploy their newfound prerogatives. Enforcers will need time to identify those enforcement practices that yield the best outcomes for consumers. While this will undoubtedly be an iterative process, some overarching regulatory and enforcement principles appear to us to be essential:

  1. Prioritize consumer welfare: Measure success by assessing outcomes for consumers, including price, quality, and innovation;
  2. Establish clear metrics and conduct regular assessments: Design specific, measurable indicators of success, and evaluate outcomes frequently to ensure implementation remains effective and relevant;
  3. Respect platform autonomy: Ensure that firms remain the primary designers of their platforms;
  4. Implement robust procedural safeguards and evidentiary standards: Minimize unintended consequences through sound legal processes and evidence-based decision-making;
  5. Foster innovation and technological progress: Ensure regulations do not stifle innovation, but rather encourage it across the digital ecosystem.

In many respects, the draft guidance already incorporates elements of these principles, and the CMA is to be commended for its thoughtful approach. We discuss these principles in greater detail below, followed by a discussion of areas where the guidance can and should be made to better reflect this approach.

A. Prioritize Consumer Welfare

Consumers’ well-being should be the metric by which digital competition enforcement and compliance are ultimately assessed. As the CMA’s Prioritisation Principles proclaim: “The CMA has a statutory duty to ‘promote competition, both within and outside the UK, for the benefit of consumers.’”[3] It is thus essential that DMCC enforcement ultimately benefits, rather than harms, consumers. In this respect, it will be crucial for the CMA to distinguish conduct that “harms” competitors merely because a firm brings superior products to the market from conduct that harms consumers by distorting competition and foreclosing rivals. Preventing the former would penalize consumers by forcing strategic market status (SMS) firms to degrade their products and by dampening their incentives to continue to improve them.

As we explain throughout our comments, some simple procedural and substantive guardrails could ensure that enforcement ultimately delivers the goods for consumers. For example, the CMA’s guidance should make clear that potential SMS firms are allowed to make the case that increases in size, scope, or popularity are due to competition on the merits, rather than a chronic and entrenched position of market power. By the same token, the CMA should be required to show some degree of causation between consumer harm and potential SMS firms’ insulation from competition.

Favoring light-touch remedies over more intrusive alternatives would reduce the risk that DMCC enforcement leads firms to degrade their platforms in order to comply with its provisions. Other principles that would help to ensure the DMCC remains committed to consumer welfare include granting SMS firms freedom to decide how to achieve outcomes mandated by the conduct requirements, thus leveraging their expertise and know-how, and allowing sufficient time for the effects of remedies to become palpable.

B. Establish Clear Metrics and Conduct Regular Assessments

A second important point is that the deployment of new regulation is a discovery process.[4] Regulators (including the CMA) will need multiple iterations, learning from each as they proceed, in order to craft optimal rules. Indeed, despite some similarities with competition law, the DMCC largely rests on untested rules and procedures. This is not inherently bad or good, but it does increase the scope for enforcement errors that could harm stakeholders, including consumers and small businesses. These errors can largely be avoided by defining clear metrics for success, repeatedly assessing whether they are met, and learning from identified successes and/or failures to improve the legal regime in the future. In short, DMCC enforcement should be dynamic, with repeated reassessments of its effectiveness.

While there is some evidence in the CMA’s draft digital-markets competition regime guidance[5] that these issues are at the forefront of its thinking, there is scope to incorporate more positive feedback loops into the DMCC’s implementation. This includes establishing clear metrics for success; creating processes—such as regulatory sandboxes, experiments, and structured regulation[6]—to test rules and to identify and attribute potential failures; as well as defining procedures that enable the CMA to act on previously unavailable information and change its regulatory stance accordingly.

A look at the European experience with the DMA may prove enlightening in this respect. At the time of writing, European users still cannot directly click on Google Maps locations from the Google search-engine results page.[7] Regimes like the DMCC need processes to identify such failures (ideally before the rules are rolled out to hundreds of millions of users), and then to determine whether they are inherent in the legal regime or whether they amount to noncompliance by firms. Depending on the answer, this may lead the regulator either to open noncompliance proceedings (if firms are to blame) or to rethink implementation (if degraded service is a direct consequence of the rule). This is much easier said than done. But creating processes that facilitate such assessments, and using them to improve rules going forward, is essential to maximize positive outcomes for consumers.

C. Respect Platform Autonomy

A third guiding principle is that SMS firms, rather than regulators or (even more so) competitors, should remain the platforms’ central designers. The basic issue is that the platforms themselves are the parties whose incentives are most (though not perfectly) aligned with those of consumers. Indeed, direct competitors will generally stand to benefit if a platform becomes highly degraded, as this may cause consumers to switch platforms. Similarly, while regulators do not benefit from degrading the services of an SMS firm, they are unlikely to suffer severe repercussions if it occurs.

The same does not hold for platforms. To a first approximation, where consumers are dissatisfied, even a monopoly platform may suffer significant losses. Consumers may switch platforms or reduce their time on the platform, which harms the firm’s bottom line and gives it an incentive to avoid offering a degraded service.

In short, platforms have better—though certainly not perfect—incentives than anyone else to design services that are optimal for users. This does not mean other stakeholders shouldn’t have any input into the scope and shape of remedies and how they are rolled out, but rather that key platform-design decisions should ultimately reside with a platform’s owner.

In practice, this behooves policymakers, including the CMA, to exhibit some deference toward platforms’ product-design philosophy and key product differentiators. For instance, if a platform has built its success on features like a frictionless user interface or data security, then enforcers should favor remedies that preserve these key differentiators, even if this might entail less than optimal competition at the margin. This is simply a recognition that, if a platform has become highly successful by offering certain features to users, there is a high likelihood that users value them, and enforcers should thus attempt to preserve them.

In other words, there may be tradeoffs between increasing competition (or contestability) and certain platform features. The optimal balance is unlikely to be one where no weight is given to platform features.

D. Implement Robust Procedural Safeguards and Evidentiary Standards

Fourth, enforcers should bear in mind the maxim: “first, do no harm”. Indeed, while unintended consequences are largely unavoidable when intervening in complex systems like digital-platform markets, some procedural and evidentiary safeguards can minimize these undesired consequences. In general, these safeguards should guarantee (i) that enforcers intervene only when necessary, and (ii) that, when interventions occur, they are as surgical as possible.

In practice, this means the CMA should ensure that DMCC remedies do not degrade the usability of online services—as has arguably been the case in the EU under the Digital Markets Act (DMA).[8] Among the ways this can be achieved is by granting firms the time (in terms of compliance deadlines) and flexibility (by testing multiple iterations of remedies) to roll out effective remedies. Similarly, the CMA should favor simple remedies that affect only one part of an online platform, rather than more complex remedy packages that could have wider-reaching unintended consequences.[9] A corollary is that enforcement actions are only appropriate when enforcers have a clear sense that remedies would enable markets to function better than the status quo.

In general, enforcers should also be open to the notion that DMCC enforcement could have potentially unintended and undesirable effects on consumers.[10] After all, other digital market regulations—notably, the EU’s General Data Protection Regulation (GDPR)—have been shown to harm innovation and competition.[11] There is no reason to assume the DMCC could not suffer from similar issues if enforcers are not cautious.

Finally, enforcers should intervene only when there is a clear sense that the market is not sufficiently disciplining SMS firms; this, in turn, implies that services should only be designated when there is clear evidence that competition is failing, and that a platform has significant market power. This is why, as explained in Section II.A, it is advisable not to dispense with the definition of relevant markets while enforcing the DMCC, and to have in place a procedure that ensures the best assessment possible of market power. This is a “filter” that would allow the CMA to make efficient use of its resources and reduce both the administrative and error costs of the DMCC, benefitting not only those firms offering digital services and products, but also consumers and society overall.

E. Foster Innovation and Technological Progress

Finally, we have not reached the end of digital history. Online platform markets, including those services designated under the DMCC, could (and likely will) continue to evolve and improve dramatically over the coming decades. This is likely to be especially true as generative-AI technology continues to augment these services.

Ensuring this innovation continues requires that enforcers preserve firms’ incentives to invest in their services. These incentives may sometimes be enhanced by boosting competition, but they also depend on firms (even designated services) being able to earn risk-adjusted returns on their investments.[12] Enforcers should thus be particularly vigilant that DMCC enforcement does not expropriate designated firms, or else their incentives to continue innovating may be severely diminished (and these weakened incentives may have a knock-on effect on rivals’ efforts if innovation is seen as a strategic complement). The upshot is that, pushed to their limits, mandated competition and transfers of rents away from gatekeepers could have dramatic effects on the innovative output of some of the world’s leading innovators.

As we explain throughout the rest of our comments, some simple changes to the current guidance could bring DMCC enforcement further in line with these guiding principles, thereby benefiting society as a whole.

Legitimate concerns were raised when the DMCC (and other digital competition regulation) was passed into law. Indeed, if executed poorly, these regulations have the potential to significantly degrade consumers’ online experience, with little to no benefits to competition.[13] This is arguably what has occurred in the European Union under the DMA. That these regulations are now the law of the land should not obscure such challenges. Instead, these early warning signs suggest it is essential to fine-tune guidance and other policy documents that will drive enforcement of these regulations.

The remainder of these comments proceeds as follows. Section II discusses how strategic market status is assessed under the CMA’s draft guidance. Section III discusses the guidance on conduct requirements. Section IV discusses pro-competition intervention.

II. Strategic Market Status Definition Should Be Based on Solid Economic Evidence and Ensure an Efficient Use of the CMA’s Resources

A platform’s designation as an SMS firm is the first step toward application of the DMCC. Hence, this section of the guidance is of utmost importance to provide economic agents with certainty in designing their business models, contracts, and strategies.

The DMCC sensibly contemplates that a digital-services provider should have “substantial and entrenched market power” and “a position of strategic significance in respect of the digital activity” to be designated as an SMS firm.[14] This is appropriate, because only a firm with substantial market power would be able to impose the kind of harms that are generally relevant to competition law.[15]

Of course, the DMCC also has broader objectives, such as the fair-dealing objective, the open-choices objective, and the trust and transparency objective. But even in those scenarios, a firm without substantial market power would most likely lack the incentive to treat its customers and business users “unfairly”.

This “filter” also promotes the efficient use of the CMA’s resources. Without a requirement of some substantial degree of market power, competition agencies would pursue cases that are not necessarily worth the effort, as the number of citizens or businesses harmed by the alleged anticompetitive or unfair practice would be irrelevant. This would engender many more “false positives” and an over-deterrence effect on economic agents.[16] As Petit and Radic explain, the market-power requirement also filters out claims that would entail mere transfers of surplus, rather than real harms to the competitive process.[17]

The guidance then (S.2.42) specifies that: “The mere holding of market power is not in itself sufficient for an undertaking to meet the first SMS condition which requires that market power is ‘substantial’ and ‘entrenched’,” and that “‘Substantial’ and ‘entrenched’ are distinct elements and each needs to be demonstrated.” This is an important distinction, as any firm may have some market power. As Landes and Posner explained in their seminal article “Market Power in Antitrust Cases”:

[M]arket power must be distinguished from the amount of market power. When the deviation of price from marginal cost is trivial, or simply reflects certain fixed costs, there is no occasion for antitrust concern, even though the firm has market power in our sense of the term.[18]

The guidance then further clarifies, however, that the terms “substantial [and] entrenched … are not entirely separate as the assessment of each will typically draw on a common set of evidence on market power”. While it is fair to assert that the magnitude of market power (substantial or not) and its level of resiliency (entrenched or not) would have to be assessed using similar evidence, the fact that the drafters of the DMCC deliberately chose to include those words in S. 5.20 and to connect them with the word “and” cannot be ignored.

Along those lines, the guidance should establish what it means for market power to be “entrenched”. In turn, this word should mean something different from “substantial”, as it should add some meaning to the section. The concept is neither defined in case law nor codified in the United Kingdom, the EU, or the United States. Both the “Online Platforms and Digital Advertising Market Study Final Report”[19] (at 21) and the “Furman Report”[20] (at 75), however, use the term “entrenched market power” to mean “difficult to remove”. The former, for instance, states that:

Google and Facebook have such entrenched market power as a result of these self-reinforcing entry barriers, that we have concluded that the CMA’s current tools, which allow us to enforce against individual practices and concerns, are not sufficient to protect competition. Further, the markets we have reviewed are fast-moving, and the issues arising within them are wide-ranging, complex and rapidly evolving. Tackling such issues requires an ongoing focus, and the ability to monitor and amend interventions as required.[21]

While these comments do not endorse the findings or conclusions of the abovementioned reports, the CMA may consider them as guidance to define the term “entrenched” and to specify which kind of evidence may be used to substantiate it. Following the logic of said reports, “entrenched” should mean a high degree of market power, which is not only “substantial”, but also hard to dispute. Therefore, the assessment of such quality should involve some long-term evidence of rivals not entering the market (because, for instance, of the existence of regulatory barriers to entry) or at least of a dominant firm with very stable market shares (because entrants can only compete on a small competitive fringe). A recent background note by the Organization for Economic Co-operation and Development (OECD), for instance, acknowledges that “(a)n entrenched market position therefore implies a degree of durability in a dominant position and resistance to changes”.[22]

Therefore, Section 2.52 of the guidance should be revised or eliminated. The section establishes that “where the CMA has found evidence that the firm has substantial market power at the time of the SMS investigation, this will generally support a finding that market power is entrenched”, thereby creating a rebuttable presumption that can be overcome only with “clear and convincing evidence” that such market power is likely to dissipate. As explained in the paragraphs above, the word “entrenched” should add some meaning to the section. The term “entrenched market power” cannot reasonably be construed as being generally the same as “market power”.

The guidance establishes (S.2.43) that “…assessing substantial and entrenched market power does not require the CMA to undertake a formal market definition exercise which often involves drawing arbitrary bright lines indicating which products are ‘in’ and which products are ‘out’.” It would be wise, however, not to disregard the relevant market definition when analyzing the existence of substantial and entrenched market power. While contemporary economists may be open to dispensing with the definition of relevant markets where it is possible to directly infer market power,[23] market definition is helpful not only to measure market power, but also to better identify the competitive process being harmed.[24] As Manne explains:

Particularly where novel conduct or novel markets are involved and thus the relevant economic relationships are poorly understood, market definition is crucial to determine “what the nature of [the relevant] products is, how they are priced and on what terms they are sold, what levers [a firm] can use to increase its profits, and what competitive constraints affect its ability to do so.” In this way market definition not only helps to economize on administrative costs (by cabining the scope of inquiry), it also helps to improve the understanding of the conduct in question and its consequences.[25]

Of course, as the same author warns, it is very important, especially in the case of digital markets, not to define relevant markets too narrowly by looking only to past competition in a static way:

Market definition is inherently retrospective—systematically minimizing where competition is going, and locking even fast-evolving digital competitors into the past. Traditional market definition analysis that infers future substitution possibilities from existing or past market conditions will systematically lead to overly narrow markets and an increased likelihood of erroneous market power determinations. This is the problem of viewing Google as a “search engine” and Amazon as an “online retailer,” for example, and excluding each from the other’s market. In reality, of course, both are competing for scarce user attention (and advertising dollars) in digital environments; the specific functionality they employ in order to do so is a red herring. As such (and as is apparent to virtually everyone but antitrust enforcers and advocates of increased antitrust intervention) they invest significantly in new technology, product designs, and business models because of competitive pressures from each other…

Relatively static market definitions may lead systematically to the erroneous identification of such innovation (or other procompetitive conduct) as anticompetitive. And the benefits of innovation aimed at competing with rivals outside an improperly narrow market, or procompetitive effects conferred on users elsewhere on the platform or in another market, will be relatively, if not completely, neglected.[26]

The guidance takes these considerations into account in Sections 2.47 and 2.48:

2.47 The CMA’s starting point will be market conditions and market power at the time of the SMS investigation. From that starting position, the CMA will consider the potential dynamics of competition over the next five years, taking into account any expected or foreseeable developments that may affect the firm’s conduct in respect of the digital activity if the firm was not to be designated.

2.48 As with any ex ante assessment, there will necessarily be some uncertainty as to the future evolution of a sector. However, such uncertainty does not preclude the CMA from finding substantial and entrenched market power based on the evidence available to it when making its assessment. If post designation developments or new evidence indicate that a firm’s market power has – contrary to the CMA’s expectations in its initial assessment – been significantly diminished, the CMA is able to revisit its previous assessment and can consider whether to revoke the SMS designation.

It is commendable that the guidelines contemplate procedures to continue or revoke an SMS designation and specify that the CMA would undertake ongoing monitoring and early reassessment of relevant digital markets, considering the submissions from economic agents. This is good practice in regulatory governance, given the aforementioned dynamism of digital markets.

At this point, it is relevant to mention that the market definition—or, in any case, the substitutability analysis conducted by the CMA—should consider possible substitution between digital products or services and their offline counterparts. While market definition often involves discussion of specific uses or specific features of a product or service, substitutability should be measured in light of customers’ inclination to switch to other producers of the same product or service, or even to other products, after the introduction of a “small but significant and non-transitory increase in price” (SSNIP).[27] Irrespective of the product’s nature, if customers switch, there is an alternative to the hypothetical monopolist’s product that disciplines any potential exercise of market power.
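The arithmetic behind the SSNIP test can be sketched with the textbook “critical loss” benchmark (our illustrative example; the formula and all figures below are standard assumptions from the industrial-organization literature, not drawn from the guidance or the DMCC): a price rise of t is profitable only if the fraction of sales lost falls below t/(t+m), where m is the price-cost margin.

```python
def critical_loss(ssnip: float, margin: float) -> float:
    """Largest fraction of unit sales a hypothetical monopolist can lose
    before a price increase of `ssnip` becomes unprofitable.
    Textbook critical-loss formula: t / (t + m)."""
    return ssnip / (ssnip + margin)

# Illustrative assumptions only: a 5% SSNIP and a 40% price-cost margin.
cl = critical_loss(0.05, 0.40)
print(round(cl, 3))  # prints 0.111
```

On these assumed numbers, if more than roughly 11% of customers would switch away (including to offline alternatives), the price rise is unprofitable and the candidate market has been drawn too narrowly.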

There is evidence, for instance, that Amazon has faced robust competition from retail stores like Walmart.[28] In Mexico, there is empirical evidence that Amazon not only competes, but competes intensively, with other distribution channels, and that it has a net-positive welfare effect on Mexican consumers. A 2022 paper found that “e-commerce and brick-and-mortar retailers in Mexico operate in a single, highly competitive retail market”; and that “Amazon’s entry has generated a significant pro-competitive effect by reducing brick-and-mortar retail prices and increasing product selection for Mexican consumers”.[29]

The guidance clarifies in Section 2.45 that:

Substantial and entrenched market power is a distinct legal concept from that of ‘dominance’ used in competition law enforcement cases, reflecting the fact that the digital markets competition regime is a new framework with a different purpose. As a result, the CMA will not typically seek to draw on case law relating to the assessment of dominance when undertaking an SMS assessment.

This wording suggests that the CMA could set a lower standard than that required to infer dominance in the application of competition law. While the DMCC has a different purpose than the Competition Act 1998 and the Enterprise Act 2002, it cannot be ignored that the DMCC is concerned with the regulation of competition in digital markets,[30] and that it confers power on the CMA “to promote competition where it considers that activities of a designated undertaking are having an adverse effect on competition”.[31] Moreover, by using terms like “market power” (which, in turn, has to be “substantial”), the DMCC’s text allows us to infer that the bar should be set, at least, at “dominance” (if not higher, if we consider that the market power must also be “entrenched”).

The DMCC, in other words, speaks the language of competition law, and competition law tends to equate the concept of dominance with “substantial market power”. As Whish explains:

Paragraph 65 of the Court’s judgment in United Brands can be understood to equate dominance with the economist’s definition of substantial market power; the Commission does so in paragraph 10 of its Guidance on Article 102 Enforcement Priorities where it says that the notion of independence referred to by the Court is related to the degree of competitive constraint exerted on the undertaking under investigation. Where competitive constraints are ineffective, the undertaking in question enjoys ‘substantial market power over a period of time’; the Guidance says that an undertaking has substantial market power if it is ‘capable of profitably increasing prices above the competitive level for a significant period of time’.[32] (emphasis added).

The 2004 Office of Fair Trading Guidelines on Abuse of a Dominant Position, in the same vein, states that “(a)n undertaking will not be dominant unless it has substantial market power”.[33]

Market power must be assessed case-by-case. Therefore, it is only reasonable that the CMA shouldn’t be constrained by past specific findings of dominance (or findings that there was no dominance). Still, there is no reason to disregard the criteria applied in competition caselaw to assess the dominance of a given economic agent. Such criteria would bring consistency to the CMA’s actions, more predictability to economic agents, and, therefore, more legitimacy to the DMCC.

The guidance also deals with the concept of “a position of strategic significance” of an SMS firm. In that regard, it follows to a great extent the definitions included in the DMCC, establishing that a firm has strategic significance if it has “achieved a position of significant size or scale in respect of the digital activity” and “(a) significant number of other firms use the digital activity as carried out by the firm in carrying on their business”.[34] It does not, however, offer clear guidance, as the following section establishes that “(t)here is no quantitative threshold for when size or scale of the potential SMS firm can be considered as significant, and this may be assessed in terms of the firm’s absolute position and/or relative to other relevant firms”.[35]

Like the concepts of “substantiality” and “entrenchment”, the concept of “strategic significance” should mean something different and additional to “ordinary” market power. Otherwise, we can assume that the DMCC’s drafters would not have included it in its Section 2. Several of the laws and regulations addressing digital markets target firms’ size, scalability, or “strategic significance”. But many investments, business practices, and innovations that benefit consumers—either immediately or over the long term—may also enhance a company’s size, scale, or “strategic significance”. Some of these are possible because of a company’s size. In that vein, targeting size or conduct that bolsters market power, without any accompanying evidence of harm, creates a serious danger of broad inhibition of research, innovation, and investment—all to the detriment of consumers.

Finally, regarding the evidence considered to assess market power, the guidance (Section 2.49) mentions that it may include “a firm’s internal documents, business forecasts, or industry reports”. Paragraphs 2.63 to 2.67 then describe how the CMA may assess such evidence. These sections establish, in general, that the CMA does not have a prescriptive list of evidence, and that the standard of proof will be the “balance of probabilities”. This is correct and consistent with good procedural practice.

This is also why the CMA should be cautious not to rely too heavily on internal business documents to prove anticompetitive behaviour or “dominance”. As Manne and Williamson explain, business documents “are written by business people, for business purposes, and their translation from business to law (and economics) is frequently untenable”.[36] Salespeople, for instance, have strong incentives to communicate to internal stakeholders their efforts to beat competitors and their results, often overstating them. These communications can be mistakenly construed as evidence of “anticompetitive conduct”.

III. Conduct Requirements

Along with pro-competition interventions, discussed in the next section, the CMA’s other primary tool to achieve the DMCC’s goals of “fair dealing”, “transparency”, and “open choices” will be conduct requirements.[37] The guidance generally adopts a reasonable and balanced approach to such requirements, which suggests that the CMA is committed to achieving the DMCC’s goals without unduly burdening SMS firms.

While the CMA should be commended for putting the interests of consumers first and acknowledging the possibility that conduct requirements might not always pan out as expected, the guidance does not always draw a sufficiently clear distinction between the interests of third parties and consumers. To avoid stifling procompetitive conduct, the guidance should explicitly acknowledge that these groups’ interests may not always align. Where they conflict, consumers’ interests must take precedence over those of business users—including, of course, competitors. This is important to ensure that the DMCC is used to bolster competition to the ultimate benefit of consumers, and not as a rent-seeking tool for self-interested third parties.

In addition to this overarching observation, we offer other thoughts on how to improve the guidance’s conduct-requirement provisions. In particular, we think some key terms and concepts could use further clarification; that the CMA should be patient in evaluating measures taken by SMS firms to comply with conduct requirements; and that the CMA should be realistic about its ability to anticipate the effects of complex conduct requirements and, in particular, the interaction of several conduct requirements applying simultaneously. We also appreciate the use of examples and encourage the CMA to provide more such examples when possible.

A primary challenge of ex-ante competition rules is the indeterminacy of some core concepts used to establish the need for prohibitions and obligations to address gatekeeper power. The CMA’s guidance makes important inroads in the direction of much-needed clarity by demonstrating what inherently vague concepts, such as “fairness”, mean in practice. Some key DMCC concepts, however, could benefit from further clarification. For instance, when will the CMA consider that market power has increased “materially”? (S.20(3)(c)). Does any increase in market power count toward satisfying the materiality criterion, or does the increase have to be of a certain magnitude? If so, how much? While a definitive answer likely cannot be given a priori, it would be useful for the CMA to offer more guidance on the factors that will be considered when assessing materiality. Some examples would also be useful to advance legal certainty.

The guidance generally recognizes the importance of protecting consumer welfare and preserving SMS firms’ incentives to continue to innovate and reap the rewards of their business acumen, foresight, and innovation (see, e.g., Points 3.7, 3.22, and 3.23). The guidance is also cognizant of the possibility of unintended consequences, which suggests that the CMA is realistic about the DMCC’s potential to promote—but also potentially to distort—competition, if conduct requirements are poorly designed (see, for instance, Points 3.26 and 3.28). This is to be applauded, as no regulation is without risks and tradeoffs.[38]

In keeping with this sound approach, the CMA should make clear that not every type of conduct that might strengthen a company’s SMS position justifies imposing conduct requirements. According to S.20(3)(c) DMCC:

Carrying on activities other than the relevant digital activity in a way that is likely to increase the undertaking’s market power materially, or bolster the strategic significance of its position, in relation to the relevant digital activity. (emphasis added).

As the DMCC indicates, strategic significance can arise from increased scale, size,[39] and popularity,[40] among other factors. Increased size, scale, and popularity can, however, also be the result of increased efficiency or superior products and services. In other words, companies, including those that render “digital activities” as defined by the DMCC,[41] can also gain size, scale, and popularity by competing on the merits, not simply by thwarting competition. In a recent interview about competition reform, Aaron Wudrick, senior fellow and director of the Macdonald-Laurier Institute’s Domestic Policy Program, noted:

Say you have one competitor, in particular, offering lower prices, higher quality, or newer cutting-edge products, so they end up breaking from the pack. They gain customers, and their market share rises. So this higher concentration is actually signaling more, rather than less, competition![42]

Wudrick was advising against using concentration measures alone—as opposed to market power—as a proxy for the level of competition in a given market. The DMCC does not dispense with the market-power requirement, which is generally a good thing.[43] But like concentration, some measures of SMS status—such as size, scale, and popularity—could be equivocal or might point to vigorous competition, rather than the absence thereof.

Sound competition regulation should seek to encourage, not castigate, procompetitive conduct that rewards companies with size, scale, and popularity. Furthermore, so long as entry into the market is possible, size, scale and network effects can yield further procompetitive benefits, thus creating a virtuous cycle. It is therefore important for the guidance to draw a clear line between conduct that merely entrenches market power and conduct that increases sales or traffic as a result of competition on the merits—including competition along the consumer-valued dimensions of efficiency, quality, or convenience.

Just as in competition law, the primary criterion here should be whether a certain conduct has negative, neutral, or positive effects for consumers. Where increases in a firm’s size, scale, or sales revenue (or traffic, as appropriate) are accompanied by cognizable consumer benefits (e.g., lower prices, better quality, choice, or curation), the CMA should generally conclude that such growth is the result of competition on the merits. By contrast, an increase in a firm’s size, scale, or sales that coincides with a long-term decline in consumer benefits would be a prima-facie indication that the company is using its position to entrench its market power, and that it may therefore be appropriately labelled an SMS firm. Where increases in scale, size, or popularity are not accompanied by any appreciable effects on consumers—positive or negative—the CMA should defer to consumer choice and to companies’ freedom to experiment, reorganize, redesign and, in general, run their enterprise as they see fit.

In any case, the guidance should make clear that potential SMS firms may make the case that any increases in size, scale, or popularity are due to competition on the merits, rather than a chronic and entrenched position of market power. By the same token, the CMA should be required to show some degree of causation between consumer harm and a potential SMS firm’s insulation from competition.

For instance, the CMA should be clear about when tying is procompetitive, such as when consumers benefit from increased convenience or when two products or services create synergies when combined. The CMA should clarify how it will interpret S.20(3)(c), which prohibits SMS firms not only from requiring, but also from incentivising, “users or potential users of one of the designated undertaking’s products to use one or more of the undertaking’s other products alongside services or digital content the provision of which is, or is comprised in, the relevant digital activity”. Read literally, this would prohibit any combination of services that comprise one or several digital activities.

Consumers, however, often appreciate and benefit from integrated products and services—such as the seamless integration of Google Search and Google Maps. In fact, following the DMA’s entry into force in the EU, many users have complained that they can no longer access Google Maps from Google Search.[44] Further, tying could reduce consumers’ search costs and improve functionality by integrating complementary products that work better together.[45] The guidance should clarify that the CMA does not intend to throw the proverbial baby out with the bathwater.

S.20(3)(c) allows the CMA to impose conduct requirements that capture non-designated digital activities for the purpose of preventing a material increase in the SMS firm’s market power or strategic significance in relation to the designated digital activity. As Point 3.13 of the guidance explains:

This would include requirements to prevent the SMS firm from carrying out non-designated activities in a way that is likely to reinforce or embed such market power and/or position of strategic significance.

As indicated in our comment on Point 3.7 of the guidance, however, strategic significance can also result from procompetitive conduct, such as improved efficiency, quality, or innovation. An expansive reading of S.20(3)(c) would prohibit conduct on any market in which the SMS company is active that resulted in or was (according to the CMA) likely to result in an increase in size, scale, or popularity. We fear that this reading is not only overly broad, but risks capturing swathes of procompetitive conduct in markets that are not even covered by the DMCC.

The guidance could, at a minimum, give some sense of the sort of nondigital activities that could be affected by S.20(3)(c)—for instance, through non-exhaustive but illustrative examples (examples are given elsewhere, e.g., at Points 3.8, 3.14, and 3.15). We believe this is crucial for the sake of legal certainty, as well as to ensure that the DMCC’s scope remains cabined within its natural and legally prescribed limits, thereby reducing the likelihood of regulatory overreach.

In general, the CMA should be clear that the DMCC’s goal is to protect competition and consumers, not to help competitors. To a large extent, the guidance achieves this, and should be commended for doing so (see, e.g., Point 3.10). In Point 3.22, the guidance states that:

The factors that informed the CMA’s decision to designate a firm as having SMS in respect of a relevant digital activity, including its size, market power, and strategic significance, will often be highly relevant in identifying issues that could cause harm to businesses or consumers which the CMA may wish to remedy, mitigate or prevent through the imposition of CRs. (emphasis added).

This might suggest that harms to businesses and to consumers are treated equally under the DMCC, which we strongly advise against (see our comments to Point 3.7 above). In the next point, however, the guidance clarifies that “in considering what a [conduct requirement] or combination of CRs is intended to achieve, the CMA will have regard in particular to achieving benefits for consumers”. This is the right approach, and a welcome clarification.

As indicated in our response to Point 3.7, however, the CMA should be clear that there may be times when the interests of competing businesses or business users are not equivalent to the interests of consumers. The guidance’s indication that conduct requirements might benefit consumers either directly or indirectly by giving rise to benefits to businesses that are likely to be passed on to consumers should be tempered by acknowledging that some benefits might not be passed down to consumers at all and, more generally, that not everything that harms or benefits competitors will necessarily have the same effect on consumers. This is important to ensure that the DMCC is used to benefit consumers, and not as a rent-seeking tool by self-interested (and, often, less-successful) businesses. We therefore suggest that the guidance explicitly incorporate examples of situations where certain behavior by SMS firms harms business users or competitors but benefits consumers (and vice versa).

It is good that, as in Point 3.26, the CMA is aware of the need to ensure consistency and coherence in designing and implementing conduct requirements, especially given the range of products and services that are encompassed under “digital activities”. Indeed, the “digital activity” blanket term is misleading. “Digital activities” are anything but monolithic. They cover a range of products and services with little in common, except that they are provided via the internet and involve some sort of digital content.[46]

Furthermore, the companies that render such services are also vastly different. For example, some, like Amazon, are primarily logistics operators, while others, like Apple, are primarily hardware companies. In other words, given that SMS firms and their products are anything but homogenous, achieving coherent and consistent outcomes might require the CMA to impose different conduct requirements on different companies for the same digital activity.

Our (somewhat belated) point here is that the CMA should be commended for showing an awareness that achieving coherence and consistency under DMCC is an important, albeit complex, task. To ensure that coherence and consistency remain a top priority—in theory as well as in practice—the guidance could spend more time elaborating how the CMA intends to design conduct requirements such that different products, rendered by different companies, achieve the same goals.

The guidance states that, whenever possible, SMS firms will be free to decide how to achieve outcomes mandated by the conduct requirements (see, e.g., Principles 1 and 3). This is the correct approach, as it allows SMS firms sufficient flexibility to leverage their expertise and know-how in designing solutions that do not undermine the core benefits of their products and services, while allowing the CMA to monitor firms’ alignment with the DMCC’s goals. In a similar vein, it is also commendable that the CMA is willing to impose higher-level requirements before escalating “the enforcement pyramid” toward more stringent and detailed conduct requirements (Principle 4). The opposite approach would be unjustified, and more apt to lead to unintended consequences. It could also foster ill will and distrust between the regulator and the regulated companies, which could negatively affect the DMCC’s effectiveness over the long term.

With reference to Point 3.28, it is unclear what timescale the CMA will consider when assessing whether a conduct requirement or combination of conduct requirements is likely to be effective in achieving its intended aim or aims. To ensure legal certainty and compliance, however, the guidance should provide some sense of how soon the CMA expects a conduct requirement to start producing the desired results. Or, put differently, when will the CMA consider that a conduct requirement has succeeded or failed? Understandably, this may vary from case to case, but the CMA should at least provide general timescales, along with an explanation and, if possible, examples.

Our view is that, in establishing a timescale, the CMA should be patient and allow a reasonable period for the results of changes made pursuant to the conduct requirements to become palpable. For example, if the CMA requires an SMS company to allow third-party app stores on its operating system, it might take some time before consumers start using those alternative app stores. Thus, if the market shares of competing app stores do not immediately surge following the implementation of changes, the CMA should not be too quick to assume that the SMS firm has not complied with its obligations under the DMCC or has “complied maliciously”.[47] It could be that consumers need more time to get acquainted with the new options, or that they ultimately prefer to stick with the first-party app store. It would be useful to underscore this patience in the guidance, as it would provide clarity to SMS firms and help manage the expectations of business users.

On a separate note, the CMA should be commended for considering effects on consumers and taking into account the risk of unintended consequences when assessing whether a conduct requirement would be effective in achieving its aims. As we have argued throughout these comments, the CMA should ensure that it does not lose sight of the DMCC regime’s effects on consumers and that it remain vigilant to the possibility of unintended consequences with every intervention.

In Point 3.29, the guidance states that the CMA will seek to ensure that a conduct requirement or combination of conduct requirements is coherent with conduct requirements imposed on the same or different SMS firms. It also states that the CMA may consider, as appropriate, coherence with other interventions imposed elsewhere within the scope of the authority’s powers. Ensuring coherence generally signals the right approach, but it is easier said than done (see also our comments on Point 3.26).

Conduct requirements are likely to involve complex product-design changes. They are also, by definition, forward-looking, requiring the CMA to anticipate likely outcomes from the confluence of multiple codependent factors. To minimize unintended consequences and error costs, the CMA should start with simpler, individual conduct requirements, rather than complex, combined conduct requirements. During the early stages of the DMCC, in particular, it is risky to start with combinations of conduct requirements, as such requirements might behave differently together than they do individually.

Furthermore, individual conduct requirements make it easier to observe the relationship between the independent variable (the conduct requirement) and the dependent variable (the market outcome sought). Only once the CMA has significant experience with individual conduct requirements should it start tinkering with combinations. Obviously, some combinations of conduct requirements (such as conduct requirements aimed at different SMS firms rendering the same digital activity) are inevitable, but we do not advise the CMA to be overly ambitious until it has developed substantial expertise. A commitment to this piecemeal and cautious approach could perhaps be incorporated into the guidance.

When assessing the proportionality of conduct requirements, the guidance does well to consider the likely positive and negative effects on SMS firms (Point 3.30). The DMCC should not seek to punish SMS firms or undercut their incentives to keep investing in products and services. It is important that conduct requirements do not disproportionately encumber SMS firms or impose unnecessary requirements.

When gathering information before imposing a conduct requirement, the guidance states that the CMA will consider information from a range of sources, including responses to invitations to comment, market-monitoring mechanisms, or market studies (Point 3.38). This is good: the CMA should not overly rely on information and complaints submitted by business users and third parties (especially competitors), who may have vested interests that do not align with those of consumers or the DMCC’s public-interest objectives. Moreover, as some have pointed out, business users face a “Statler and Waldorf problem”, as they have an interest in never being satisfied and always seeking to extract more concessions from the regulated companies.[48]

Generally, the CMA should be commended for its willingness to give SMS firms flexibility in responding to conduct requirements, even in ways that differ from its interpretative note (see, for example, Point 3.55). In doing so, the guidance recognizes that there may be more than one valid way to interpret a conduct requirement.

We also salute the fact that the guidance displays a willingness to grant SMS firms sufficient time to implement the necessary technical or business changes (see Points 3.61-62). As noted throughout these comments, redesigning products or business practices is costly and time-consuming, and the CMA does well to manage expectations regarding how quickly these things can be achieved.

Furthermore, the CMA displays a generally cordial disposition to SMS firms, rather than an antagonistic one. In a future where the CMA is likely to interact repeatedly and work closely with SMS firms, fostering goodwill and trust between the regulator and the regulated is crucial.

IV. Pro-Competition Interventions

Section 44 of the DMCC grants the CMA powers to make pro-competitive interventions (PCIs). How the CMA deploys these powers will be one of the factors that most determine whether the DMCC achieves its ambitions. The DMCC affords the CMA great discretion to design and enforce PCIs, making them something of a double-edged sword. In the best-case scenario, PCIs could be used to swiftly obtain light-touch remedies from SMS firms, while benefiting consumers and other stakeholders. If used heavy-handedly, however, PCIs have the potential to degrade online platforms, while dragging the CMA into lengthy legal disputes. In other words, PCIs’ greatest potential lies in their use as a surgical tool, not a sledgehammer.

The CMA’s guidance conveys reassuring signals that it will seek to use PCIs even-handedly. For instance, Section 4.12 of the guidance lists a series of indicators the CMA will consider when determining whether a practice has an adverse effect on competition (AEC).[49] To some extent, this mimics the sort of fact-intensive inquiry that firms have come to expect under competition rules. The CMA’s commitment to account for potential efficiencies when investigating potential AECs is also highly commendable.[50]

In that vein, a good additional procedural safeguard to include in the guidance would be for the CMA to make at least a preliminary assessment of the prospective PCI before initiating any PCI procedure. If a competition agency does not have a very good idea how to implement a remedy that would allow the market to function reasonably, and better than the status quo, then it probably is not a good use of resources to initiate a procedure that may affect business models and practices that we know benefit consumers.[51]

Another positive note concerns the CMA’s acknowledgement that PCIs can fail. According to the guidance, this can happen when a PCI fails to increase competition in the intended way or, crucially, because the PCI degrades an SMS firm’s platform to such an extent that consumers are left worse off than if no PCI had been imposed (the latter is an important recognition that other regulators often fail to acknowledge). Indeed, as the draft guidance explains:

The CMA will have regard to a range of factors, including: (a) the PCI’s likely impact on the AEC and, in addition, any detrimental effects, either already arising or expected to arise from it… (c) the risk of the PCI not meeting its intended purpose and/or giving rise to unintended consequences.[52]

The CMA’s proposed trialing and testing of PCIs is, in that respect, a welcome addition. If carefully implemented, this should enable the authority to avoid some of the pitfalls that foreign enforcers, such as the European Commission, have encountered when attempting to enforce digital competition regulations. Following the entry into force of the DMA, gatekeepers have, for instance, been forced to degrade their platforms for European users—mostly because the DMA did not provide sufficient timeframes or legal sandboxes for gatekeepers to market test their compliance solutions.[53]

Despite these reassuring statements, there are several areas where we believe the CMA’s guidance could be amended to provide further clarity to firms and better safeguards against the potential unintended effects of DMCC compliance.

For a start, while the CMA understandably wants to leave all remedial options on the table, some additional clarity concerning the respective roles of behavioral and structural remedies would be welcome. There is, indeed, a sense that structural remedies are far more invasive than behavioral ones; as the CMA notes, they will often amount to the forced sale of a highly successful line of business that an SMS firm may have invested billions of pounds to create or acquire. Structural remedies may also be much harder to implement when an online platform’s distinct services are built upon common infrastructure, such as code, that cannot be easily divided.

The guidance appears implicitly to recognize this much. Many of the procedural safeguards outlined in the CMA’s draft guidance are, indeed, impossible to apply to structural remedies. Divestitures cannot, by definition, be market tested, replaced, or revoked.[54] This makes them inherently less compatible with the spirit of the draft guidance than behavioral ones—which, again by definition, are more amenable to these procedural protections.

Given this, we believe a commitment by the CMA to use structural remedies only in exceptional circumstances would have a beneficial impact on SMS firms that may be considering whether to launch new services in the UK (or continue offering them), as they would be assured that the “nuclear option” is a last resort.

Along similar lines, there is also a sense that the CMA should, when possible, favor simple remedies (such as cease-and-desist orders) rather than more complex ones that entail deep product-design changes. Doing so would minimize the risk of unintended consequences and error costs. This is especially true during the early stages of DMCC implementation. Combinations of remedies might have collective effects that are greater than the sum of their parts.

It would also be easier to infer the cause of unintended consequences in the case of individual (rather than combined) remedies. Initially favoring simple remedies will enable the CMA to “learn by doing” by establishing clearer links between conduct requirements and observable outcomes. As it gains enforcement experience, it will be better-positioned to design more intricate remedy packages.

This leads us to a second important consideration. While the CMA’s proposed testing, trialing, replacement, and revocation of pro-competitive orders (PCOs) is commendable, we regret that some of these procedural safeguards appear to be merely optional under the guidance:

4.65 The CMA may include specific provisions within a PCO imposing requirements to test and trial different remedies or remedy design options (on a time limited basis) before imposing any PCI on an enduring basis….[55]

This may seem like a detail, but a firmer commitment to systematically trialing new PCOs before they are introduced would signal a desire to protect consumers from unintended negative effects of remedies. It would also give firms more leeway to experiment and identify those compliance solutions that reach the best tradeoff between the sometimes-diverging interests of consumers, competition, and the SMS firms themselves. In other words, trialing remedies is a sign of regulatory humility in the face of complex digital markets.

Third, the guidance seems to underestimate the difficulty of assessing some of the metrics on which it relies. This is notably the case of Section 4.12, which explains that the CMA will consider whether “SMS firms’ profits reflect a reasonable rate of return based on the nature of competition” or “the competitive positions of SMS firms and their rivals are based on the merits of their respective offerings”.[56] Assessing these factors is much easier said than done.

For example, determining whether profits reflect a “reasonable rate of return” amounts to asking what rate of return the firm would earn absent some anticompetitive conduct. This, in turn, requires a robust counterfactual analysis, such as comparative studies of prices for similar products in other jurisdictions. This is no easy task. Yet the error costs entailed are significant, as overenforcement could diminish the very price signals on which the competitive process relies. The problem is compounded in fast-moving digital markets, where any assessment of a “reasonable rate of return” is likely to quickly become outdated.

The guidance should therefore detail how the CMA intends to calculate a “reasonable rate of return”, and how it will weigh various factors to determine whether an SMS firm’s competitive position is based on competitive merits or on entrenched market power.

Finally, and along similar lines, we believe the CMA’s openness to replacing or revoking PCOs based on evidence that they do not sufficiently promote competition should be explicitly complemented by a mirror-image provision that enables replacement or revocation on the basis of evidence that (i) competition has become sufficiently robust to discipline SMS firms, or (ii) a given PCO’s costs outweigh its benefits.

Explicitly contemplating these scenarios in the guidance would ensure that consumer welfare is ultimately the metric by which DMCC remedies are evaluated. Indeed, there is mounting evidence that DMA remedies in the European Union may not be achieving their stated ambitions because they unintentionally degrade the products of online platforms.[57] At the time of writing, it is still not possible to click through to a Google Maps location from Google Search. DMA enforcement has also significantly reduced traffic to hotel websites.[58] These unintended consequences are clear evidence that, for the good of consumers, enforcers must contemplate the possibility that a remedy does more harm than good. In doing so, the CMA would exhibit a humility that has, to date, been absent in other jurisdictions enforcing similar regulations.

The upshot is that the CMA’s guidance on PCIs is a step in the right direction. It shows a regulator willing to contemplate the possibility of regulatory failure when dealing with the highly complex world of digital-platform markets. Certain aspects of the guidance could, however, be further clarified to reinforce the CMA’s commitment to even-handed policymaking.

[1] Consultation on Digital Markets Competition Regime Guidance, Competition and Markets Authority (24 May 2024), https://www.gov.uk/government/consultations/consultation-on-digital-markets-competition-regime-guidance.

[2] Spider-Man (Sony Pictures 2002).

[3] CMA Prioritisation Principles, Competition and Markets Authority (Oct. 30, 2023), https://www.gov.uk/government/publications/cma-prioritisation-principles/cma-prioritisation-principles. See also, The Government’s Strategic Steer to the Competition and Markets Authority, Dep’t for Business & Trade Policy Paper (Jul. 18, 2019), https://www.gov.uk/government/publications/governments-strategic-steer-to-the-competition-and-markets-authority-cma/governments-strategic-steer-to-the-competition-and-markets-authority (“The CMA has a key role in helping consumers and benefiting the wider economy.”).

[4] Justin G. Hurwitz & Geoffrey A. Manne, Pigou’s Plumber (or Regulation as a Discovery Process), SSRN (15 Mar. 2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4721112.

[5] Draft Digital Markets Competition Regime Guidance, Competition and Markets Authority (2024), available at https://assets.publishing.service.gov.uk/media/6650a56d8f90ef31c23ebaa6/Digital_markets_competition_regime_guidance.pdf (hereinafter “Draft Guidance”).

[6] See Hurwitz & Manne, supra note 4, at 34-35.

[7] See Dirk Auer, The Future of the DMA: Judge Dredd or Juror 8?, Truth on the Market (8 Apr. 2024), https://truthonthemarket.com/2024/04/08/the-future-of-the-dma-judge-dredd-or-juror-8.

[8] Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on Contestable and Fair Markets in the Digital Sector and Amending Directives (EU) 2019/1937 and (EU) 2020/1828, 2022 O.J. (L 265) 1 (hereinafter “DMA”).

[9] This is explained in more detail in Section IV on pro-competition interventions.

[10] Margrethe Vestager, A Whack-A-Mole Approach to Big Tech Won’t Do, Says Europe’s Antitrust Chief, The Economist (4 Jun. 2024), https://www.economist.com/by-invitation/2024/06/04/a-whack-a-mole-approach-to-big-tech-wont-do-says-europes-antitrust-chief (“Some argue that opening up involves trade-offs. It does not have to. Asking platforms to open up their ecosystems, for instance, does not mean they have to compromise the security of their service. Technology can deliver an open and safe digital environment, if there is the will and sufficient investment to make that happen. Compliance with the DMA can be achieved without undermining users’ rights to safety and privacy.”); Foo Yun Chee, Exclusive: EU’s Vestager Warns About Apple, Meta Fees, Disparaging Rival Products, Reuters (19 Mar. 2024), https://www.reuters.com/technology/eus-vestager-warns-about-apple-meta-fees-disparaging-rival-products-2024-03-19.

[11] See, e.g., Jian Jia, Ginger Zhe Jin & Liad Wagman, The Short-Run Effects of GDPR on Technology Venture Investment, 40 Marketing Sci. (2021); Garrett Johnson, Economic Research on Privacy Regulation: Lessons From the GDPR and Beyond, in The Economics of Privacy (Avi Goldfarb & Catherine Tucker eds., 2024); see also Michal Gal & Oshrit Aviv, The Competitive Effects of the GDPR, 16 J. Comp. L. & Econ. 349 (2020).

[12] See Dirk Auer, Innovation Defenses and Competition Laws: The Case for Market Power 18 (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4667754 (“There is thus a constant tension between antitrust enforcement and the promotion of innovation. And it is this tension which the dissertation seeks to explore. This task is complicated by the fact that the ex ante/ex post tradeoff is mostly intangible. It will generally be the case that no single innovation can be traced back to antitrust authorities’ restraint, nor can a single antitrust intervention easily be associated with reduced innovation. Just like people trying to respect their new year’s resolutions (lose weight, read more, etc.), no single departure is likely to be of pivotal importance. But a slew of small deviations will add up and may ultimately scupper authorities long term plans to bolster firms’ incentives.”).

[13] Dirk Auer, Matthew Lesh, & Lazar Radic, Digital Overload: How the Digital Markets, Competition and Consumers Bill’s Sweeping New Powers Threaten Britain’s Economy, 4 IEA Perspectives (Sep. 2023), available at https://laweconcenter.org/wp-content/uploads/2023/09/Perspectives_4_Digital-overload_web-1.pdf.

[14] Regarding market power, Section 2.40 of the Guidance states that: “Market power arises where a firm faces limited competitive pressure and individual consumers and businesses have limited alternatives to its product or service or, even if they have good ones, they face barriers to shopping around and switching. Therefore, an assessment of market power is largely an assessment of the available alternatives and the extent to which they are substitutable for that product or service. This includes alternatives available in the present and possibilities for entry and expansion.” It is important that the section mentions “possibilities for entry and expansion”, but the text should be amended to clarify that alternatives should be “reasonable substitutes” and not identical substitutes, with every feature of the product or service offered by the firm whose market power is being assessed.

[15] Hay, for instance, describes the concept of market power as a “filter” or “screen” in antitrust cases. “If we accept the notion that the point of antitrust is promoting consumer welfare, then it is clear why the concept of market power plays such a prominent role in antitrust analysis. If the structure of the market is such that there is little potential for consumers to be harmed, we need not be especially concerned with how firms behave because the presence of effective competition will provide a powerful antidote to any effort to exploit consumers.” See George A. Hay, Market Power in Antitrust, 60 Antitrust L.J. 807, 808 (1991).

[16] See, e.g., Geoffrey A. Manne, Error Costs in Digital Markets, in The Global Antitrust Institute Report On The Digital Economy 103 (Joshua D. Wright & Douglas H. Ginsburg eds., Nov. 11, 2020), https://gaidigitalreport.com (“Market definition is similarly employed as a function of error-cost minimization. One of its primary functions is to decrease administrative costs: analysis of total effects of a proposed conduct would be inordinately expensive or impossible without reducing the scope of analysis. Market definition defines the geographic and product areas most likely to be affected by challenged conduct, sacrificing a degree of analytical accuracy for the sake of tractability.”).

[17] See Nicolas Petit & Lazar Radic, The Necessity of a Consumer Welfare Standard in Antitrust Analysis, Promarket (18 Dec. 2023), https://www.promarket.org/2023/12/18/the-necessity-of-a-consumer-welfare-standard-in-antitrust-analysis (“In general, excessive prices, discriminatory conduct, or unfair trading conditions reflect transaction or mobility costs that can coexist with free and open competition for entry. They only very faintly and ambiguously suggest harm to competition. In such cases, a market power requirement will filter out mere surplus transfers reflecting asymmetries in bargaining power or insignificant distortions in the level playing field, both of which represent the essence of the competitive process in all but name. Without a market power filter, abusive conduct cases blur the line between protecting competition and protecting competitors, since competition by definition consists in putting competitors at a disadvantage and, ultimately, in facilitating their exit from the market.”).

[18] Richard A. Posner & William M. Landes, Market Power in Antitrust Cases, 94 Harv. L. Rev. 937, 939 (1980) (emphasis added).

[19] Online Platforms and Digital Advertising Market Study Final Report, Competition and Markets Authority (1 Jul. 2020), available at https://assets.publishing.service.gov.uk/media/5fa557668fa8f5788db46efc/Final_report_Digital_ALT_TEXT.pdf.

[20] Jason Furman, et al., Unlocking Digital Competition: Report of the Digital Competition Expert Panel (Mar. 2019), available at https://assets.publishing.service.gov.uk/media/5c88150ee5274a230219c35f/unlocking_digital_competition_furman_review_web.pdf.

[21] CMA, supra note 19, at 75 (emphasis added).

[22] Moat Building and Entrenchment Strategies, OECD Background Note (11 Jun. 2024) at 8, available at https://one.oecd.org/document/DAF/COMP/WP3(2024)1/en/pdf.

[23] See, e.g., Louis Kaplow, Market Definition: Impossible and Counterproductive, 79 Antitrust L.J. 361 (2013).

[24] Gregory J. Werden, Why (Ever) Define Markets? An Answer to Professor Kaplow, 78 Antitrust L.J. 729, 741 (2013).

[25] Manne, Error Costs, supra note 16, at 48.

[26] Id. at 104-05.

[27] Richard Whish & David Bailey, Competition Law (8th Ed., 2015) at 31-32.

[28] Jonathan Barnett, Does the European Union’s Digital Markets Act Provide an Appropriate Model for Maintaining Competition in California’s Innovation Economy?, Report Submitted to the California Law Revision Commission (Jan. 2024) at 17, available at http://www.clrc.ca.gov/pub/2024/MM24-05.pdf.

[29] Raymundo Campos, Alejandro Castañeda, Aurora Ramírez & Carlos Ruiz, Amazon’s Effect on Prices: The Case of Mexico, Centro de Estudios Economicos Working Paper No. II-2022 (2022), available at https://cee.colmex.mx/dts/2022/DT-2022-2.pdf.

[30] DMCC, S.1, (1), (a).

[31] DMCC, S.1, (4).

[32] Whish and Bailey, supra note 27, at 190.

[33] Abuse of a Dominant Position: Understanding Competition Law, Office of Fair Trading (2004) at 13, available at https://assets.publishing.service.gov.uk/media/5a74c497ed915d4d83b5ecd7/oft402.pdf.

[34] Sections 2.53-2.56.

[35] Section 2.57.

[36] Geoffrey A. Manne & E. Marcellus Williamson, Hot Docs vs Cold Economics: The Use and Misuse of Business Documents in Antitrust Enforcement and Adjudication, 47 Ariz. L. Rev., 609, 610 (2005).

[37] DMCC, S.19(5).

[38] In the context of the DMA, see, e.g., Carmelo Cennamo & Juan Santaló, Potential Risks and Unintended Effects of the New EU Digital Markets Act, Esade Ctr. Econ. Pol’y. (Open Internet Governance Inst. Working Paper Series No. 4, 2023), available at https://www.esade.edu/ecpol/wp-content/uploads/2023/02/AAFF_EcPol-OIGI_PaperSeries_04_Potentialrisks_ENG_v5.pdf; see also Lazar Radic & Mario Zúñiga, Comments of the International Center for Law & Economics, Ministry of Finance Public Consultation – Economic and Competitive Aspects of Digital Platforms, Int’l Ctr. L. & Econ., 2 (2024), available at https://laweconcenter.org/wp-content/uploads/2024/05/ICLE-Brazil-MoF-Consultation-on-Digital-Competition-1.pdf (“Ex-ante regulations like the European Union’s Digital Markets Act (DMA) can have unintended consequences, such as stifling innovation, reducing consumer welfare, and increasing compliance costs. They can also lead to increased risks of regulatory capture and rent seeking, as the verdict on whether a gatekeeper has complied with the law often comes down to the degree to which rivals are satisfied. Of course, rivals have a clear personal stake in never being satisfied. By tethering intervention to a comparatively clear public-benefit standard—consumer welfare—competition laws minimize the potential for error costs and decrease the chances that the law will be coopted for private gain.”); and Dirk Auer, The Broken Promises of Europe’s Digital Regulation, Truth on the Mkt. (12 Mar. 2024), https://truthonthemarket.com/2024/03/12/the-broken-promises-of-europes-digital-regulation.

[39] DMCC, S.6(1)(a).

[40] DMCC, S.6(1)(b).

[41] DMCC, S.3.

[42] Aaron Wudrick, The View from Canada: A TOTM Q&A with Aaron Wudrick, Truth on the Mkt. (12 Jun. 2024), https://truthonthemarket.com/2024/06/12/the-view-from-canada-a-totm-qa-with-aaron-wudrick.

[43] By contrast, the DMA does not require gatekeepers to have market power.

[44] Edith Hancock, Severe Pain in the Butt: EU’s Digital Competition Rules Make New Enemies on the Internet, Politico (25 Mar. 2024), https://www.politico.eu/article/european-union-digital-markets-act-google-search-malicious-compliance.

[45] Andrew Mercado, The Paradox of Choice Meets the Information Age, Truth on the Mkt. (19 Apr. 2022), https://truthonthemarket.com/2022/04/19/the-paradox-of-choice-meets-the-information-age; Kay Jebelli, Confronting the DMA’s Shaky Suppositions, Truth on the Mkt. (16 Apr. 2024), https://truthonthemarket.com/2024/04/16/confronting-the-dmas-shaky-suppositions; Dirk Auer & Lazar Radic, What Have the Intermediaries Ever Done for Us, CPI Antitrust Chronicle (Jun. 2022), available at https://laweconcenter.org/wp-content/uploads/2022/06/4-WHAT-HAVE-THE-INTERMEDIARIES-EVER-DONE-FOR-US-Dirk-Auer-Lazar-Radic.pdf.

[46] DMCC, S.3.

[47] A term popular among critics of gatekeepers’ compliance efforts with the DMA. See, e.g., Andy Yen, Apple’s DMA Compliance Plan Is a Trap and a Slap in the Face for the European Commission, Proton Blog (5 Feb. 2024), https://proton.me/blog/apple-dma-compliance-plan-trap.

[48] Adam Kovacevich, The Digital Markets Act’s “Statler & Waldorf” Problem, Chamber of Progress (7 Mar. 2024), https://medium.com/chamber-of-progress/the-digital-markets-acts-statler-waldorf-problem-2c9b6786bb55.

[49] Draft Guidance, Section 4.12 (“4.12 Typically, however, the indicators that the CMA will consider may include (but are not limited to) whether: (a) SMS firms’ profits reflect a reasonable rate of return based on the nature of competition; (b) the competitive positions of SMS firms and their rivals are based on the merits of their respective offerings; (c) SMS firms and their competitors flex parameters of competition in response to rivals and wider developments; (d) SMS firms’ users and customers can make effective decisions between a range of alternatives and are able to switch between these; (e) SMS firms and their competitors are rewarded for operating efficiently, innovating and competing to supply the products that users and customers want; and/or (f) competitors and potential competitors to SMS firms face limited barriers to entry and expansion.”)

[50] Draft Guidance, Section 4.13 (“When assessing whether a factor or combination of factors is having an AEC, the CMA will also consider in its assessment any competition-enhancing efficiencies that have resulted, or may be expected to result, from such factor(s).”)

[51] Although written with antitrust litigation in mind, this passage from Herbert Hovenkamp is relevant to our point: “Every complex antitrust case must begin by considering the remedy. Anticipating the appropriate fix is like having an exit strategy in battle. Court injunctions that prohibit a specific behavior or action are easier to obtain, but they may also accomplish less. “Structural” relief, such as a breakup, requires proof of conduct that only a structural change can fix, as well as proof that the new structure will be better. The recent platform monopolization cases raise a recurring issue in antitrust law: creating the right remedy is often more difficult than establishing unlawful conduct.” See Herbert Hovenkamp, Fixing Platform Monopoly in the Google Search Case, ProMarket (6 Oct. 2023), https://www.promarket.org/2023/10/06/fixing-platform-monopoly-in-the-google-search-case.

[52] Draft Guidance, Section 4.31.

[53] See, e.g., Auer, The Future of the DMA, supra note 7; Auer, Broken Promises, supra note 38.

[54] Draft Guidance, Sections 4.65 to 4.81.

[55] Id. Section 4.65.

[56] Id. Sections 4.12 (a) and (b).

[57] See Auer, The Future of the DMA, supra note 7; Auer, Broken Promises, supra note 38.

[58] Kate Harden-England, European Digital Markets Act Law Should be Rethought, Says Mirai, Travolution (28 May 2024), https://www.travolution.com/news/travel-sectors/accommodation/european-digital-markets-act-law-should-be-rethought-says-mirai.


The Legacy of Neo-Brandeisianism: History or Footnote?


The movement that some call “neo-Brandeisianism,” after its putative inspiration in the works of the late U.S. Supreme Court Justice Louis Brandeis (others have less-charitably termed it “antitrust populism” or “hipster antitrust”), has indisputably taken the competition world by storm. Indeed, it has arguably led to one of the fastest policy swings in antitrust history.

Read the full piece here.


ICLE and Macdonald-Laurier Institute Comments to Competition Bureau Canada Consultation on AI and Competition


Executive Summary

We thank the Competition Bureau Canada for promoting this dialogue on competition and artificial intelligence (AI) by publishing its Artificial Intelligence and Competition Discussion Paper (“Discussion Paper”).[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates, and has longstanding expertise in the evaluation of competition law and policy in several jurisdictions. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis. The Macdonald-Laurier Institute (MLI) is an independent and nonpartisan think tank based in Ottawa with the ambition to drive the national conversation and make Canada the best-governed country in the world.

In our comments, we express concern that policymakers may equate the rapid rise of AI services and products with a need to intervene in these markets—when, in fact, the opposite is true. As we explain, the rapid growth of AI markets (or, more precisely, products and services based on AI technology), as well as the fact that new market players are thriving, suggests that competition is intense. If incumbent firms could easily leverage their dominance into burgeoning generative AI markets, we would not have seen the growth of generative AI unicorns such as OpenAI, Midjourney, and Anthropic, to name but a few.

Of course, this is not to say that AI markets are not important—quite the opposite. AI is already changing the ways that many firms do business and improving employee productivity in many industries.[2] The technology is also increasingly useful in the field of scientific research, where it has enabled the creation of complex models that expand scientists’ reach.[3] Against this backdrop, EU Commissioner Margrethe Vestager was right to point out that it “is fundamental that these new markets stay competitive, and that nothing stands in the way of businesses growing and providing the best and most innovative products to consumers.”[4]

But while sensible enforcement is of vital importance to maintain competition and consumer welfare, kneejerk reactions may yield the opposite outcome. As our comments explain, overenforcement in the field of AI could cause the very harms that policymakers seek to avert. For instance, preventing so-called “Big Tech” firms from competing in these markets (for example, by threatening competition intervention as soon as they embed AI services in their ecosystems or seek to build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading AI firms in check. In short, competition in AI markets is important, but trying naïvely to hold incumbent tech firms back, out of misguided fears they will come to dominate this space, is likely to do more harm than good.

Our comments proceed as follows. Section I summarizes recent calls for competition intervention in AI markets. Section II argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”). Section III explains why these effects are unlikely to play a meaningful role in AI markets. Section IV explains why current merger policy is sufficient to address any potential anticompetitive acquisition or partnership in the AI sector without need for any special rules, like presumptions or inverse burdens of proof. Section V explains how balancing user protection with innovation in AI markets is particularly important in the Canadian context. Finally, Section VI concludes by offering five key takeaways to help policymakers and agencies (including the Competition Bureau Canada) better weigh the tradeoffs inherent to competition intervention in generative-AI markets.

I. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position.”[5] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[6]

While some of these claims continue even today (for example, “big data” is a key component of the U.S. Justice Department’s (DOJ) Google Search and ad-tech antitrust suits),[7] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022, as well as the advent of AI image-generation services like Midjourney and Dall-E, have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain in the early stages of mainstream adoption and remain in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that were purportedly made during the formative years of Web 2.0.[8] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[9] As Lina Khan, chair of the U.S. Federal Trade Commission (FTC), put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI.”[10]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Rather than reassess their prior assumptions based on the current technological moment, enforcers’ top priority appears to be figuring out how to rapidly and almost reflexively deploy existing competition tools to address the presumed competitive failures presented by generative AI.[11]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also to confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[12]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in an ongoing consultation, the European Commission asks: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[13] Unsurprisingly, the FTC has likewise been bullish about the risks posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[14]

Certainly, it stands to reason that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for AI services. After all, it is widely recognized that data is an essential input for generative AI.[15] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind and AlphaGo and Meta’s Llama have routinely made headlines.[16] Apple and Amazon also have vast experience with AI assistants, and all of these firms use AI technology throughout their platforms.[17]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast data troves to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[18] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

In these comments, we suggest that there are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have largely focused on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

II. Data-Network Effects Theory and Enforcement

Proponents of tougher interventions by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[19] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[20] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, for example, that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[21]

Right off the bat, it is important to note the conceptual problem these claims face. Because data can be used to improve the quality of products and/or to subsidize their use, the idea of data as an entry barrier suggests that any product improvement or price reduction made by an incumbent could be a problematic entry barrier to any new entrant. This is tantamount to an argument that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if competition were treated as a problem, as it would imply that firms should under-compete—i.e., should forego consumer-welfare enhancements—in order to inculcate a greater number of firms in a given market simply for its own sake.[22]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[23] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[24] The authors ultimately conclude that data-network effects can be of different magnitudes and have varying effects on firms’ incumbency advantage.[25] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[26]

This is echoed by other economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[27] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[28]

This possibility is also implicit in Hagiu and Wright’s paper.[29] Indeed, the authors’ theoretical model rests on an important distinction between within-user data advantages (that is, having access to more data about a given user) and across-user data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have, however, been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[30]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents,” the “Furman Report” prepared for the UK government cited only two empirical economic studies, which reach directly contradictory conclusions about the strength of data advantages.[31] Nevertheless, the Furman Report concludes that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[32] and adopts without reservation “convincing” evidence from non-economists that has no apparent empirical basis.[33]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[34]

As a result, the Commission cleared the merger on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[35] The Commission will likely focus on similar issues during its ongoing investigation of Microsoft’s investment into OpenAI.[36]

Along similar lines, the FTC’s complaint to enjoin Meta’s purchase of the virtual-reality (VR) fitness app Within relied, among other things, on the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[37]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s ad-tech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[38] Similarly, in its search complaint, the agency argues that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[39]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s newly published guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[40] Likewise, the UK Competition and Markets Authority (CMA) warns against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[41]

In short, competition authorities around the globe have been taking an increasingly aggressive stance on data-network effects. Among the ways this has manifested is in basing enforcement decisions on fears that data collected by one platform might confer a decisive competitive advantage in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

III. Data-Incumbency Advantages in Generative-AI Markets

Given the assertions canvassed in the previous section, it would be reasonable to assume that firms like Google, Meta, and Amazon should be in pole position to dominate the burgeoning market for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data, the likes of which their rivals could only dream when they launched their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[42]

To date, however, this is not how things have unfolded, although it bears noting that these markets remain in flux and the competitive landscape is susceptible to change. The first significantly successful generative-AI service arguably came neither from Google nor from Meta, which had been working on chatbots for years and had access to what was arguably the world’s largest database of actual chats. Instead, the breakthrough came from a previously little-known firm called OpenAI.

OpenAI’s ChatGPT service currently holds an estimated 60% of the market (though reliable numbers are somewhat elusive).[43] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than the previous record holder, TikTok.[44] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[45] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[46] In short, as of this writing, ChatGPT appears to be the most popular chatbot, and the entry of large players such as Google Bard and Meta AI appears to have had little effect thus far on its market position.[47]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[48] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[49]

This raises two crucial questions: How have these AI upstarts managed to be so successful? And is their success merely a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either question dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[50]

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[51]

In other words, being the firm with the most data appears to be far less important than having enough data. That lower bar may be within reach of far more firms than one might initially suspect. And obtaining enough data could become easier still (that is, the volume of required data could shrink) with technological progress. For instance, synthetic data may provide an adequate substitute for real-world data,[52] or may even outperform it.[53] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[54]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, whereas other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[55]

Consider, for instance, a user who wants to generate an image of a basketball. A model trained indiscriminately on a vast number of public photos in which a basketball appears amid copious unrelated image data may yield an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images could readily yield far superior results.[56] In one important example:

The model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[57]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than maximizing training datasets.[58] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[59] Second, the real challenge to create cutting-edge AI is not so much in collecting data, but rather in creating innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[60]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be those data held by large incumbent platforms in other markets, but rather data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[61]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data may often confer marginal benefits, there is little sense that these are ultimately decisive.[62] As a result, incumbent platforms’ access to vast numbers of users and data in their primary markets might only marginally affect their AI competitiveness.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[63] Examples of this abound in digital markets. Google overthrew Yahoo, despite initially having access to far fewer users and far less data; Google and Apple overcame Microsoft in the smartphone operating-system market, despite having comparatively tiny ecosystems (at the time) to leverage; and TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger user bases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[64] and TikTok’s clever algorithm) appear to have played a far more significant role than initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than what data it did (or did not) own. Going forward, OpenAI and its rivals’ ability to offer and monetize compelling custom versions of their generative-AI technology will arguably play a much larger role than (and contribute to) their ownership of data.[65] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, but not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data).

For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[66] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to extract and organize the right information is more important than simply owning vast troves of data.[67] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform.

Indeed, if data ownership consistently conferred a significant competitive advantage, newcomers such as OpenAI and Midjourney would not be where they are today. This does not mean that data is worthless, of course. Rather, it means that competition authorities should not assume that merely possessing data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm is premature.

IV. Merger Policy and AI

According to the Discussion Paper, some mergers that involve firms offering AI services or products deserve special scrutiny:

Mergers, of any form, involving a firm who supplies compute inputs, such as AI chips and cloud services, could warrant additional scrutiny due to the existing high levels of concentration in these markets. Mergers in AI markets may require additional scrutiny as large established firms may seek to acquire emerging competitors as a means of preventing or lessening competition.[68]

The Discussion Paper does not explain what form this “additional scrutiny” may take; it could entail anything from the prioritization of enforcement resources to procedural rules (presumptions, burden of proof). In any case, while we understand why the two types of merger mentioned may raise competition concerns, it is important to acknowledge that these concerns are theoretical. To date, there is no evidence to support differentiated scrutiny for mergers involving AI firms or, more generally, firms working with information technology. The view that so-called “killer acquisitions,” for instance, pose a significant competition risk is not supported by solid evidence.[69] To the contrary, the evidence suggests that acquisitions increase competition by allowing larger firms to acquire capabilities relevant to innovation and by generating incentives for startups.[70]

Companies with “deep pockets” that invest in AI startups may provide those firms the resources to compete with current market leaders. Firms like Amazon, Google, Meta, and Microsoft, for instance, are investing in creating their own chips for building AI systems, aiming to be less dependent on Nvidia.[71] The availability of this source of funding may thus increase competition at all levels of the AI industry.[72]

There has also been some concern in other jurisdictions regarding recent partnerships with, and investments in, AI “unicorns” by Big Tech firms,[73] in particular: Amazon’s partnership with Anthropic; Microsoft’s partnership with Mistral AI; and Microsoft’s hiring of former Inflection AI employees (including, notably, founder Mustafa Suleyman) and related arrangements with that company.

Publicly available information, however, suggests that these transactions may not warrant merger-control investigation, let alone the heightened scrutiny that comes with potential Phase II proceedings. At the very least, given the AI industry’s competitive landscape, there is little to suggest these transactions merit closer scrutiny than similar deals in other sectors.

Overenforcement in the field of generative AI could paradoxically engender the very harms that policymakers currently seek to avert. Preventing Big Tech firms from competing in these markets (for example, by threatening competition intervention as soon as they build strategic relationships with AI startups) may thwart an important source of the competition needed to keep today’s leading generative-AI firms in check. In short, competition in AI markets is important,[74] but naïvely trying to hold back tech firms that are incumbents in adjacent markets, out of misguided fears they will come to dominate this space, is likely to do more harm than good.

At a more granular level, there are important reasons to believe these kinds of agreements will have no negative impact on competition and may, in fact, benefit consumers—e.g., by enabling those startups to raise capital and deploy their services at an even larger scale. In other words, they do not bear any of the prima facie traits of “killer acquisitions” or even of the acquisition of “nascent potential competitors.”[75]

Most importantly, these partnerships all involve the acquisition of minority stakes and do not entail any change of control over the target companies. Amazon, for instance, will not have “ownership control” of Anthropic. The precise number of shares acquired has not been made public, but a reported investment of $4 billion in a company valued at $18.4 billion (at most, roughly a 22% stake) does not give Amazon a majority stake or sufficient voting rights to control the company or its competitive strategy.[76] It has also been reported that the deal will not give Amazon any seats on the Anthropic board or special voting rights (such as the power to veto certain decisions).[77] There is thus little reason to believe Amazon has acquired indirect or de facto control over Anthropic.

Microsoft’s investment in Mistral AI is even smaller, in both absolute and relative terms. Microsoft is reportedly investing just $16 million in a company valued at $2.1 billion.[78] This represents less than 1% of Mistral’s equity, making it all but impossible for Microsoft to exert any significant control or influence over Mistral AI’s competitive strategy. Likewise, there have been no reports of Microsoft acquiring seats on Mistral AI’s board or special voting rights. We can therefore be confident that the deal will not affect competition in AI markets.

Much the same applies to Microsoft’s dealings with Inflection AI. Microsoft hired two of the company’s three founders (a practice that currently falls outside the scope of merger laws) and paid $620 million for nonexclusive rights to sell access to the Inflection AI model through its Azure Cloud.[79] Admittedly, the latter could entail (depending on the deal’s specifics) some limited control over Inflection AI’s competitive strategy, but there is currently no evidence to suggest this will be the case.

Finally, none of these deals entail any competitively significant behavioral commitments from the target companies. There are no reports of exclusivity agreements or other commitments that would restrict third parties’ access to these firms’ underlying AI models. Again, this means the deals are extremely unlikely to negatively impact the competitive landscape in these markets.

V. Balancing Innovation and Regulation in Canada’s AI Landscape

AI presents significant opportunities and challenges for competition policy in Canada. As the technology continues to evolve, it is crucial to establish a regulatory framework that promotes innovation, while safeguarding competition and consumer protection.

The European AI Act, for example, categorizes AI systems into different risk levels—unacceptable risk, high risk, limited risk, and minimal risk. This framework allows for regulation proportional to the potential impact of the AI system. By adopting a similar risk-based approach, Canada could ensure that high-risk AI systems are subject to stringent requirements, while lower-risk systems benefit from lighter-touch regulations that encourage innovation.

To foster a competitive AI market in Canada, it is essential to avoid overly restrictive regulations that could stifle technological progress. If implemented reasonably, the EU AI Act’s flexible framework may support the development and deployment of innovative AI technologies by imposing rigorous requirements only on high-risk systems. In turn, this could support innovation by balancing the need for public safety and the protection of fundamental rights with the imperative to maintain a dynamic and competitive market environment. Overenforcement, in contrast, could lead to the opposite outcome.

Canada is currently a world leader in AI-talent concentration,[80] and its existing AI strategy has, to date, created significant social and economic benefits for the nation. Overly restrictive regulation (such as the proposed Artificial Intelligence and Data Act (AIDA)[81]) could create challenges in attracting and retaining talent, which would inevitably hamper competition.[82] Meta’s response to the proposed AIDA illustrates the potential impact of overregulation: the company has indicated that the proposed laws could prevent it from launching certain products in Canada due to onerous compliance costs.[83] Other tech companies share similar concerns, warning that misaligned regulations could place Canada at a competitive disadvantage globally and undermine robust competition at home.

The need to retain and attract top AI talent is another critical issue. Canada faces challenges in keeping AI talent due to more attractive opportunities abroad. To maintain its competitive edge, Canada must ensure that its regulatory frameworks do not discourage local talent from contributing to the domestic AI landscape.[84]

The Canadian government recently committed in its federal budget to invest $2.4 billion in AI, focused primarily on computing power. Meta’s subsequent release of Llama 3, a powerful open-source LLM, and Microsoft’s €4 billion investment in France’s AI capabilities, however, highlight the need for a reassessment. Rather than computing power, Canada should focus on AI applications, education, and industry adoption.[85]

VI. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data-network effects are not the source of entry barriers they are sometimes made out to be; the picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (an assessment that is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[86]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five main lessons emerge:

  1. Data can be (very) valuable, but beyond a certain threshold, those benefits tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps effects stemming from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for competition-policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, Dall-E, Midjourney, etc.

This failure suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma,”[87] something about the incumbent platforms’ existing services and capabilities was holding them back in those markets. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI market starts addressing issues of monetization and scale.[88] But it does mean that assumptions about a firm’s market power based on its possession of data are off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative-AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that the most significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms, although nothing is certain at this stage. Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), it is unclear that the remedies being contemplated would do more good than harm. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative-AI markets.[89] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[90]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that leads competition authorities to believe that data-incumbency advantages are likely to harm competition in generative AI markets—or even in the data-intensive Web 2.0 markets that preceded them. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

[1] Competition Bureau Canada, Artificial Intelligence and Competition, Discussion Paper (Mar. 2024), https://competition-bureau.canada.ca/how-we-foster-competition/education-and-outreach/artificial-intelligence-and-competition#sec00.

[2] See, e.g., Michael Chui, et al., The Economic Potential of Generative AI: The Next Productivity Frontier, McKinsey (Jun. 14, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-AI-the-next-productivity-frontier.

[3] See, e.g., Zhuoran Qiao, Weili Nie, Arash Vahdat, Thomas F. Miller III, & Animashree Anandkumar, State-Specific Protein–Ligand Complex Structure Prediction with a Multiscale Deep Generative Model, 6 Nature Machine Intelligence, 195-208 (2024); see also Jaemin Seo, Sang Kyeun Kim, Azarakhsh Jalalvand, Rory Conlin, Andrew Rothstein, Joseph Abbate, Keith Erickson, Josiah Wai, Ricardo Shousha, & Egemen Kolemen, Avoiding Fusion Plasma Tearing Instability with Deep Reinforcement Learning, 626 Nature, 746-751 (2024).

[4] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (Jan. 9, 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85.

[5] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[6] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23 (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors.”).

[7] See generally Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010 (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[8] See, e.g., Press Release, European Commission, supra note 4; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (Nov. 3, 2023), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[9] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”); Bertin Martens, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[10] See Rana Foroohar, The Great US-Europe Antitrust Divide, FT (Feb. 5, 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14.

[11] See, e.g., Press Release, European Commission, supra note 4.

[12] See infra Section II. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them ‘learn’ from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has Llama. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[13] Press Release, European Commission, supra note 4.

[14] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023), at 4, https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[15] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (Sep. 15, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[16] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; see also, e.g., Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (Dec. 14, 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set; A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research; 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation (last visited Jan. 18, 2023).

[17] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (Oct. 4, 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (Nov. 20, 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (Jul. 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon.

[18] See infra Section III.

[19] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[20] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233, https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, http://wrap.warwick.ac.uk/134220 (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[21] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[22] See also Yun, supra note 20, at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[23] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects), see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1281, 1344 (2021).

[24] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[25] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms…”.

[26] Id.

[27] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 10080, 101031 (2021).

[28] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Pol’y 143, 167 (2020).

[29] See Hagiu & Wright, supra note 24.

[30] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf; see also Manne & Auer, supra note 23, at 1330.

[31] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[32] Id. at 34.

[33] Id. at 35. To its credit, it should be noted, the Furman Report counsels caution before mandating access to data as a remedy to promote competition. See id. at 75. With that said, the Furman Report maintains that such a remedy should certainly be on the table, because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. In fact, the evidence does not show this.

[34] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf at 455.

[35] Id. at 896.

[36] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules.

[37] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[38] Amended Complaint (D.D.C), supra note 7 at ¶37.

[39] Amended Complaint (E.D. Va), supra note 7 at ¶8.

[40] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[41] Merger Assessment Guidelines, Competition and Mkts. Auth. (2021) at ¶7.19(e), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[42] Furman Report, supra note 31, at ¶4.

[43] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c.

[44] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[45] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[46] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (Jun. 5, 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard.

[47] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (Sep. 27, 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023), https://blog.google/technology/ai/bard-google-ai-search-updates.

[48] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023), https://yon.fun/midjourney-statistics; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023), https://approachableai.com/midjourney-statistics.

[49] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023), https://blog.google/products/search/google-search-generative-ai-october-update; Imagine with Meta AI, Meta (last visited Jan. 12, 2024), https://imagine.meta.com.

[50] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[51] Manne & Auer, supra note 23, at 1345.

[52] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[53] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[54] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[55] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan. 15, 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[56] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023), https://arxiv.org/abs/2309.16671.

[57] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[58] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project.

[59] Id.; see also GSM8K, Papers with Code, available at https://paperswithcode.com/dataset/gsm8k (last visited Jan. 18, 2023); MATH Dataset, GitHub, available at https://github.com/hendrycks/math (last visited Jan. 18, 2024).

[60] Lee, supra note 58.

[61] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780).

[62] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[63] Or, as John Yun puts it, data is only a small component of digital firms’ production function. See Yun, supra note 20, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[64] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History–Computer (Aug. 8, 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly.

[65] Introducing the GPT Store, OpenAI (Jan. 10, 2024), https://openai.com/blog/introducing-the-gpt-store.

[66] See Michael Schade, How ChatGPT and Our Language Models are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (Jan. 1, 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021; Danny Hernandez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020), https://arxiv.org/abs/2005.04305.

[67] See Yun, supra note 20, at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[68] Discussion Paper, supra note 1, Section 3.1.6 (“Consideration for mergers”).

[69] See Jonathan M. Barnett, “Killer Acquisitions” Reexamined: Economic Hyperbole in the Age of Populist Antitrust, 3 U. Chi. Bus. L. Rev. 39 (2023).

[70] Id. at 85 (“At the same time, these transactions enhance competitive conditions by supporting the profit expectations that elicit VC investment in the startups that deliver the most transformative types of innovation to the biopharmaceutical ecosystem (and, in some cases, mature into larger firms that can challenge incumbents).”).

[71] Cade Metz, Karen Weise, & Mike Isaac, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, N.Y. Times (Jan. 29, 2024), https://www.nytimes.com/2024/01/29/technology/ai-chips-nvidia-amazon-google-microsoft-meta.html.

[72] See, e.g., Chris Metinko, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, CrunchBase (Jun. 12, 2024), https://news.crunchbase.com/ai/msft-nvda-lead-big-tech-startup-investment.

[73] CMA Seeks Views on AI Partnerships and Other Arrangements, Competition and Mkts. Auth. (Apr. 24, 2024), https://www.gov.uk/government/news/cma-seeks-views-on-ai-partnerships-and-other-arrangements.

[74] AI, of course, is not a market (at least not a relevant antitrust market). Within the realm of what is called “AI”, companies offer myriad products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[75] Start-ups, Killer Acquisitions and Merger Control, OECD (2020), available at https://web-archive.oecd.org/2020-10-16/566931-start-ups-killer-acquisitions-and-merger-control-2020.pdf.

[76] Kate Rooney & Hayden Field, Amazon Spends $2.75 Billion on AI Startup Anthropic in Its Largest Venture Investment Yet, CNBC (Mar. 27, 2024), https://www.cnbc.com/2024/03/27/amazon-spends-2point7b-on-startup-anthropic-in-largest-venture-investment.html.

[77] Id.

[78] Tom Warren, Microsoft Partners with Mistral in Second AI Deal Beyond OpenAI, The Verge (Feb. 26, 2024), https://www.theverge.com/2024/2/26/24083510/microsoft-mistral-partnership-deal-azure-ai.

[79] Mark Sullivan, Microsoft’s Inflection AI Grab Likely Cost More Than $1 Billion, Says An Insider (Exclusive), Fast Company (Mar. 26, 2024), https://www.fastcompany.com/91069182/microsoft-inflection-ai-exclusive; see also, Mustafa Suleyman, DeepMind and Inflection Co-Founder, Joins Microsoft to Lead Copilot, Microsoft Corporate Blogs (Mar. 19, 2024), https://blogs.microsoft.com/blog/2024/03/19/mustafa-suleyman-deepmind-and-inflection-co-founder-joins-microsoft-to-lead-copilot; Krystal Hu & Harshita Mary Varghese, Microsoft Pays Inflection $650 Mln in Licensing Deal While Poaching Top Talent, Source Says, Reuters (Mar. 21, 2024), https://www.reuters.com/technology/microsoft-agreed-pay-inflection-650-mln-while-hiring-its-staff-information-2024-03-21; The New Inflection: An Important Change to How We’ll Work, Inflection (Mar. 19, 2024), https://inflection.ai/the-new-inflection; Julie Bort, Here’s How Microsoft Is Providing a ‘Good Outcome’ for Inflection AI VCs, as Reid Hoffman Promised, TechCrunch (Mar. 21, 2024), https://techcrunch.com/2024/03/21/microsoft-inflection-ai-investors-reid-hoffman-bill-gates.

[80] Canada Leads the World in AI Talent Concentration, Deloitte (Sep. 27, 2023), https://www2.deloitte.com/ca/en/pages/press-releases/articles/impact-and-opportunities.html.

[81] Government of Canada, Bill C-27, https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading.

[82] See, e.g., Aaron Wudrick, Government Overregulation Could Jeopardize Canada’s Artificial Intelligence Chances, Globe and Mail (Apr. 1, 2024), https://www.theglobeandmail.com/business/commentary/article-government-overregulation-could-jeopardize-canadas-artificial.

[83] Howard Solomon, Meta May Not Bring Some Products to Canada Unless Proposed AI Law Changed, Parliament Told, IT World Canada (Feb. 8, 2024), https://www.itworldcanada.com/article/meta-may-not-bring-some-products-to-canada-unless-proposed-ai-law-changed-parliament-told/558406.

[84] Elissa Strome, Canada’s Got AI Talent. Let’s Keep It Here, Policy Opinions (Feb. 2, 2024), https://policyoptions.irpp.org/magazines/february-2024/ai-talent-canada.

[85] Joel Blit & Jimmy Lin, Canada’s Planned $2.4-Billion Artificial Intelligence Investment Is Already Mostly Obsolete, Globe and Mail (May 19, 2024), https://www.theglobeandmail.com/business/commentary/article-canadas-planned-24-billion-artificial-intelligence-investment-is.

[86] Lerner, supra note 61, at 4-5 (emphasis added).

[87] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[88] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[89] See Hagiu & Wright, supra note 24, at 24 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”); see also Lerner, supra note 61.

[90] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).

Continue reading
Antitrust & Consumer Protection

Caution on Competition Law

Popular Media Margrethe Vestager, the European Union’s commissioner for competition, posits that competition law has not addressed “the structural entrenchment of companies holding market power”, and that . . .

Margrethe Vestager, the European Union’s commissioner for competition, posits that competition law has not addressed “the structural entrenchment of companies holding market power”, and that sweeping regulations like the European Union’s Digital Markets Act (DMA) are therefore justified (By invitation, June 3rd). She compares the case-by-case approach of competition enforcement to “playing a never-ending game of whack-a-mole”. It is true that enforcement is often slow and complex, especially in the kinds of “abuse of dominance” cases that have been brought against large online platforms. But this deliberate pace is necessary, as the companies’ business models and the consequences of their behaviour are themselves complex.

Read the full piece here.

Continue reading
Antitrust & Consumer Protection

New York, Listen to California: Antitrust Legislation Threatens Our Innovation Economy

Popular Media California does not have a reputation for business-friendly legislation. This makes it all the more surprising that a California legislative report rejected a New York . . .

California does not have a reputation for business-friendly legislation. This makes it all the more surprising that a California legislative report rejected a New York bill as too anti-business for the Golden State. That bill, the 21st Century Antitrust Act, championed by New York State Senate Deputy Majority Leader Michael Gianaris (D-Queens), would import European competition-policy principles and expand on them, ultimately making New York an outlier in U.S. antitrust enforcement.

In its current form, Gianaris’ bill would lead enforcers to punish the mere possession of monopoly power, rather than anti-competitive behavior that harms consumers. This marks a firm rejection of longstanding U.S. antitrust principles, which condemn anti-competitive conduct rather than monopoly itself. As Albany native and Second Circuit Court of Appeals Judge Learned Hand wrote in 1945: “The successful competitor, having been urged to compete, must not be turned upon when he wins.”

Continue reading
Antitrust & Consumer Protection

AI Partnerships and Competition: Much Ado About Nothing?

TOTM Competition policymakers around the world have been expressing concerns about competition in emerging artificial-intelligence (AI) industries, with some taking steps to investigate them further. These . . .

Competition policymakers around the world have been expressing concerns about competition in emerging artificial-intelligence (AI) industries, with some taking steps to investigate them further. These fears are notably fueled by a sense that incumbent (albeit in adjacent markets) digital platforms may use strategic partnerships with AI firms to stave off competition from this fast-growing field.

Read the full piece here.

Continue reading
Antitrust & Consumer Protection

Against the ‘Europeanization’ of California’s Antitrust Law

Popular Media The California State Legislature is considering amendments to the state’s antitrust laws that would enable more stringent antitrust scrutiny of technology companies, particularly so-called “Big Tech.” A . . .

The California State Legislature is considering amendments to the state’s antitrust laws that would enable more stringent antitrust scrutiny of technology companies, particularly so-called “Big Tech.” A preliminary report on single-firm conduct authored by a group of experts recruited by the California Law Revision Commission suggests this could be achieved by mimicking several features of European competition law. Unfortunately, this “Europeanization” of Californian antitrust law would benefit neither California’s economy nor its consumers.

Read the full piece here.

Continue reading
Antitrust & Consumer Protection