Regulatory Comments

ICLE and Macdonald-Laurier Institute Comments to Competition Bureau Canada Consultation on AI and Competition

Executive Summary

We thank the Competition Bureau Canada for promoting this dialogue on competition and artificial intelligence (AI) by publishing its Artificial Intelligence and Competition Discussion Paper (“Discussion Paper”)[1]. The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates, and has longstanding expertise in the evaluation of competition law and policy in several jurisdictions. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis. The Macdonald-Laurier Institute (MLI) is an independent and nonpartisan think tank based in Ottawa with the ambition to drive the national conversation and make Canada the best-governed country in the world.

In our comments, we express concern that policymakers may equate the rapid rise of AI services and products with a need to intervene in these markets—when, in fact, the opposite is true. As we explain, the rapid growth of AI markets (or, more precisely, products and services based on AI technology), as well as the fact that new market players are thriving, suggests that competition is intense. If incumbent firms could easily leverage their dominance into burgeoning generative AI markets, we would not have seen the growth of generative AI unicorns such as OpenAI, Midjourney, and Anthropic, to name but a few.

Of course, this is not to say that AI markets are not important—quite the opposite. AI is already changing the ways that many firms do business and improving employee productivity in many industries.[2] The technology is also increasingly useful in the field of scientific research, where it has enabled creation of complex models that expand scientists’ reach.[3] Against this backdrop, EU Commissioner Margrethe Vestager was right to point out that it “is fundamental that these new markets stay competitive, and that nothing stands in the way of businesses growing and providing the best and most innovative products to consumers.”[4]

But while sensible enforcement is of vital importance to maintain competition and consumer welfare, kneejerk reactions may yield the opposite outcome. As our comments explain, overenforcement in the field of AI could cause the very harms that policymakers seek to avert. For instance, preventing so-called “Big Tech” firms from competing in these markets (for example, by threatening competition intervention as soon as they embed AI services in their ecosystems or seek to build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading AI firms in check. In short, competition in AI markets is important, but trying naïvely to hold incumbent tech firms back, out of misguided fears they will come to dominate this space, is likely to do more harm than good.

Our comments proceed as follows. Section I summarizes recent calls for competition intervention in AI markets. Section II argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”). Section III explains why these effects are unlikely to play a meaningful role in AI markets. Section IV explains why current merger policy is sufficient to address any potential anticompetitive acquisition or partnership in the AI sector, without the need for special rules such as presumptions or inverted burdens of proof. Section V explains how balancing user protection with innovation in AI markets is particularly important in the Canadian context. Finally, Section VI concludes by offering five key takeaways to help policymakers and agencies (including the Competition Bureau Canada) better weigh the tradeoffs inherent to competition intervention in generative-AI markets.

I. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position.”[5] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[6]

While some of these claims continue even today (for example, “big data” is a key component of the U.S. Justice Department’s (DOJ) Google Search and ad-tech antitrust suits),[7] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022 and the advent of AI image-generation services like Midjourney and Dall-E have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain in the early stages of mainstream adoption and remain in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that were purportedly made during the formative years of Web 2.0.[8] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[9] As Lina Khan, chair of the U.S. Federal Trade Commission (FTC), put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI.”[10]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Rather than reassess their prior assumptions based on the current technological moment, enforcers’ top priority appears to be figuring out how to rapidly and almost reflexively deploy existing competition tools to address the presumed competitive failures presented by generative AI.[11]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also to confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[12]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in an ongoing consultation, the European Commission asks: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[13] Unsurprisingly, the FTC has likewise been vocal about the risks posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[14]

Certainly, it stands to reason that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for AI services. After all, it is widely recognized that data is an essential input for generative AI.[15] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind (with systems such as AlphaGo) and Meta’s Llama have routinely made headlines.[16] Apple and Amazon also have vast experience with AI assistants, and all of these firms use AI technology throughout their platforms.[17]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast data troves to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[18] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

In these comments, we suggest that there are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have largely focused on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

II. Data-Network Effects Theory and Enforcement

Proponents of tougher interventions by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[19] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[20] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, for example, that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[21]

Right off the bat, it is important to note the conceptual problem these claims face. Because data can be used to improve the quality of products and/or to subsidize their use, the idea of data as an entry barrier suggests that any product improvement or price reduction made by an incumbent could be a problematic entry barrier to any new entrant. This is tantamount to an argument that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if competition were treated as a problem, as it would imply that firms should under-compete—i.e., should forgo consumer-welfare enhancements—in order to sustain a greater number of firms in a given market simply for its own sake.[22]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[23] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[24] The authors ultimately conclude that data-network effects can be of different magnitudes and have varying effects on firms’ incumbency advantage.[25] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[26]

This is echoed by other economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[27] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[28]

This possibility is also implicit in Hagiu and Wright’s paper.[29] Indeed, the authors’ theoretical model rests on an important distinction between within-user data advantages (that is, having access to more data about a given user) and across-user data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have nonetheless been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[30]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents,” the “Furman Report” created for the UK government cited only two empirical economic studies, and they offer directly contradictory conclusions on the strength of data advantages.[31] Nevertheless, the Furman Report concludes that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[32] and adopts without reservation “convincing” evidence from non-economists that has no apparent empirical basis.[33]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[34]

As a result, the Commission cleared the merger on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[35] The Commission will likely focus on similar issues during its ongoing investigation of Microsoft’s investment in OpenAI.[36]

Along similar lines, the FTC’s complaint to enjoin Meta’s purchase of the virtual-reality (VR) fitness app Within relied, among other things, on the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[37]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s ad-tech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[38] Similarly, in its search complaint, the agency argues that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[39]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s newly published guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[40] Likewise, the UK Competition and Markets Authority (CMA) warns against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[41]

In short, competition authorities around the globe have been taking an increasingly aggressive stance on data-network effects. Among other ways, this has manifested in enforcement decisions premised on fears that data collected by one platform might confer a decisive competitive advantage in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

III. Data-Incumbency Advantages in Generative-AI Markets

Given the assertions canvassed in the previous section, it would be reasonable to assume that firms like Google, Meta, and Amazon should be in pole position to dominate the burgeoning market for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data, the likes of which their rivals could only dream of when they launched their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[42]

To date, however, this is not how things have unfolded—although it bears noting that these markets remain in flux and the competitive landscape is susceptible to change. The first significantly successful generative-AI service came neither from Meta—which had been working on chatbots for years and had access to what is perhaps the world’s largest database of actual chats—nor from Google. Instead, the breakthrough came from a previously unknown firm called OpenAI.

OpenAI’s ChatGPT service currently holds an estimated 60% of the market (though reliable numbers are somewhat elusive).[43] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than the previous record holder, TikTok.[44] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[45] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[46] In short, at the time of writing, ChatGPT appears to be the most popular chatbot. The entry of large players such as Google Bard or Meta AI appears to have had little effect thus far on its market position.[47]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[48] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[49]

This raises several crucial questions: how have these AI upstarts managed to be so successful, and is their success just a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either of these questions dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[50]

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[51]

In other words, being the firm with the most data appears to be far less important than having enough data. This lower bar may be accessible to far more firms than one might initially think possible. And obtaining enough data could become even easier—that is, the volume of required data could become even smaller—with technological progress. For instance, synthetic data may provide an adequate substitute for real-world data,[52] or may even outperform real-world data.[53] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[54]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, whereas other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[55]
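The diminishing-returns point can be made concrete with a stylized sketch. The power-law learning curve below is purely illustrative—the functional form and the parameters `a` and `b` are our assumptions, not estimates from any actual system—but it captures why a marginal batch of data is worth far more to a data-poor entrant than to a data-rich incumbent:

```python
def error_rate(n_samples: float, a: float = 1.0, b: float = 0.3) -> float:
    """Stylized power-law learning curve: test error ~ a * n^(-b).

    Both `a` and the exponent `b` are hypothetical parameters chosen
    purely for illustration, not estimates from any real model.
    """
    return a * n_samples ** (-b)


def marginal_gain(n: float, extra: float) -> float:
    """Error reduction from adding `extra` samples at current scale n."""
    return error_rate(n) - error_rate(n + extra)


# The same 1,000 additional samples buy orders of magnitude more
# improvement for a data-poor entrant than for a data-rich incumbent.
gain_entrant = marginal_gain(1_000, 1_000)        # 1k -> 2k samples
gain_incumbent = marginal_gain(1_000_000, 1_000)  # 1M -> 1.001M samples
assert gain_entrant > 100 * gain_incumbent
```

Under any such concave curve, an incumbent’s vast additional data yields vanishingly small quality improvements—consistent with the observation that having enough data matters far more than having the most.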

Consider, for instance, a user who wants to generate an image of a basketball. A model trained indiscriminately on a vast number of public photos in which basketballs appear amid copious unrelated image data may yield an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images could readily yield far superior results.[56] In one important example:

The model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[57]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than on maximizing training datasets.[58] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[59] The second is that the real challenge in creating cutting-edge AI lies not so much in collecting data as in devising innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[60]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be those data held by large incumbent platforms in other markets, but rather data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[61]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data may often confer marginal benefits, there is little sense that these are ultimately decisive.[62] As a result, incumbent platforms’ access to vast numbers of users and data in their primary markets might only marginally affect their AI competitiveness.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[63] Examples of this abound in digital markets. Google overthrew Yahoo, despite initially having access to far fewer users and far less data; Google and Apple overcame Microsoft in the smartphone operating-system market, despite having comparatively tiny ecosystems (at the time) to leverage; and TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger user bases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[64] and TikTok’s clever algorithm) appear to have played a far more significant role than initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than what data it did (or did not) own. Going forward, OpenAI and its rivals’ ability to offer and monetize compelling custom versions of their generative-AI technology will arguably play a much larger role than (and contribute to) their ownership of data.[65] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, but not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data).

For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[66] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to extract and organize the right information is more important than simply owning vast troves of data.[67] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform.

Indeed, if data ownership consistently conferred a significant competitive advantage, these new firms would not be where they are today. This does not mean that data is worthless, of course. Rather, it means that competition authorities should not assume that merely possessing data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm are premature.

IV. Merger Policy and AI

According to the Discussion Paper, some mergers that involve firms offering AI services or products deserve special scrutiny:

Mergers, of any form, involving a firm who supplies compute inputs, such as AI chips and cloud services, could warrant additional scrutiny due to the existing high levels of concentration in these markets. Mergers in AI markets may require additional scrutiny as large established firms may seek to acquire emerging competitors as a means of preventing or lessening competition.[68]

The Discussion Paper does not explain what form this “additional scrutiny” may take. It may entail anything from prioritization of resources to procedural rules (presumptions, burden of proof). In any case, while we understand why the two mentioned instances of mergers may raise competition concerns, it is important to acknowledge that these are theoretical concerns. To date, there is no evidence to support differentiated scrutiny for mergers involving AI firms or, in general, firms working with information technology. The view that so-called “killer acquisitions,” for instance, pose a significant competition risk is not supported by solid evidence.[69] To the contrary, the evidence suggests that acquisitions increase competition by allowing larger firms to acquire abilities relevant to innovation and by generating incentives for startups.[70]

Companies with “deep pockets” that invest in AI startups may provide those firms the resources to compete with current market leaders. Firms like Amazon, Google, Meta, and Microsoft, for instance, are investing in creating their own chips for building AI systems, aiming to be less dependent on Nvidia.[71] The availability of this source of funding may thus increase competition at all levels of the AI industry.[72]

There has also been some concern in other jurisdictions regarding Big Tech firms’ recent partnerships with, and investments in, AI “unicorns,”[73] in particular: Amazon’s partnership with Anthropic; Microsoft’s partnership with Mistral AI; and Microsoft’s hiring of former Inflection AI employees (including, notably, founder Mustafa Suleyman) and related arrangements with the company.

Publicly available information, however, suggests that these transactions may not warrant merger-control investigation, let alone the heightened scrutiny that comes with potential Phase II proceedings. At the very least, given the AI industry’s competitive landscape, there is little to suggest these transactions merit closer scrutiny than similar deals in other sectors.

Overenforcement in the field of generative AI could paradoxically engender the very harms that policymakers currently seek to avert. Preventing Big Tech firms from competing in these markets (for example, by threatening competition intervention as soon as they build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading generative-AI firms in check. In short, competition in AI markets is important,[74] but trying naïvely to hold incumbent (in adjacent markets) tech firms back, out of misguided fears they will come to dominate this space, is likely to do more harm than good.

At a more granular level, there are important reasons to believe these kinds of agreements will have no negative impact on competition and may, in fact, benefit consumers—e.g., by enabling those startups to raise capital and deploy their services at an even larger scale. In other words, they do not bear any of the prima facie traits of “killer acquisitions” or even of the acquisition of “nascent potential competitors.”[75]

Most importantly, these partnerships all involve the acquisition of minority stakes and do not entail any change of control over the target companies. Amazon, for instance, will not have “ownership control” of Anthropic. The precise number of shares acquired has not been made public, but a reported investment of $4 billion in a company valued at $18.4 billion does not give Amazon a majority stake or sufficient voting rights to control the company or its competitive strategy.[76] It has also been reported that the deal will not give Amazon any seats on the Anthropic board or special voting rights (such as the power to veto certain decisions).[77] There is thus little reason to believe Amazon has acquired indirect or de facto control over Anthropic.

Microsoft’s investment in Mistral AI is even smaller, in both absolute and relative terms. Microsoft is reportedly investing just $16 million in a company valued at $2.1 billion.[78] This represents less than 1% of Mistral’s equity, making it all but impossible for Microsoft to exert any significant control or influence over Mistral AI’s competitive strategy. Likewise, there have been no reports of Microsoft acquiring seats on Mistral AI’s board or special voting rights. We can therefore be confident that the deal will not affect competition in AI markets.

Much the same applies to Microsoft’s dealings with Inflection AI. Microsoft hired two of the company’s three founders (a move that currently does not fall within the scope of merger laws) and paid $620 million for nonexclusive rights to sell access to the Inflection AI model through its Azure Cloud.[79] Admittedly, the latter could entail (depending on the deal’s specifics) some limited control over Inflection AI’s competitive strategy, but there is currently no evidence to suggest this will be the case.

Finally, none of these deals entail any competitively significant behavioral commitments from the target companies. There are no reports of exclusivity agreements or other commitments that would restrict third parties’ access to these firms’ underlying AI models. Again, this means the deals are extremely unlikely to negatively impact the competitive landscape in these markets.

V. Balancing Innovation and Regulation in Canada’s AI Landscape

AI presents significant opportunities and challenges for competition policy in Canada. As the technology continues to evolve, it is crucial to establish a regulatory framework that promotes innovation, while safeguarding competition and consumer protection.

The European AI Act, for example, categorizes AI systems into different risk levels—unacceptable risk, high risk, limited risk, and minimal risk. This framework allows for regulation proportional to the potential impact of the AI system. By adopting a similar risk-based approach, Canada could ensure that high-risk AI systems are subject to stringent requirements, while lower-risk systems benefit from lighter-touch regulations that encourage innovation.

To foster a competitive AI market in Canada, it is essential to avoid overly restrictive regulations that could stifle technological progress. If implemented reasonably, the EU AI Act’s flexible framework may support the development and deployment of innovative AI technologies by imposing rigorous requirements only on high-risk systems. In turn, this could support innovation by balancing the need for public safety and the protection of fundamental rights with the imperative to maintain a dynamic and competitive market environment. Overenforcement, in contrast, could lead to the opposite outcome.

Canada is currently a world leader in AI talent concentration,[80] and its existing AI strategy has, to date, created significant social and economic benefits for the nation. Overly restrictive regulation (such as the proposed Artificial Intelligence and Data Act (AIDA)[81]) could make it harder to attract and retain talent, which would inevitably hamper competition.[82] Meta’s response to the proposed AIDA offers a practical illustration of the potential impact of overregulation: the company has indicated that the proposed laws could prevent it from launching certain products in Canada due to onerous compliance costs.[83] Other tech companies share similar concerns, warning that misaligned regulations could place Canada at a competitive disadvantage globally and undermine robust competition at home.

The need to retain and attract top AI talent is another critical issue. Canada faces challenges in keeping AI talent due to more attractive opportunities abroad. To maintain its competitive edge, Canada must ensure that its regulatory frameworks do not discourage local talent from contributing to the domestic AI landscape.[84]

The Canadian government recently committed, in its federal budget, to invest $2.4 billion in AI, focused primarily on computing power. Meta’s subsequent release of Llama 3, a powerful open-source LLM, and Microsoft’s €4 billion investment in France’s AI capabilities, however, highlight the need for a reassessment. Rather than computing power, Canada should focus on AI applications, education, and industry adoption.[85]

VI. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data (and the network effects data can generate) are not the source of barriers to entry that they are sometimes made out to be. The picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[86]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five main lessons emerge:

  1. Data can be (very) valuable, but beyond a certain threshold, the benefits of additional data tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps effects stemming from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for competition-policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, Dall-E, Midjourney, etc.

This failure suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma,”[87] something about the incumbent platforms’ existing services and capabilities was holding them back in those markets. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI market starts addressing issues of monetization and scale.[88] But it does mean that assumptions about a firm’s market power based on its possession of data are off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that the most significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms—although nothing is certain at this stage. Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), it is unclear that the remedies being contemplated would do more good than harm. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative-AI markets.[89] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[90]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that leads competition authorities to believe that data-incumbency advantages are likely to harm competition in generative AI markets—or even in the data-intensive Web 2.0 markets that preceded them. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

[1] Competition Bureau Canada, Artificial Intelligence and Competition, Discussion Paper (Mar. 2024),

[2] See, e.g., Michael Chui, et al., The Economic Potential of Generative AI: The Next Productivity Frontier, McKinsey (Jun. 14, 2023),

[3] See, e.g., Zhuoran Qiao, Weili Nie, Arash Vahdat, Thomas F. Miller III, & Animashree Anandkumar, State-Specific Protein–Ligand Complex Structure Prediction with a Multiscale Deep Generative Model, 6 Nature Machine Intelligence, 195-208 (2024); see also Jaemin Seo, Sang Kyeun Kim, Azarakhsh Jalalvand, Rory Conlin, Andrew Rothstein, Joseph Abbate, Keith Erickson, Josiah Wai, Ricardo Shousha, & Egemen Kolemen, Avoiding Fusion Plasma Tearing Instability with Deep Reinforcement Learning, 626 Nature, 746-751 (2024).

[4] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (Jan. 9, 2024),

[5] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013),

[6] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23 (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors.”).

[7] See, generally, Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023); Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010- (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[8] See, e.g., Press Release, European Commission, supra note 4; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (Nov. 3, 2023), (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[9] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”); Bertin Martins, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021),

[10] See Rana Foroohar, The Great US-Europe Antitrust Divide, FT (Feb. 5, 2024),

[11] See, e.g., Press Release, European Commission, supra note 4.

[12] See infra, Section II. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has Llama. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[13] Press Release, European Commission, supra note 4.

[14] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023), at 4, (emphasis added).

[15] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (Sep. 15, 2023), (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[16] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023); see also, e.g., Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (Dec. 14, 2023); A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023); 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, (last visited Jan. 18, 2023).

[17] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (Oct. 4, 2021); see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (Nov. 20, 2023); Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (Jul. 19, 2019),

[18] See infra Section III.

[19] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[20] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024),

[21] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[22] See also Yun, supra note 20 at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[23] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects) see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo Mason L. Rev. 1281, 1344 (2021).

[24] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[25] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms…”.

[26] Id.

[27] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 10080, 101031 (2021).

[28] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Pol’y 143, 167 (2020).

[29] See Hagiu & Wright, supra note 24.

[30] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at; see also Manne & Auer, supra note 23, at 1330.

[31] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at

[32] Id. at 34.

[33] Id. at 35. To its credit, it should be noted, the Furman Report counsels caution before mandating access to data as a remedy to promote competition. See id. at 75. With that said, the Furman Report maintains that such a remedy should certainly be on the table, because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. In fact, the evidence does not show this.

[34] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), at 455.

[35] Id. at 896.

[36] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024),

[37] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at

[38] Amended Complaint (D.D.C), supra note 7 at ¶37.

[39] Amended Complaint (E.D. Va), supra note 7 at ¶8.

[40] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at

[41] Merger Assessment Guidelines, Competition and Mkts. Auth. (2021) at ¶7.19(e), available at–_.pdf.

[42] Furman Report, supra note 31, at ¶4.

[43] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023),

[44] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023),; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023),

[45] See Google Trends,,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024) and,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[46] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (Jun. 5, 2023),

[47] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (Sep. 27, 2023),; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023),

[48] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023),; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023),

[49] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023),; Imagine with Meta AI, Meta (last visited Jan. 12, 2024),

[50] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[51] Manne & Auer, supra note 23, at 1345.

[52] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[53] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[54] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[55] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan. 15, 2024),

[56] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023),

[57] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[58] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023),

[59] Id.; see also GSM8K, Papers with Code, available at (last visited Jan. 18, 2023); MATH Dataset, GitHub, available at (last visited Jan. 18, 2024).

[60] Lee, supra note 58.

[61] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014)).

[62] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[63] Or, as John Yun puts it, data is only a small component of digital firms’ production function. See Yun, supra note 20, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[64] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History–Computer (Aug. 8, 2023),

[65] Introducing the GPT Store, Open AI (Jan. 10, 2024),

[66] See Michael Schade, How ChatGPT and Our Language Models are Developed, OpenAI,; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (Jan. 1, 2022),; Danny Hernandez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020),

[67] See Yun, supra note 20 at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[68] Discussion Paper, Section 3.1.6, “Consideration for mergers”.

[69] See Jonathan M. Barnett, “Killer Acquisitions” Reexamined: Economic Hyperbole in the Age of Populist Antitrust, 3 U. Chi. Bus. L. Rev. 39 (2023).

[70] Id. at 85 (“At the same time, these transactions enhance competitive conditions by supporting the profit expectations that elicit VC investment in the startups that deliver the most transformative types of innovation to the biopharmaceutical ecosystem (and, in some cases, mature into larger firms that can challenge incumbents).”).

[71] Cade Metz, Karen Weise, & Mike Isaac, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, N.Y. Times (Jan. 29, 2024),

[72] See, e.g., Chris Metinko, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, CrunchBase (Jun. 12, 2024),

[73] CMA Seeks Views on AI Partnerships and Other Arrangements, Competition and Mkts. Auth. (Apr. 24, 2024),

[74] AI, of course, is not a market (at least not a relevant antitrust market). Within the realm of what is called “AI”, companies offer myriad products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[75] Start-ups, Killer Acquisitions and Merger Control, OECD (2020), available at

[76] Kate Rooney & Hayden Field, Amazon Spends $2.75 Billion on AI Startup Anthropic in Its Largest Venture Investment Yet, CNBC (Mar. 27, 2024),

[77] Id.

[78] Tom Warren, Microsoft Partners with Mistral in Second AI Deal Beyond OpenAI, The Verge (Feb. 26, 2024),

[79] Mark Sullivan, Microsoft’s Inflection AI Grab Likely Cost More Than $1 Billion, Says An Insider (Exclusive), Fast Company (Mar. 26, 2024),; see also, Mustafa Suleyman, DeepMind and Inflection Co-Founder, Joins Microsoft to Lead Copilot, Microsoft Corporate Blogs (Mar. 19, 2024),; Krystal Hu & Harshita Mary Varghese, Microsoft Pays Inflection $ 650 Mln in Licensing Deal While Poaching Top Talent, Source Says, Reuters (Mar. 21, 2024),; The New Inflection: An Important Change to How We’ll Work, Inflection (Mar. 19, 2024),; Julie Bort, Here’s How Microsoft Is Providing a ‘Good Outcome’ for Inflection AI VCs, as Reid Hoffman Promised, TechCrunch (Mar. 21, 2024),

[80] Canada Leads the World in AI Talent Concentration, Deloitte (Sep. 27, 2023),

[81] Government of Canada, Bill C-27,

[82] See, e.g., Aaron Wudrick, Government Overregulation Could Jeopardize Canada’s Artificial Intelligence Chances, Globe and Mail (Apr. 1, 2024),

[83] Howard Solomon, Meta May Not Bring Some Products to Canada Unless Proposed AI Law Changed, Parliament Told, IT World Canada (Feb. 8, 2024),

[84] Elissa Strome, Canada’s Got AI Talent. Let’s Keep It Here, Policy Opinions (Feb. 2, 2024),

[85] Joel Blit & Jimmy Lin, Canada’s Planned $2.4-Billion Artificial Intelligence Investment Is Already Mostly Obsolete, Globe and Mail (May 19, 2024),

[86] Lerner, supra note 61, at 4-5 (emphasis added).

[87] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[88] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[89] See Hagiu & Wright, supra note 24, at 24 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”); see also Lerner, supra note 61.

[90] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).