Showing 9 of 122 Publications by Dirk Auer

ICLE Comments on India’s Draft Digital Competition Act

Regulatory Comments

A year after it was created by the Government of India’s Ministry of Corporate Affairs to examine the need for a separate law on competition in digital markets, India’s Committee on Digital Competition Law (CDCL) in February both published its report[1] recommending adoption of such rules and submitted the draft Digital Competition Act (DCA), which is virtually identical to the European Union’s Digital Markets Act (DMA).[2]

The EU has touted its new regulation as essential to ensure “fairness and contestability” in digital markets. And since it entered into force early last month,[3] the DMA has imposed strict pre-emptive rules on so-called digital “gatekeepers,”[4] a cohort of mostly American tech giants like Google, Amazon, Apple, Meta, and Microsoft.

But despite the impressive public-relations campaign[5] that the DMA’s proponents have been able to mount internationally, India should be wary of reflexively importing these ready-made and putatively infallible solutions that promise to “fix” the world’s most successful digital platforms at little or no cost.

I. Not So Fast

The first question India should ask itself is: why?[6] Echoing the European Commission, the CDCL argues that strict ex-ante rules are needed because competition-law investigations in digital markets are too time-consuming. But this could be a feature, not a bug, of competition law. Digital markets often involve novel business models and zero or low-price products, meaning that there is nearly always a plausible pro-competitive explanation for the impugned conduct.

When designing rules and presumptions in a world of imperfect information, the general theme is that, as confidence in public harm goes up, the evidentiary burden must go down. This is why antitrust law tilts the field in the enforcer’s favor in cases involving practices that are known to always, or almost always, be harmful. But none of the conduct covered by the DCA falls into this category. Unlike with, say, price-fixing cartels or territorial divisions, there is currently no consensus that the practices the DMA would prohibit are generally harmful or anticompetitive. To the contrary, when assessing a self-preferencing case against Google in 2018, the Competition Commission of India (CCI) found important consumer benefits[7] that outweighed any inconveniences the conduct may have imposed on competitors.

By imposing per se rules with no scope for consumer-welfare or efficiency exemptions, the DCA could capture swaths of procompetitive conduct. This is a steep—and possibly irrational—price to pay for administrative expediency. Rather than adopt a “speed-at-all-costs” approach, India should design its rules to minimize error costs and ensure the system’s overall efficiency.

II. The Costs of Ignoring Cost-Benefit Analysis

But this cannot be done, or it cannot be done rationally, unless India is crystal clear about what the costs and benefits of digital-competition regulation are. As things stand, it is unclear whether this question has been given sufficient thought.

For one, the DCA’s goals do not seem to align well with competition law. While competition law protects competition for the ultimate benefit of consumers, the DCA—like the DMA—is concerned with aiding rivals, rather than benefiting consumers. Unmooring digital competition regulation from consumer welfare is ill-advised. It opens the enforcer to aggressive rent seeking by private parties with a vested interest in never being satisfied,[8] who may demand far-reaching product-design changes that don’t jibe with what consumers—i.e., the public at large—actually want.

Indeed, when the system’s lodestar shifts from benefiting consumers to facilitating competitors, there is a risk that the only tangible measure of the law’s success will be the extent to which rivals are satisfied[9] with gatekeepers’ product-design changes, and their relative market-share fluctuations. Sure enough, the European Commission recently cited stakeholders’ dissatisfaction[10] as one of the primary reasons to launch five DMA noncompliance investigations, mere weeks after the law’s entry into force. In the DCA’s case, the Central Government’s ability to control CCI decisions further exacerbates the risk of capture and political decision making.

While digital-competition regulation’s expected benefits remain unclear and difficult to measure, there are at least three concrete types of costs that India can, and should, consider.

First, there is the cost of harming consumers and diminishing innovation. Mounting evidence from the EU demonstrates this to be a very real risk. For example, Meta’s Threads was delayed[11] in the EU due to uncertainties about compliance with the DMA. The same happened with Gemini, Google’s AI program.[12] Some product functionalities have also been degraded. For instance, in order to comply with the DMA’s strict self-preferencing prohibitions, maps that appear in Google’s search results no longer link to Google Maps, much to the chagrin of European users.[13]

Google has also been forced to remove[14] features like hotel bookings and reviews from its search results. Until it can accommodate competitors who offer similar services (assuming that is even possible), these specialized search results will remain buried several clicks away from users’ general searches. Not only is this inconvenient for consumers, but it has important ramifications for business users.

Early estimates suggest that clicks from Google ads to hotel websites decreased by 17.6%[15] as a result of the DMA. Meanwhile, on iOS, rivals like Meta[16] and Epic Games[17] are finding it harder than they expected to offer competing app stores or payment services. At least some of this is due to the reality that offering safe online services is a costly endeavor. Apple reviews millions of apps every year[18] to weed out bad actors, and replicating this business is easier said than done. In other words, the DMA is falling short even on its own terms.

In other cases, consumers are likely to be saddled with a litany of pointless choices, as well as changes in product design that undermine user experience. For example, the European Commission appears to believe that the best way to ensure that Apple doesn’t favor its own browser on iOS is by requiring consumers to sift through 12 browser offerings[19] presented on a choice screen.[20] But consumers haven’t asked for this “choice.” The simple explanation for the policy’s failure is that, despite the DMA’s insistence to the contrary, users were always free to choose their preferred browser.

Supporters of digital-competition regulation will no doubt retort that India should also consider the costs of inaction. This is certainly true. But it should do so against the background of the existing legal framework, not a hypothetical legal and regulatory vacuum. Digital platforms are already subject to general (and fully functional) competition law, as well as to a range of other sector-specific regulations.

For instance, Amazon and Flipkart are precluded by India’s foreign-direct-investment (FDI) policy from offering first-party sales[21] to end-users on their e-commerce platforms. In addition, the CCI has launched several investigations of digital-platform conduct that would presumably be caught by the DCA, including by Google,[22] Amazon,[23] Meta,[24] Apple,[25] and Flipkart.[26]

The facile dichotomy made between digital-competition regulation and “the digital wild west”[27] is essentially a red herring. Nobody is saying that digital platforms should be above the law. Rather, the question is whether a special competition law is necessary and justified considering the costs such a law would engender, as well as the availability of other legal and regulatory instruments to tackle the same conduct.

This is particularly the case when these legal and regulatory instruments incorporate time-honed analytical tools, heuristics, and procedural safeguards. In 2019, India’s Competition Law Review Committee[28] concluded that a special law was unnecessary. In a report titled “Competition Policy for the Digital Era,”[29] a panel of experts retained by the European Commission reached the same conclusion.

Second, and complicating the question further still, the DCA would mark a paradigm shift for Indian competition policy. In 2000, the Raghavan Committee Report was crucial in aligning Indian competition law with international best practices, including by moving analysis away from blunt structural presumptions and toward the careful observance of economic effects. As such, it paved the way for the 2002 Competition Act—a milestone of Indian law.

The DCA, by contrast, would overturn these advancements to target companies based on size, obviating any effects analysis. This would amount to taking Indian competition law back to the era of the Monopolies and Restrictive Trade Practices Act of 1969 (MRTP). Again, is the hodgepodge of products and services known collectively as “digital markets” sufficiently unique to warrant such a drastic deviation from well-established antitrust doctrine?

The third group of costs that the government must consider is the DCA’s enforcement costs. The five DMA noncompliance investigations launched recently by the European Commission have served to dispel the once-common belief that the law would be “self-executing”[30] and that its enforcement would be collaborative, rather than adversarial. With just 80 dedicated staff,[31] many believe the Commission is understaffed[32] to enforce the DMA (initially, the most optimistic officials asked for 220 full-time employees).[33] If the EU—a sprawling regulatory superstate[34]—struggles to find the capacity to deploy digital-competition rules, can India expect to fare any better?

Enforcing the DCA would require expertise in a range of fields, including competition law, data privacy and security, telecommunications, and consumer protection, among others. Either India can produce these new experts, or it will have to siphon them from somewhere else. This raises the question of opportunity costs. Assuming that India even can build a team to enforce the DCA, the government would also need to be reasonably certain that, given the significant overlaps in expertise, these resources wouldn’t yield better returns if allocated elsewhere—such as, for example, in the fight against cartels or other more obviously nefarious conduct.

In short, if the government cannot answer the question of how much the Indian public stands to gain for every rupee of public money invested in enforcing the DCA, it should go back to the drawing board and either redesign or drop the DCA altogether.

III. India Is Not Europe

When deciding whether to adopt digital-competition rules, India should consider its own interests and play to its strengths. These need not be the same as Europe’s and, indeed, it would be surprising if they were. Despite the European Commission’s insistence to the contrary, the DMA is not a law that enshrines general or universal economic truths. It is, and always has been, an industrial policy tool,[35] designed to align with the EU’s strengths, weaknesses, and strategic priorities. One cannot just assume that these idiosyncrasies translate into the Indian context.

As International Center for Law & Economics President Geoffrey Manne has written,[36] promotion of investment in the infrastructure required to facilitate economic growth and provision of a secure environment for ongoing innovation are both crucial to the success of developing markets like India’s. Securing these conditions demands dynamic and flexible competition policymaking.

For young, rapidly growing industries like e-commerce and other digital markets, it is essential to attract consistent investment and industry know-how in order to ensure that such markets are able to innovate and evolve to meet consumer demand. India has already witnessed a few leading platforms help build the necessary infrastructure during the nascent stages of sectoral development; continued investment along these lines will be essential to ensure continued consumer benefits.

In the above context, emulating the EU’s DMA approach could be a catastrophic mistake. Indian digital platforms are still not as mature as the EU’s, and a copy-and-paste of the DMA may prove unfit for the particular attributes of India’s market. The DCA could potentially capture many Indian companies. Paytm, Zomato, Ola Cabs, Nykaa, AllTheRooms, Swiggy, Flipkart, MakeMyTrip, and Meesho (among others) are some of the companies that could be stifled by this new regulatory straitjacket.

This would not only harm India’s competitiveness, but would also deny consumers important benefits. Despite India’s remarkable economic growth over the last decade, it remains underserved by the most powerful consumer and business technologies, relative to its peers in Europe and North America. The priority should be to continue to attract and nurture investment, not to impose regulations that may further slow the deployment of critical infrastructure.

Indeed, this also raises the question of whether the EU’s objectives with the DMA are even ones that India would want to emulate. While the DMA’s effects are likely to be varied, it is clear that one major impetus for the law is distributional: to ensure that platform users earn a “fair share” of the benefits they generate. Such an approach could backfire, however, as using competition policy to reduce profits may simply lead to less innovation and significantly reduced benefits for the very consumers it is supposed to help. This risk is significantly magnified in India, where the primary need is to ensure the introduction and maintenance of innovative technology, rather than fine-tuning the precise distribution of its rewards.

A DMA-like approach could imperil the domestic innovation that has been the backbone of initiatives like Digital India[37] and Startup India.[38] Implementation of a DMA-like regime would discourage growing companies that may not be able to cope with the increased compliance burden. It would also impose enormous regulatory burdens on the government and great uncertainty for businesses, as a DMA-like regime would require the government to define and quantify competitive benchmarks for industries that have not yet even grown out of their nascent stages. At a crucial juncture when India is seen as an investment-friendly nation,[39] implementation of a DMA-like regime could create significant roadblocks to investment—all without any obligation on the part of the government to ensure that consumers benefit.

This is because ex-ante regimes impose preemptive constraints on digital platforms, with no consideration of possible efficiencies that benefit consumers. While competition enforcement in general may tend to promote innovation, jurisdictions that do not allow for efficiency defenses tend to produce relatively less innovation, as careful, case-by-case competition enforcement is replaced with preemptive prohibitions that impede experimentation.

Regulation of digital markets that have yet to reach full maturity is bound to create a more restrictive environment that will harm economic growth, technological advancement, and investment. For India, it is crucial that a nuanced approach is taken to ensure that digital markets can sustain their momentum, without being bogged down by various and unnecessary compliance requirements that are likely to do more harm than good.

IV. Conclusion

In a multi-polar world, developing countries can no longer be expected to mechanically adopt the laws and regulations demanded of them by senior partners to trade agreements and international organizations. Nor should they blindly defer to foreign legislatures, which may (and likely do) have vastly different interests and priorities than their own.

Nobody is denying that the EU has provided many useful legal and regulatory blueprints in the past, many of which work just as well abroad as they do at home. But based on what we know so far, the DMA is not poised to become one of them. It is overly stringent, ignores efficiencies, is indifferent about effects on consumers, incorporates few procedural safeguards, is lukewarm on cost-benefit analysis, and risks subverting well-established competition-law principles. These notably include that the law should ultimately protect competition, not competitors.

Rather than instinctively playing catch-up, India could ask the hard questions that the EU eschewed for the sake of a quick political victory against popular bogeymen. What is this law trying to achieve? What are the DCA’s supposed benefits? What are its potential costs? Do those benefits outweigh those costs? If the answer to these questions is ambivalent or negative, India’s digital future may well lie elsewhere.

[1] Report of the Committee on Digital Competition Law, Government of India Ministry of Corporate Affairs (Feb. 27, 2024), https://www.mca.gov.in/bin/dms/getdocument?mds=gzGtvSkE3zIVhAuBe2pbow%253D%253D&type=open.

[2] Regulation (EU) 2022/1925 of the European Parliament and of the Council, on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (Text with EEA relevance), Official Journal of the European Union, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R1925.

[3] Press Release, Designated Gatekeepers Must Now Comply With All Obligations Under the Digital Markets Act, European Commission (Mar. 7, 2024), https://digital-markets-act.ec.europa.eu/designated-gatekeepers-must-now-comply-all-obligations-under-digital-markets-act-2024-03-07_en.

[4] Press Release, Digital Markets Act: Commission Designates Six Gatekeepers, European Commission (Sep. 6, 2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4328.

[5] Press Release, Cade and European Commission Discuss Collaboration on Digital Market Agenda, Ministério da Justiça e Segurança Pública (Mar. 29, 2023), https://www.gov.br/cade/en/matters/news/cade-and-european-commission-discuss-collaboration-on-digital-market-agenda.

[6] Summary of Remarks by Jean Tirole, Analysis Group (Sep. 27, 2018), available at https://www.analysisgroup.com/globalassets/uploadedimages/content/insights/ag_features/summary-of-remarks-by-jean-tirole_english.pdf.

[7] Geoffrey A. Manne, Google’s India Case and a Return to Consumer-Focused Antitrust, Truth on the Market (Feb. 8, 2018), https://truthonthemarket.com/2018/02/08/return-to-consumer-focused-antitrust-in-india.

[8] Adam Kovacevich, The Digital Markets Act’s “Statler & Waldorf” Problem, Chamber of Progress, Medium (Mar. 7, 2024), https://medium.com/chamber-of-progress/the-digital-markets-acts-statler-waldorf-problem-2c9b6786bb55.

[9] Id.

[10] Remarks by Executive-Vice President Vestager and Commissioner Breton on the Opening of Non-Compliance Investigations Under the Digital Markets Act, European Commission (Mar. 25, 2024), https://ec.europa.eu/commission/presscorner/detail/en/speech_24_1702.

[11] Makena Kelly, Here’s Why Threads Is Delayed in Europe, The Verge (Jul. 10, 2023), https://www.theverge.com/23789754/threads-meta-twitter-eu-dma-digital-markets.

[12] Andrew Grush, Did You Know Google Gemini Isn’t Available in Europe Yet?, Android Authority (Dec. 7, 2023), https://www.androidauthority.com/did-you-know-google-gemini-isnt-available-in-europe-yet-3392451.

[13] Edith Hancock, ‘Severe Pain in the Butt’: EU’s Digital Competition Rules Make New Enemies on the Internet, Politico (Mar. 25, 2024), https://www.politico.eu/article/european-union-digital-markets-act-google-search-malicious-compliance.

[14] Oliver Bethell, An Update on Our Preparations for the DMA, Google Blog (Jan. 17, 2024), https://blog.google/around-the-globe/google-europe/an-update-on-our-preparations-for-the-dma.

[15] Mirai, LinkedIn (Apr. 17, 2024), https://www.linkedin.com/feed/update/urn:li:activity:7161330551709138945.

[16] Alex Heath, Meta Says Apple Has Made It ‘Very Difficult’ To Build Rival App Stores in the EU, The Verge (Feb. 2, 2024), https://www.theverge.com/2024/2/1/24058572/zuckerberg-meta-apple-app-store-iphone-eu-sideloading.

[17] Id.

[18] 2022 App Store Transparency Report, Apple Inc. (2023), available at https://www.apple.com/legal/more-resources/docs/2022-App-Store-Transparency-Report.pdf.

[19] About the Browser Choice Screen in iOS 17, Apple Developer (Feb. 2024), https://developer.apple.com/support/browser-choice-screen.

[20] Remarks by Executive-Vice President Vestager and Commissioner Breton on the Opening of Non-Compliance Investigations Under the Digital Markets Act, European Commission (Mar. 25, 2024), https://ec.europa.eu/commission/presscorner/detail/en/speech_24_1702.

[21] Saheli Roy Choudhury, If You Hold Amazon Shares, Here’s What You Need to Know About India’s E-Commerce Law, CNBC (Feb. 4, 2019), https://www.cnbc.com/2019/02/05/amazon-how-india-ecommerce-law-will-affect-the-retailer.html.

[22] Press Release, CCI Imposes a Monetary Penalty of Rs.1337.76 Crore on Google for Anti-Competitive Practices in Relation to Android Mobile Devices, Competition Commission of India (Oct. 20, 2022), https://www.cci.gov.in/antitrust/press-release/details/261/0; CCI Orders Probe Into Google’s Play Store Billing Policies, The Economic Times, (Sep. 7, 2023), https://economictimes.indiatimes.com/tech/startups/competition-watchdog-orders-probe-into-googles-play-store-billing-policies/articleshow/108528079.cms.

[23] Why Competition Commission of India Is Investigating Amazon, Outlook (May 1, 2022), https://business.outlookindia.com/news/explained-why-is-competition-commission-of-india-probing-amazon-news-194362.

[24] HC Dismisses Facebook India’s Plea Challenging CCI Probe Into Whatsapp’s 2021 Privacy Policy, The Economic Times (Sep. 7, 2023), https://economictimes.indiatimes.com/tech/technology/women-participation-in-tech-roles-in-non-tech-sectors-to-grow-by-24-3-by-2027-report/articleshow/109374509.cms.

[25] Case No. 24 of 2021, Competition Commission of India (Dec. 31, 2021), https://www.cci.gov.in/antitrust/orders/details/32/0.

[26] Supra note 23.

[27] Anne C. Witt, The Digital Markets Act: Regulating the Wild West, 60(3) Common Market Law Review 625 (2023).

[28] Report of Competition Law Review Committee, Indian Economic Service (Jul. 2019), available at https://www.ies.gov.in/pdfs/Report-Competition-CLRC.pdf.

[29] Jacques Crémer, Yves-Alexandre de Montjoye, & Heike Schweitzer, Competition Policy for the Digital Era, European Commission Directorate-General for Competition (2019), https://data.europa.eu/doi/10.2763/407537.

[30] Strengthening the Digital Markets Act and Its Enforcement, Bundesministerium für Wirtschaft und Klimaschutz (Sep. 7, 2021), available at https://www.bmwk.de/Redaktion/DE/Downloads/XYZ/zweites-gemeinsames-positionspapier-der-friends-of-an-effective-digital-markets-act.pdf.

[31] Meghan McCarty Carino, A New EU Law Aims to Tame Tech Giants. But Enforcing It Could Turn out to Be Tricky, Marketplace (Mar. 7, 2024), https://www.marketplace.org/2024/03/07/a-new-eu-law-aims-to-tame-tech-giants-but-enforcing-it-could-turn-out-to-be-tricky.

[32] Id.

[33] Luca Bertuzzi & Molly Killeen, Digital Brief: DSA Fourth Trilogue, DMA Diverging Views, France’s Fine for Google, EurActiv (Apr. 1, 2022), https://www.euractiv.com/section/digital/news/digital-brief-dsa-fourth-trilogue-dma-diverging-views-frances-fine-for-google.

[34] Anu Bradford, The Brussels Effect: The Rise of a Regulatory Superstate in Europe, Columbia Law School (Jan. 8, 2013), https://www.law.columbia.edu/news/archive/brussels-effect-rise-regulatory-superstate-europe.

[35] Lazar Radic, Gatekeeping, the DMA, and the Future of Competition Regulation, Truth on the Market (Nov. 8, 2023), https://truthonthemarket.com/2023/11/08/gatekeeping-the-dma-and-the-future-of-competition-regulation.

[36] Geoffrey A. Manne, European Union’s Digital Markets Act Not Suitable for Developing Economies, Including India, The Times of India (Feb. 14, 2023), https://timesofindia.indiatimes.com/blogs/voices/european-unions-digital-markets-act-not-suitable-for-developing-economies-including-india.

[37] Digital India, Common Services Centre (Apr. 18, 2024), https://csc.gov.in/digitalIndia.

[38] Startup India, Government of India (Apr. 16, 2024), https://www.startupindia.gov.in.

[39] Invest India, Government of India (Mar. 20, 2024), https://www.investindia.gov.in/why-india.

 


The Future of the DMA: Judge Dredd or Juror 8?

TOTM

When it was passed into law, the European Union’s Digital Markets Act (DMA) was heralded by supporters as a key step toward fairness and contestability in online markets. It has unfortunately become increasingly clear that reality might not live up to those expectations. Indeed, there is mounting evidence that European consumers’ online experiences have been degraded following the DMA’s entry into force.

The perception that the DMA has been a failure is beginning to motivate a not insignificant amount of finger pointing in Brussels. So-called “gatekeeper” firms have blamed heavy-handed regulation for their degraded services, while smaller rivals finger “malicious compliance.”



Bill C-59 and the Use of Structural Merger Presumptions in Canada

Regulatory Comments

We, the undersigned, are scholars from the International Center for Law & Economics (ICLE) with experience in the academy, enforcement agencies, and private practice in competition law. We write to address a key aspect of proposed amendments to Canadian competition law. Specifically, we focus on clauses in Bill C-59 pertinent to mergers and acquisitions and, in particular, the Competition Bureau’s recommendation that the Bill should:

Amend Clauses 249-250 to enact rebuttable presumptions for mergers consistent with those set out in the U.S. Merger Guidelines.[1]

The Bureau’s recommendation seeks to codify in Canadian competition law the structural presumptions outlined in the 2023 U.S. Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) Merger Guidelines.  On balance, however, adoption of that recommendation would impede, rather than promote, fair competition and the welfare of Canadian consumers.

The cornerstone of the proposed change lies in the introduction of rebuttable presumptions of illegality for mergers that exceed specified market-share or concentration thresholds. While this approach may seem intuitive, the economic literature and U.S. enforcement experience militate against its adoption in Canadian law.

The goal of enhancing—indeed, strengthening—Canadian competition law should not be conflated with the adoption of foreign regulatory guidelines. The most recent U.S. Merger Guidelines establish new structural thresholds, based primarily on the Herfindahl-Hirschman Index (HHI) and market share, to establish presumptions of anticompetitive effects and illegality. Those structural presumptions, adopted a few short months ago, are inconsistent with established economic literature and are untested in U.S. courts. Those U.S. guidelines should not be codified in Canadian law without robust deliberation to ensure alignment with Canadian legal principles, on the one hand, and with economic realities and evidence, on the other.
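Because the debate here turns on the HHI, a brief numerical illustration may help. The HHI is simply the sum of the squared market shares of every firm in a market (expressed in percentage points, so it ranges from near 0 to 10,000), and a merger's effect is the change in that sum. The sketch below is ours, not the Bureau's or the Guidelines'; the specific threshold values are assumptions drawn from the 2010 and 2023 U.S. guidelines (roughly, post-merger HHI above 2,500 with a change above 200 in 2010, versus above 1,800 with a change above 100 in 2023):

```python
# Illustrative sketch only: how an HHI-based structural screen works.
# The HHI is the sum of squared market shares (in percentage points).
# The threshold values used below are assumptions taken from the 2010
# and 2023 U.S. merger guidelines, not from Bill C-59 itself.

def hhi(shares):
    """HHI for a market; shares are percentages summing to at most 100."""
    return sum(s ** 2 for s in shares)

def merger_delta(shares, i, j):
    """Pre- and post-merger HHI, and the change, if firms i and j merge."""
    pre = hhi(shares)
    post_shares = [s for k, s in enumerate(shares) if k not in (i, j)]
    post_shares.append(shares[i] + shares[j])
    post = hhi(post_shares)
    return pre, post, post - pre

def structural_presumption(post, delta, hhi_floor, delta_floor):
    """Generic screen: flag when post-merger HHI and the change are both high."""
    return post > hhi_floor and delta > delta_floor

# Example: five firms; the two smallest (15% and 10%) merge.
pre, post, delta = merger_delta([30, 25, 20, 15, 10], 3, 4)
print(pre, post, delta)                                   # 2250 2550 300
print(structural_presumption(post, delta, 2500, 200))     # 2010-style screen
print(structural_presumption(post, delta, 1800, 100))     # 2023-style screen
```

In this example, the merger trips both screens. The point made in the text, however, is that this arithmetic says nothing, by itself, about prices, innovation, or consumer welfare; that is precisely why codifying such thresholds as legal presumptions is contested.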

Three points are especially important. First, concentration measures are widely considered to be a poor proxy for the level of competition that prevails in a given market. Second, lower merger thresholds may lead to enforcement errors that discourage investment and entrepreneurial activity and allocate enforcement resources to the wrong cases. Finally, these risks are particularly acute when concentration thresholds are used not as useful indicators but, instead, as actual legal presumptions (albeit rebuttable ones). We discuss each of these points in more detail below.

What Concentration Measures Can and Cannot Tell Us About Competition

While the use of concentration measures and thresholds can provide a useful preliminary-screening mechanism to identify potentially problematic mergers, substantially lowering the thresholds to establish a presumption of illegality is inadvisable for several reasons.

First, too strong a reliance on concentration measures lacks economic foundation and is likely prone to frequent error. Economists have been studying the relationship between concentration and various potential indicia of anticompetitive effects—price, markup, profits, rate of return, etc.—for decades.[2] There are hundreds of empirical studies addressing this topic.[3]

The assumption that “too much” concentration is harmful assumes both that the structure of a market is what determines economic outcomes and that anyone could know what the “right” amount of concentration is. But as economists have understood since at least the 1970s (and despite an extremely vigorous, but futile, effort to show otherwise), market structure does not determine outcomes.[4]

This skepticism toward concentration measures as a guide for policy is well-supported, and is held by scholars across the political spectrum. To take one prominent, recent example, professors Fiona Scott Morton (deputy assistant U.S. attorney general for economics in the DOJ Antitrust Division under President Barack Obama, now at Yale University); Martin Gaynor (former director of the FTC Bureau of Economics under President Obama, now serving as special advisor to Assistant U.S. Attorney General Jonathan Kanter, on leave from Carnegie Mellon University); and Steven Berry (an industrial-organization economist at Yale University) surveyed the industrial-organization literature and found that presumptions based on measures of concentration are unlikely to provide sound guidance for public policy:

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration.…

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates.[5]

As Chad Syverson recently summarized:

Perhaps the deepest conceptual problem with concentration as a measure of market power is that it is an outcome, not an immutable core determinant of how competitive an industry or market is… As a result, concentration is worse than just a noisy barometer of market power. Instead, we cannot even generally know which way the barometer is oriented.[6]

This does not mean that concentration measures have no use in merger screening. But market concentration is often unrelated to antitrust-enforcement goals, because it is driven by factors that are endogenous to each industry. Enforcers should not rely too heavily on structural presumptions based on concentration measures, as these may be poor indicators of the instances in which antitrust enforcement is most beneficial to competition and consumers.

At What Level Should Thresholds Be Set?

Second, if concentration measures are to be used in some fashion, at what level or levels should they be set?

The U.S. 2010 Horizontal Merger Guidelines were “based on updated HHI thresholds that more accurately reflect actual enforcement practice.”[7] These numbers were updated in 2023, but without clear justification. While the U.S. enforcement authorities cite several old cases (cases that implicated considerably higher levels of concentration than those in their 2023 guidelines), we agree with comments submitted in 2022 by now-FTC Bureau of Economics Director Aviv Nevo and colleagues, who argued against such a change. They wrote:

Our view is that this would not be the most productive route for the agencies to pursue to successfully prevent harmful mergers, and could backfire by putting even further emphasis on market definition and structural presumptions.

If the agencies were to substantially change the presumption thresholds, they would also need to persuade courts that the new thresholds were at the right level. Is the evidence there to do so? The existing body of research on this question is, today, thin and mostly based on individual case studies in a handful of industries. Our reading of the literature is that it is not clear and persuasive enough, at this point in time, to support a substantially different threshold that will be applied across the board to all industries and market conditions. (emphasis added) [8]

Lower merger thresholds create several risks. One is that such thresholds will lead to excessive “false positives”; that is, too many presumptions against mergers that are likely to be procompetitive or benign. This is particularly likely to occur if enforcers make it harder for parties to rebut the presumptions, e.g., by requiring stronger evidence the higher the parties are above the (now-lowered) threshold. Raising barriers to establishing efficiencies and other countervailing factors makes it more likely that procompetitive mergers will be blocked. This not only risks depriving consumers of lower prices and greater innovation in specific cases, but chills beneficial merger-and-acquisition activity more broadly. The prospect of an overly stringent enforcement regime discourages investment and entrepreneurial activity. It also allocates scarce enforcement resources to the wrong cases.
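For readers less familiar with the mechanics, the HHI referenced throughout this section is simply the sum of squared market shares (expressed in percentage points). The following sketch, using entirely hypothetical market shares, illustrates how an HHI and a merger's change in HHI are computed; the concentration bands follow the 2010 U.S. Horizontal Merger Guidelines (unconcentrated below 1,500; moderately concentrated between 1,500 and 2,500; highly concentrated above 2,500):

```python
# Illustrative sketch only: hypothetical market shares, with concentration
# bands as defined in the 2010 U.S. Horizontal Merger Guidelines.

def hhi(shares):
    """HHI = sum of squared market shares, shares in percentage points."""
    return sum(s ** 2 for s in shares)

def classify(index):
    """Concentration band per the 2010 Horizontal Merger Guidelines."""
    if index < 1500:
        return "unconcentrated"
    if index <= 2500:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical pre-merger market with five firms.
pre = [30, 25, 20, 15, 10]
# Suppose the two smallest firms merge.
post = [30, 25, 20, 25]

pre_hhi, post_hhi = hhi(pre), hhi(post)
delta = post_hhi - pre_hhi  # for a two-firm merger, delta = 2 * s1 * s2

print(pre_hhi, classify(pre_hhi))    # 2250 moderately concentrated
print(post_hhi, classify(post_hhi))  # 2550 highly concentrated
print(delta)                         # 300
```

The example makes the stakes of threshold-setting concrete: a merger of the two smallest firms in this hypothetical market crosses from "moderately" to "highly" concentrated, even though the merged entity would still not be the market leader.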

Changing the Character of Structural Presumptions

Finally, the risks described above are particularly acute, given the change in the character of structural presumptions described in the U.S. Merger Guidelines. The 2023 Merger Guidelines—and only the 2023 Merger Guidelines—state that certain structural features of mergers will raise a “presumption of illegality.”[9]

U.S. merger guidelines published in 1982,[10] 1992 (revised in 1997),[11] and 2010[12] all describe structural thresholds seen by the agencies as pertinent to merger screening. None of them mention a “presumption of illegality.” In fact, as the U.S. agencies put it in the 2010 Horizontal Merger Guidelines:

The purpose of these thresholds is not to provide a rigid screen to separate competitively benign mergers from anticompetitive ones, although high levels of concentration do raise concerns. Rather, they provide one way to identify some mergers unlikely to raise competitive concerns and some others for which it is particularly important to examine whether other competitive factors confirm, reinforce, or counteract the potentially harmful effects of increased concentration.[13]

The most worrisome category of mergers identified in the 1992 U.S. merger guidelines was presumed “likely to create or enhance market power or facilitate its exercise.” The 1982 guidelines did not describe “presumptions” so much as identify certain mergers that may be matters of “significant competitive concern” and “likely” to be subject to challenge.

Hence, earlier editions of the U.S. merger guidelines describe the ways that structural features of mergers might inform, but not determine, internal agency analysis of those mergers. That was useful information for industry, the bar, and the courts. Equally useful were descriptions of mergers that were “unlikely to have adverse competitive effects and ordinarily require no further analysis,”[14] as well as intermediate types of mergers that “potentially raise significant competitive concerns and often warrant scrutiny.”[15]

Similarly, the 1992 U.S. merger guidelines identified a tier of mergers deemed “unlikely to have adverse competitive effects and ordinarily require no further analysis,” as well as intermediate categories of mergers either unlikely to have anticompetitive effects or, in the alternative, potentially raising significant competitive concerns, depending on various factors described elsewhere in the guidelines.[16]

By way of contrast, the new U.S. guidelines include no description of any mergers that are unlikely to have adverse competitive effects. And while the new merger guidelines do stipulate that the “presumption of illegality can be rebutted or disproved,” they offer very limited means of rebuttal.

This is at odds with prior U.S. agency practice and established U.S. law. Until very recently, U.S. agency staff sought to understand proposed mergers under the totality of their circumstances, much as U.S. courts came to do. Structural features of mergers (among many others) might raise concerns of greater or lesser degrees. These might lead to additional questions in some instances; more substantial inquiries under a “second request” in a minority of instances; or, eventually, a complaint against a very small minority of proposed mergers. In the alternative, they might help staff avoid wasting scarce resources on mergers “unlikely to have anticompetitive effects.”

Prior to a hearing or a trial on the merits, there might be strong, weak, or no appreciable assessments of likely liability, but there was no prima facie determination of illegality.

And while U.S. merger trials did tend to follow a burden-shifting framework for plaintiff and defendant production, they looked to the “totality of the circumstances”[17] and a transaction’s “probable effect on future competition”[18] to determine liability, rather than to strong structural presumptions. As then-U.S. Circuit Judge Clarence Thomas observed in the Baker-Hughes case:

General Dynamics began a line of decisions differing markedly in emphasis from the Court’s antitrust cases of the 1960s. Instead of accepting a firm’s market share as virtually conclusive proof of its market power, the Court carefully analyzed defendants’ rebuttal evidence.[19]

Central to the holding in Baker Hughes—and contra the 2023 U.S. merger guidelines—was that, because the government’s prima facie burden of production was low, the defendant’s rebuttal burden should not be unduly onerous.[20] As the U.S. Supreme Court had put it, defendants would not be required to clearly disprove anticompetitive effects, but rather, simply to “show that the concentration ratios, which can be unreliable indicators of actual market behavior . . . did not accurately depict the economic characteristics of the [relevant] market.”[21]

Doing so would not end the matter. Rather, “the burden of producing additional evidence of anticompetitive effects shifts to the government, and merges with the ultimate burden of persuasion, which remains with the government at all times.”[22]

As the U.S. Supreme Court decision in Marine Bancorporation underscores, even by 1974, it was well understood that concentration ratios “can be unreliable indicators” of market behavior and competitive effects.

As explained above, research and enforcement over the ensuing decades have undermined reliance on structural presumptions even further. As a consequence, the structure-conduct-performance paradigm has been largely abandoned, because it is widely recognized that market structure is not outcome-determinative.

That is not to say that high concentration cannot have any signaling value in preliminary agency screening of merger matters. But concentration metrics that have proven to be unreliable indicators of firm behavior and competitive effects should not be enshrined in Canadian statutory law. That would be a step back, not a step forward, for merger enforcement.

 

[1] Matthew Boswell, Letter to the Chair and Members of the House of Commons Standing Committee on Finance, Competition Bureau Canada (Mar. 1, 2024), available at https://sencanada.ca/Content/Sen/Committee/441/NFFN/briefs/SM-C-59_CompetitionBureauofCND_e.pdf.

[2] For a few examples from a very large body of literature, see, e.g., Steven Berry, Martin Gaynor, & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33 J. Econ. Persp. 44 (2019); Richard Schmalensee, Inter-Industry Studies of Structure and Performance, in 2 Handbook of Industrial Organization 951-1009 (Richard Schmalensee & Robert Willig, eds., 1989); William N. Evans, Luke M. Froeb, & Gregory J. Werden, Endogeneity in the Concentration-Price Relationship: Causes, Consequences, and Cures, 41 J. Indus. Econ. 431 (1993); Steven Berry, Market Structure and Competition, Redux, FTC Micro Conference (Nov. 2017), available at https://www.ftc.gov/system/files/documents/public_events/1208143/22_-_steven_berry_keynote.pdf; Nathan Miller et al., On the Misuse of Regressions of Price on the HHI in Merger Review, 10 J. Antitrust Enforcement 248 (2022).

[3] Id.

[4] See Harold Demsetz, Industry Structure, Market Rivalry, and Public Policy, 16 J. L. & Econ. 1 (1973).

[5] Berry, Gaynor, & Scott Morton, supra note 2.

[6] Chad Syverson, Macroeconomics and Market Power: Context, Implications, and Open Questions, 33 J. Econ. Persp. 23, 26 (2019).

[7] Joseph Farrell & Carl Shapiro, The 2010 Horizontal Merger Guidelines After 10 Years, 58 Rev. Indus. Org. 58 (2021), available at https://link.springer.com/article/10.1007/s11151-020-09807-6.

[8] John Asker et al., Comments on the January 2022 DOJ and FTC RFI on Merger Enforcement (Apr. 20, 2022), available at https://www.regulations.gov/comment/FTC-2022-0003-1847, at 15-16.

[9] U.S. Dep’t Justice & Fed. Trade Comm’n, Merger Guidelines (Guideline One) (Dec. 18, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[10] U.S. Dep’t Justice, 1982 Merger Guidelines (1982), https://www.justice.gov/archives/atr/1982-merger-guidelines.

[11] U.S. Dep’t Justice & Fed. Trade Comm’n, 1992 Merger Guidelines (1992), https://www.justice.gov/archives/atr/1992-merger-guidelines; U.S. Dep’t Justice & Fed. Trade Comm’n, 1997 Merger Guidelines (1997), https://www.justice.gov/archives/atr/1997-merger-guidelines.

[12] U.S. Dep’t Justice & Fed. Trade Comm’n, Horizontal Merger Guidelines (Aug. 19, 2010), https://www.justice.gov/atr/horizontal-merger-guidelines-08192010. The U.S. antitrust agencies also issued Vertical Merger Guidelines in 2020. Although these were formally withdrawn in 2021 by the FTC (but not DOJ), they too are supplanted by the 2023 Merger Guidelines. See U.S. Dep’t Justice & Fed. Trade Comm’n, Vertical Merger Guidelines (Jun. 30, 2020), available at https://www.ftc.gov/system/files/documents/public_statements/1580003/vertical_merger_guidelines_6-30-20.pdf.

[13] 2010 Horizontal Merger Guidelines.

[14] Id.

[15] Id.

[16] 1992 Merger Guidelines.

[17]  United States v. Baker-Hughes Inc., 908 F.2d 981, 984 (D.C. Cir. 1990).

[18] Id. at 991.

[19] Id. at 990 (citing Hospital Corp. of Am. v. FTC, 807 F.2d 1381, 1386 (7th Cir. 1986), cert. denied, 481 U.S. 1038, 107 S.Ct. 1975, 95 L.Ed.2d 815 (1987)).

[20]  Id. at 987, 992.

[21]  United States v. Marine Bancorporation Inc., 418 U.S. 602, 631 (1974) (internal citations omitted).

[22]  Baker-Hughes, 908 F.2d at 983.


The Broken Promises of Europe’s Digital Regulation

TOTM

If you live in Europe, you may have noticed issues with some familiar online services. From consent forms to reduced functionality and new fees, there is a sense that platforms like Amazon, Google, Meta, and Apple are changing the way they do business. 

Many of these changes are the result of a new European regulation called the Digital Markets Act (DMA), which seeks to increase competition in online markets. Under the DMA, so-called “gatekeepers” must allow rivals to access their platforms. Having taken effect March 7, firms now must comply with the regulation, which explains why we are seeing these changes unfold today.

Read the full piece here.


ICLE Comments to European Commission on Competition in Virtual Worlds

Regulatory Comments

Executive Summary

We welcome the opportunity to comment on the European Commission’s call for contributions on competition in “Virtual Worlds”.[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

The metaverse is an exciting and rapidly evolving set of virtual worlds. As with any new technology, concerns about the potential risks and negative consequences that the metaverse may bring have moved policymakers to explore how best to regulate this new space.

From the outset, it is important to recognize that, simply because the metaverse is new, it does not follow that competition in this space is unregulated or somehow ineffective. Existing regulations may not explicitly or exclusively target metaverse ecosystems, but a vast regulatory apparatus already covers most aspects of business in virtual worlds. This includes European competition law, the Digital Markets Act (“DMA”), the General Data Protection Regulation (“GDPR”), the Digital Services Act (“DSA”), and many more. Before it intervenes in this space, the commission should carefully consider whether there are any metaverse-specific problems not already addressed by these legal provisions.

This sense that competition intervention would be premature is reinforced by three important factors.

The first is that competition appears particularly intense in this space (Section I). There are currently multiple firms vying to offer compelling virtual worlds. At the time of writing, however, none appears close to dominating the market. In turn, this intense competition will encourage platforms to design services that meet consumers’ demands, notably in terms of safety and privacy. Nor does the market appear likely to fall into the hands of one of the big tech firms that command a sizeable share of more traditional internet services. Meta notoriously has poured more than $3.99 billion into its metaverse offerings during the first quarter of 2023, in addition to $13.72 billion the previous calendar year.[2] Despite these vast investments and a strategic focus on metaverse services, the company has, thus far, struggled to achieve meaningful traction in the space.[3]

Second, the commission’s primary concern appears to be that metaverses will become insufficiently “open and interoperable”.[4] But to the extent that these ecosystems do, indeed, become closed and proprietary, there is no reason to believe this to be a problem. Closed and proprietary ecosystems have several features that may be attractive to consumers and developers (Section II). These include improved product safety, performance, and ease of development. This is certainly not to say that closed ecosystems are always better than more open ones, but rather that it would be wrong to assume that one model or the other is optimal. Instead, the proper balance depends on tradeoffs that markets are better placed to decide.

Finally, timing is of the essence (Section III). Intervening so early in a fledgling industry’s life cycle is like shooting a moving target from a mile away. New rules or competition interventions might end up being irrelevant. Worse, by signaling that metaverses will be subject to heightened regulatory scrutiny for the foreseeable future, the commission may chill investment from the very firms it purports to support. In short, the commission should resist the urge to intervene so long as the industry is not fully mature.

I. Competing for Consumer Trust

The Commission is right to assume, in its call for contributions, that the extent to which metaverse services compete with each other (and continue to do so in the future) will largely determine whether they fulfil consumers’ expectations and meet the safety and trustworthiness requirements to which the commission aspires. As even the left-leaning Lawrence Lessig put it:

Markets regulate behavior in cyberspace too. Prices structures often constrain access, and if they do not, then busy signals do. (America Online (AOL) learned this lesson when it shifted from an hourly to a flat-rate pricing plan.) Some sites on the web charge for access, as on-line services like AOL have for some time. Advertisers reward popular sites; online services drop unpopular forums. These behaviors are all a function of market constraints and market opportunity, and they all reflect the regulatory role of the market.[5]

Indeed, in a previous call for contributions, the Commission implicitly recognized the important role that competition plays, although it frames the subject primarily in terms of the problems that would arise if competition ceased to operate:

There is a risk of having a small number of big players becoming future gatekeepers of virtual worlds, creating market entry barriers and shutting out EU start-ups and SMEs from this emerging market. Such a closed ecosystem with the prevalence of proprietary systems can negatively affect the protection of personal information and data, the cybersecurity and the freedom and openness of virtual worlds at the same time.[6]

It is thus necessary to ask whether there is robust competition in the market for metaverse services. The short answer is a resounding yes.

A. Competition Without Tipping

While there is no precise definition of what constitutes a metaverse—much less a precise definition of the relevant market—available data suggests the space is highly competitive. This is evident in the fact that even a major global firm like Meta—having invested billions of dollars in its metaverse branch (and having rebranded the company accordingly)—has struggled to gain traction.[7]

Other major players in the space include the likes of Roblox, Fortnite, and Minecraft, all of which have somewhere between 70 and 200 million active users.[8] The strength of these incumbents likely explains why Meta’s much-anticipated virtual world struggled to gain meaningful traction with consumers, stalling at around 300,000 active users.[9] Alongside these traditional players, there are also several decentralized platforms underpinned by blockchain technology. While these platforms have attracted massive investments, they remain largely peripheral in terms of active users, with numbers often only in the low thousands.[10]

There are several inferences that can be drawn from these limited datasets. For one, it is clear that the metaverse industry is not yet fully mature. There are still multiple paradigms competing for consumer attention: game-based platforms versus social-network platforms; traditional platforms versus blockchain platforms, etc. In the terminology developed by David Teece, the metaverse industry has not yet reached a “paradigmatic” stage. It is fair to assume there is still significant scope for the entry of differentiated firms.[11]

It is also worth noting that metaverse competition does not appear to exhibit the same sort of network effects and tipping that is sometimes associated with more traditional social networks.[12] Despite competing for nearly a decade, no single metaverse project appears to be running away with the market.[13] This lack of tipping might be because these projects are highly differentiated.[14] It may also be due to the ease of multi-homing among them.[15]

More broadly, it is far from clear that competition will lead to a single metaverse for all uses. Different types of metaverse services may benefit from different user interfaces, graphics, and physics engines. This cuts in favor of multiple metaverses coexisting, rather than all services coordinating within a single ecosystem. Competition therefore appears likely to lead to the emergence of multiple differentiated metaverses, rather than a single winner.

Ultimately, competition in the metaverse industry is strong, and there is little sense that these markets are about to tip towards a single firm in the near future.

B. Competing for Consumer Trust

As alluded to in the previous subsection, the world’s largest and most successful metaverse entrants to date are traditional videogaming platforms that have various marketplaces and currencies attached.[16] In other words, decentralized virtual worlds built upon blockchain technology remain marginal.

This has important policy implications. The primary legal issues raised by metaverses are the same as those encountered on other digital marketplaces. This includes issues like minor fraud, scams, and children buying content without their parents’ authorization.[17] To the extent these harms are not adequately deterred by existing laws, metaverse platforms themselves have important incentives to police them. In turn, these incentives may be compounded by strong competition among platforms.

Metaverses are generally multi-sided platforms that bring together distinct groups of users, including consumers and content creators. In order to maximize the value of their ecosystems, platforms have an incentive to balance the interests of these distinct groups.[18] In practice, this will often mean offering consumers various forms of protection against fraud and scams and actively policing platforms’ marketplaces. As David Evans puts it:

But as with any community, there are numerous opportunities for people and businesses to create negative externalities, or engage in other bad behavior, that can reduce economic efficiency and, in the extreme, lead to the tragedy of the commons. Multi-sided platforms, acting selfishly to maximize their own profits, often develop governance mechanisms to reduce harmful behavior. They also develop rules to manage many of the same kinds of problems that beset communities subject to public laws and regulations. They enforce these rules through the exercise of property rights and, most importantly, through the “Bouncer’s Right” to exclude agents from some quantum of the platform, including prohibiting some agents from the platform entirely…[19]

While there is little economic research to suggest that competition directly increases hosts’ incentives to police their platforms, it stands to reason that doing so effectively can help platforms to expand the appeal of their ecosystems. This is particularly important for metaverse services, whose userbases remain just a fraction of the size they could ultimately reach. While 100 or 200 million users already constitute a vast ecosystem, that figure pales in comparison to the sometimes billions of users that “traditional” online platforms attract.

The bottom line is that the market for metaverses is growing. This likely compounds platforms’ incentives to weed out undesirable behavior, thereby complementing government efforts to achieve the same goal.

II. Opening Platforms or Opening Pandora’s Box?

In its call for contributions, the commission seems concerned that the metaverse competition may lead to closed ecosystems that may be less beneficial to consumers than more open ones. But if this is indeed the commission’s fear, it is largely unfounded.

There are many benefits to closed ecosystems. Choosing the optimal degree of openness entails tradeoffs. At the very least, this suggests that policymakers should be careful not to assume that opening platforms up will systematically provide net benefits to consumers.

A. Antitrust Enforcement and Regulatory Initiatives

To understand why open (and weakly propertized) platforms are not always better for consumers, it is worth looking at past competition enforcement in the online space. Recent interventions by competition authorities have generally attempted (or are attempting) to move platforms toward more openness and less propertization. For their part, these platforms are already tremendously open (as the “platform” terminology implies) and attempt to achieve a delicate balance between centralization and decentralization.

Figure I: Directional Movement of Antitrust Intervention

The Microsoft cases and the Apple investigation both sought or seek to bring more openness and less propertization to those respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and to open its platform to rival media players and web browsers (more openness).[20] The same applies to Apple. Plaintiffs in private antitrust litigation brought in the United States[21] and government enforcement actions in Europe[22] are seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as to ensure that it cannot exclude rival mobile-payments solutions from its platform (more openness).

The various cases that were brought by EU and U.S. authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property.[23] The European Union’s Amazon investigation centers on the ways in which the company uses data from third-party sellers (and, ultimately, the distribution of revenue between those sellers and Amazon).[24] In both cases, authorities are ultimately trying to limit the extent to which firms can propertize their assets.

Finally, both of the EU’s Google cases sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals.[25] The separate Android decision sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing litigation brought by state attorneys general in the United States.[26]

Much of the same can be said of the numerous regulatory initiatives pertaining to digital markets. Indeed, draft regulations being contemplated around the globe mimic the features of the antitrust/competition interventions discussed above. For instance, it is widely accepted that Europe’s DMA effectively transposes and streamlines the enforcement of the theories of harm described above.[27] Similarly, several scholars have argued that the proposed American Innovation and Choice Online Act (“AICOA”) in the United States largely mimics European competition policy.[28] The legislation would ultimately require firms to open up their platforms, most notably by forcing them to treat rival services as they would their own and to make their services more interoperable with those rivals.[29]

What is striking about these decisions and investigations is the extent to which authorities are pushing back against the very features that distinguish the platforms they are investigating. Closed (or relatively closed) platforms are forced to open up, and firms with highly propertized assets are made to share them (or, at the very least, monetize them less aggressively).

B. The Empty Quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be vanishingly few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems in both the mobile and desktop segments. Most have ended in failure. Ubuntu and other flavors of the Linux operating system remain fringe products. There have been attempts to create open-source search engines, but they have not met with success.[30] The picture is similar in the online retail space. Amazon appears to have beaten eBay, despite the latter being more open and less propertized. Indeed, Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the ways in which they may sell their goods.[31]

This theme is repeated in the standardization space. There have been innumerable attempts to impose open, royalty-free standards. At least in the mobile-internet industry, few (if any) of these have taken off. Instead, proprietary standards such as 5G and WiFi have been far more successful. That pattern is repeated in other highly standardized industries, like digital-video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.[32]

Figure II: Open and Shared Platforms

This is not to say that there haven’t been any successful examples of open, royalty-free standards. Internet protocols, blockchain, and Wikipedia all come to mind. Nor does it mean that we will not see more decentralized goods in the future. But by and large, firms and consumers have not yet taken to the idea of fully open and shared platforms. Or, at least, those platforms have not yet achieved widespread success in the marketplace (potentially due to supply-side considerations, such as the difficulty of managing open platforms or the potentially lower returns to innovation in weakly propertized ones).[33] And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase in the blockchain space, or Android’s use of Linux).

C. Potential Explanations

The preceding section posited a recurring reality: the digital platforms that competition authorities wish to bring into existence are fundamentally different from those that emerge organically. But why have authorities’ ideal platforms, so far, failed to achieve truly meaningful success?

Three potential explanations come to mind. First, “closed” and “propertized” platforms might systematically—and perhaps anticompetitively—thwart their “open” and “shared” rivals. Second, shared platforms might fail to persist (or grow pervasive) because they are much harder to monetize, and there is thus less incentive to invest in them. This is essentially a supply-side explanation. Finally, consumers might opt for relatively closed systems precisely because they prefer these platforms to marginally more open ones—i.e., a demand-side explanation.

In evaluating the first conjecture, the key question is whether successful “closed” and “propertized” platforms overcame their rivals before or after they achieved some measure of market dominance. If success preceded dominance, then anticompetitive foreclosure alone cannot explain the proliferation of the “closed” and “propertized” model.[34]

Many of today’s dominant platforms, however, overcame open and shared rivals well before they achieved their current size. It is thus difficult to make the case that the early success of their business models was due to anticompetitive behavior. This is not to say these business models cannot raise antitrust issues, but rather that anticompetitive behavior is not a good explanation for their emergence.

Both the second and the third conjectures essentially ask whether "closed" and "propertized" platforms might be better adapted to their environment than their "open" and "shared" rivals.

In that respect, it is not unreasonable to surmise that highly propertized platforms are generally easier to monetize than shared ones. For example, monetizing open-source platforms often requires relying on complementarities, which tend to be vulnerable to outside competition and free riding.[35] There is thus a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits a platform's ability to propertize its assets may harm innovation.

Similarly, authorities should reflect on whether consumers really want the more "competitive" ecosystems that they are trying to design. The European Commission, for example, has a long track record of seeking to open digital platforms, notably by requiring that platform owners not preinstall their own web browsers (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business model that the commission reprimanded, rather than the "pro-consumer" model it sought to impose on the industry. For example, Apple tied the Safari browser to its iPhones; Google went to some lengths to ensure that Chrome was preloaded on devices; and Samsung phones come with Samsung Internet as the default.[36] Yet this has not ostensibly steered consumers away from those platforms.

Along similar lines, a sizable share of consumers opt for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s MacOS). In other words, it is hard to claim that opening platforms is inherently good for consumers when those same consumers routinely opt for platforms with the very features that policymakers are trying to eliminate.

Finally, it is worth noting that the remedies imposed by competition authorities have been anything but successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unmitigated flop, selling a paltry 1,787 copies.[37] Likewise, the internet-browser “ballot box” imposed by the commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the commission’s decision.[38]

One potential inference is that consumers do not value competition interventions that make dominant ecosystems marginally more open and less propertized. There are also many reasons why consumers might prefer “closed” systems (at least, relative to the model favored by many policymakers), even when they must pay a premium for them.

Take the example of app stores. Maintaining some control over the apps that can access the store enables platforms to easily weed out bad actors. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. Indeed, it may be that a measure of control facilitates the very innovations that consumers demand. Therefore, "authorities and courts should not underestimate the indispensable role control plays in achieving coordination and coherence in the context of systemic efficiencies. Without it, the attempted novelties and strategies might collapse under their own complexity."[39]

Relatively centralized platforms can eliminate the negative externalities that "bad" apps impose on rival apps and consumers.[40] This is especially true when consumers tend to attribute dips in performance to the overall platform, rather than to a particular app.[41] At the same time, such platforms can take advantage of positive externalities to improve the quality of the overall platform.

Consumers also plausibly prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome browser and Google Search. Absent false information at the time of the initial platform decision, this decision will effectively incorporate expectations about subsequent constraints.[42]

Furthermore, forcing users to make too many “within-platform” choices may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different.[43] In short, contrary to what antitrust authorities appear to believe, closed platforms might give most users exactly what they desire.

All of this suggests that consumers and firms often gravitate spontaneously toward both closed and highly propertized platforms, the opposite of what the commission and other competition authorities tend to favor. The reasons for this trend remain poorly understood and largely ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. Instead, what some regard as "market failures" may in fact be features that explain the rapid emergence of the digital economy.

When considering potential policy reforms targeting the metaverse, policymakers would be wrong to assume openness (notably, in the form of interoperability) and weak propertization are always objectively superior. Instead, these platform designs entail important tradeoffs. Closed metaverse ecosystems may lead to higher consumer safety and better performance, while interoperable systems may reduce the frictions consumers face when moving from one service to another. There is little reason to believe policymakers are in a better position to weigh these tradeoffs than consumers, who vote with their virtual feet.

III. Conclusion: Competition Intervention Would Be Premature

A final important argument against intervening today is that the metaverse industry is nowhere near mature. Tomorrow’s competition-related challenges and market failures might not be the same as today’s. This makes it exceedingly difficult for policymakers to design appropriate remedies and increases the risk that intervention might harm innovation.

As of 2023, the entire metaverse industry (both hardware and software) is estimated to be worth somewhere in the vicinity of $80 billion, and projections suggest this could grow by a factor of 10 by 2030.[44] Growth projections of this sort are notoriously unreliable. But in this case, they do suggest there is some consensus that the industry is not fully fledged.

Along similar lines, it remains unclear what types of metaverse services will gain the most traction with consumers, what sorts of hardware consumers will use to access these services, and what technologies will underpin the most successful metaverse platforms. In fact, it is still an open question whether the metaverse industry will foster any services that achieve widespread consumer adoption in the foreseeable future.[45] In other words, it is not exactly clear what metaverse products and services the commission should focus on in the first place.

Given these uncertainties, competition intervention in the metaverse appears premature. Intervening so early in the industry’s life cycle is like aiming at a moving target. Ensuing remedies might end up being irrelevant before they have any influence on the products that firms develop. More worryingly, acting now signals that the metaverse industry will be subject to heightened regulatory scrutiny for the foreseeable future. In turn, this may deter large platforms from investing in the European market. It also may funnel venture-capital investments away from the European continent.

Competition intervention in burgeoning industries is no free lunch. The best evidence concerning these potential costs comes from the GDPR. While privacy regulation is obviously not the same as competition law, the evidence concerning the GDPR suggests that heavy-handed intervention may, at least in some instances, slow down innovation and reduce competition.

The most-cited empirical evidence concerning the effects of the GDPR comes from a paper by Garrett Johnson and co-authors, who link the GDPR to widespread increases in market concentration, particularly in the short term:

We show that websites’ vendor use falls after the European Union’s (EU’s) General Data Protection Regulation (GDPR), but that market concentration also increases among technology vendors that provide support services to websites…. The week after the GDPR’s enforcement, website use of web technology vendors falls by 15% for EU residents. Websites are relatively more likely to retain top vendors, which increases the concentration of the vendor market by 17%. Increased concentration predominantly arises among vendors that use personal data, such as cookies, and from the increased relative shares of Facebook and Google-owned vendors, but not from website consent requests. Although the aggregate changes in vendor use and vendor concentration dissipate by the end of 2018, we find that the GDPR impact persists in the advertising vendor category most scrutinized by regulators.[46]

Along similar lines, an NBER working paper by Jian Jia and co-authors finds that enactment of the GDPR markedly reduced venture-capital investments in Europe:

Our findings indicate a negative differential effect on EU ventures after the rollout of GDPR relative to their US counterparts. These negative effects manifest in the overall number of financing rounds, the overall dollar amount raised across rounds, and in the dollar amount raised per individual round. Specifically, our findings suggest a $3.38 million decrease in the aggregate dollars raised by EU ventures per state per crude industry category per week, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised in an average deal following the rollout of GDPR.[47]

In another paper, Samuel Goldberg and co-authors find that the GDPR led to a roughly 12% reduction in website pageviews and e-commerce revenue in Europe.[48] Finally, Rebecca Janssen and her co-authors show that the GDPR decreased the number of apps offered on Google’s Play Store between 2016 and 2019:

Using data on 4.1 million apps at the Google Play Store from 2016 to 2019, we document that GDPR induced the exit of about a third of available apps; and in the quarters following implementation, entry of new apps fell by half.[49]

Of course, the body of evidence concerning the GDPR's effects is not entirely one-sided. For example, Rajkumar Venkatesan and co-authors find that the GDPR had mixed effects on the returns of different types of firms.[50] Other papers also show similarly mixed effects.[51]

Ultimately, the empirical literature concerning the effects of the GDPR shows that regulation—in this case, privacy protection—is no free lunch. Of course, this does not mean that competition intervention targeting the metaverse would necessarily have these same effects. But in the absence of a clear market failure to solve, it is unclear why policymakers should run such a risk in the first place.

In the end, competition intervention in the metaverse is unlikely to be costless. The metaverse is still in its infancy, regulation could deter essential innovation, and the commission has thus far failed to identify any serious market failures that warrant public intervention. The result is that the commission’s call for contributions appears premature or, in other words, that the commission is putting the meta-cart before the meta-horse.

 

[1] Competition in Virtual Worlds and Generative AI – Calls for contributions, European Commission (Jan. 9, 2024) https://competition-policy.ec.europa.eu/document/download/e727c66a-af77-4014-962a-7c9a36800e2f_en?filename=20240109_call-for-contributions_virtual-worlds_and_generative-AI.pdf (hereafter, “Call for Contributions”).

[2] Jonathan Vanian, Meta’s Reality Labs Records $3.99 Billion Quarterly Loss as Zuckerberg Pumps More Cash into Metaverse, CNBC (Apr. 26, 2023), https://www.cnbc.com/2023/04/26/metas-reality-labs-unit-records-3point99-billion-first-quarter-loss-.html.

[3] Alan Truly, Horizon Worlds Leak: Only 1 in 10 Users Return & Web Launch Is Coming, Mixed News (Mar. 3, 2023), https://mixed-news.com/en/horizon-worlds-leak-only-1-in-10-users-return-web-launch-coming; Kevin Hurler, Hey Fellow Kids: Meta Is Revamping Horizon Worlds to Attract More Teen Users, Gizmodo (Feb. 7, 2023), https://gizmodo.com/meta-metaverse-facebook-horizon-worlds-vr-1850082068; Emma Roth, Meta’s Horizon Worlds VR Platform Is Reportedly Struggling to Keep Users, The Verge (Oct. 15, 2022),
https://www.theverge.com/2022/10/15/23405811/meta-horizon-worlds-losing-users-report; Paul Tassi, Meta’s ‘Horizon Worlds’ Has Somehow Lost 100,000 Players in Eight Months, Forbes, (Oct. 17, 2022), https://www.forbes.com/sites/paultassi/2022/10/17/metas-horizon-worlds-has-somehow-lost-100000-players-in-eight-months/?sh=57242b862a1b.

[4] Call for Contributions, supra note 1. (“6) Do you expect the technology incorporated into Virtual World platforms, enabling technologies of Virtual Worlds and services based on Virtual Worlds to be based mostly on open standards and/or protocols agreed through standard-setting organisations, industry associations or groups of companies, or rather the use of proprietary technology?”).

[5] Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999).

[6] Virtual Worlds (Metaverses) – A Vision for Openness, Safety and Respect, European Commission, https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13757-Virtual-worlds-metaverses-a-vision-for-openness-safety-and-respect/feedback_en?p_id=31962299H.

[7] Catherine Thorbecke, What Metaverse? Meta Says Its Single Largest Investment Is Now in ‘Advancing AI’, CNN Business (Mar. 15, 2023), https://www.cnn.com/2023/03/15/tech/meta-ai-investment-priority/index.html; Ben Marlow, Mark Zuckerberg’s Metaverse Is Shattering into a Million Pieces, The Telegraph (Apr. 23, 2023), https://www.telegraph.co.uk/business/2023/04/21/mark-zuckerbergs-metaverse-shattering-million-pieces; Will Gendron, Meta Has Reportedly Stopped Pitching Advertisers on the Metaverse, BusinessInsider (Apr. 18, 2023), https://www.businessinsider.com/meta-zuckerberg-stopped-pitching-advertisers-metaverse-focus-reels-ai-report-2023-4.

[8] Mansoor Iqbal, Fortnite Usage and Revenue Statistics, Business of Apps (Jan. 9, 2023), https://www.businessofapps.com/data/fortnite-statistics; Matija Ferjan, 76 Little-Known Metaverse Statistics & Facts (2023 Data), Headphones Addict (Feb. 13, 2023), https://headphonesaddict.com/metaverse-statistics.

[9] James Batchelor, Meta’s Flagship Metaverse Horizon Worlds Struggling to Attract and Retain Users, Games Industry (Oct. 17, 2022), https://www.gamesindustry.biz/metas-flagship-metaverse-horizon-worlds-struggling-to-attract-and-retain-users; Ferjan, id.

[10] Richard Lawler, Decentraland’s Billion-Dollar ‘Metaverse’ Reportedly Had 38 Active Users in One Day, The Verge (Oct. 13, 2022), https://www.theverge.com/2022/10/13/23402418/decentraland-metaverse-empty-38-users-dappradar-wallet-data; The Sandbox, DappRadar, https://dappradar.com/multichain/games/the-sandbox (last visited May 3, 2023); Decentraland, DappRadar, https://dappradar.com/multichain/social/decentraland (last visited May 3, 2023).

[11] David J. Teece, Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy, 15 Research Policy 285-305 (1986), https://www.sciencedirect.com/science/article/abs/pii/0048733386900272.

[12] Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1279 (2021).

[13] Roblox, Wikipedia, https://en.wikipedia.org/wiki/Roblox (last visited May 3, 2023); Minecraft, Wikipedia, https://en.wikipedia.org/wiki/Minecraft (last visited May 3, 2023); Fortnite, Wikipedia, https://en.wikipedia.org/wiki/Fortnite (last visited May 3, 2023); see Fiza Chowdhury, Minecraft vs Roblox vs Fortnite: Which Is Better?, Metagreats (Feb. 20, 2023), https://www.metagreats.com/minecraft-vs-roblox-vs-fortnite.

[14] Marc Rysman, The Economics of Two-Sided Markets, 23 J. Econ. Perspectives 125, 134 (2009) (“First, if standards can differentiate from each other, they may be able to successfully coexist (Chou and Shy, 1990; Church and Gandal, 1992). Arguably, Apple and Microsoft operating systems have both survived by specializing in different markets: Microsoft in business and Apple in graphics and education. Magazines are an obvious example of platforms that differentiate in many dimensions and hence coexist.”).

[15] Id. at 134 (“Second, tipping is less likely if agents can easily use multiple standards. Corts and Lederman (forthcoming) show that the fixed cost of producing a video game for one more standard have reduced over time relative to the overall fixed costs of producing a game, which has led to increased distribution of games across multiple game systems (for example, PlayStation, Nintendo, and Xbox) and a less-concentrated game system market.”).

[16] What Are Fortnite, Roblox, Minecraft and Among Us? A Parent’s Guide to the Most Popular Online Games Kids Are Playing, FTC Business (Oct. 5, 2021), https://www.ftc.net/blog/what-are-fortnite-roblox-minecraft-and-among-us-a-parents-guide-to-the-most-popular-online-games-kids-are-playing; Jay Peters, Epic Is Merging Its Digital Asset Stores into One Huge Marketplace, The Verge (Mar. 22, 2023), https://www.theverge.com/2023/3/22/23645601/epic-games-fab-asset-marketplace-state-of-unreal-2023-gdc.

[17] Luke Winkie, Inside Roblox’s Criminal Underworld, Where Kids Are Scamming Kids, IGN (Jan. 2, 2023), https://www.ign.com/articles/inside-robloxs-criminal-underworld-where-kids-are-scamming-kids; Fake Minecraft Updates Pose Threat to Users, Tribune (Sept. 11, 2022), https://tribune.com.pk/story/2376087/fake-minecraft-updates-pose-threat-to-users; Ana Diaz, Roblox and the Wild West of Teenage Scammers, Polygon (Aug. 24, 2019) https://www.polygon.com/2019/8/24/20812218/roblox-teenage-developers-controversy-scammers-prison-roleplay; Rebecca Alter, Fortnite Tries Not to Scam Children and Face $520 Million in FTC Fines Challenge, Vulture (Dec. 19, 2022), https://www.vulture.com/2022/12/fortnite-epic-games-ftc-fines-privacy.html; Leonid Grustniy, Swindle Royale: Fortnite Scammers Get Busy, Kaspersky Daily (Dec. 3, 2020), https://www.kaspersky.com/blog/top-four-fortnite-scams/37896.

[18] See, generally, David Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms (Harvard Business Review Press, 2016).

[19] David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L.J. 1201 (2012).

[20] See Case COMP/C-3/37.792, Microsoft, OJ L 32 (May 24, 2004). See also, Case COMP/39.530, Microsoft (Tying), OJ C 120 (Apr. 26, 2013).

[21] See Complaint, Epic Games, Inc. v. Apple Inc., 493 F. Supp. 3d 817 (N.D. Cal. 2020) (4:20-cv-05640-YGR).

[22] See European Commission Press Release IP/20/1073, Antitrust: Commission Opens Investigations into Apple’s App Store Rules (Jun. 16, 2020); European Commission Press Release IP/20/1075, Antitrust: Commission Opens Investigation into Apple Practices Regarding Apple Pay (Jun. 16, 2020).

[23] See European Commission Press Release IP/18/421, Antitrust: Commission Fines Qualcomm €997 Million for Abuse of Dominant Market Position (Jan. 24, 2018); Federal Trade Commission v. Qualcomm Inc., 969 F.3d 974 (9th Cir. 2020).

[24] See European Commission Press Release IP/19/4291, Antitrust: Commission Opens Investigation into Possible Anti-Competitive Conduct of Amazon (Jul. 17, 2019).

[25] See Case AT.39740, Google Search (Shopping), 2017 E.R.C. I-379. See also, Case AT.40099 (Google Android), 2018 E.R.C.

[26] See Complaint, United States v. Google, LLC, (2020), https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws; see also, Complaint, Colorado et al. v. Google, LLC, (2020), available at https://coag.gov/app/uploads/2020/12/Colorado-et-al.-v.-Google-PUBLIC-REDACTED-Complaint.pdf.

[27] See, e.g., Giorgio Monti, The Digital Markets Act: Institutional Design and Suggestions for Improvement, Tillburg L. & Econ. Ctr., Discussion Paper No. 2021-04 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3797730 (“In sum, the DMA is more than an enhanced and simplified application of Article 102 TFEU: while the obligations may be criticised as being based on existing competition concerns, they are forward-looking in trying to create a regulatory environment where gatekeeper power is contained and perhaps even reduced.”) (Emphasis added).

[28] See, e.g., Aurelien Portuese, “Please, Help Yourself”: Toward a Taxonomy of Self-Preferencing, Information Technology & Innovation Foundation (Oct. 25, 2021), available at https://itif.org/sites/default/files/2021-self-preferencing-taxonomy.pdf. (“The latest example of such weaponization of self-preferencing by antitrust populists is provided by Sens. Amy Klobuchar (D-MN) and Chuck Grassley (R-IA). They introduced legislation in October 2021 aimed at prohibiting the practice.2 However, the legislation would ban self-preferencing only for a handful of designated companies—the so-called “covered platforms,” not the thousands of brick-and-mortar sellers that daily self-preference for the benefit of consumers. Mimicking the European Commission’s Digital Markets Act prohibiting self-preferencing, Senate and the House bills would degrade consumers’ experience and undermine competition, since self-preferencing often benefits consumers and constitutes an integral part, rather than an abnormality, of the process of competition.”).

[29] Efforts to saddle platforms with “non-discrimination” constraints are tantamount to mandating openness. See Geoffrey A. Manne, Against the Vertical Discrimination Presumption, Foreword, Concurrences No. 2-2020 (2020) at 2 (“The notion that platforms should be forced to allow complementors to compete on their own terms, free of constraints or competition from platforms is a species of the idea that platforms are most socially valuable when they are most ‘open.’ But mandating openness is not without costs, most importantly in terms of the effective operation of the platform and its own incentives for innovation.”).

[30] See, e.g., Klint Finley, Your Own Private Google: The Quest for an Open Source Search Engine, Wired (Dec. 2012), https://www.wired.com/2012/12/solar-elasticsearch-google.

[31] See Brian Connolly, Selling on Amazon vs. eBay in 2021: Which Is Better?, JungleScout (Jan. 12, 2021), https://www.junglescout.com/blog/amazon-vs-ebay; Crucial Differences Between Amazon and eBay, SaleHOO, https://www.salehoo.com/educate/selling-on-amazon/crucial-differences-between-amazon-and-ebay (last visited Feb. 8, 2021).

[32] See, e.g., Dolby Vision Is Winning the War Against HDR10 +, It Requires a Single Standard, Tech Smart, https://voonze.com/dolby-vision-is-winning-the-war-against-hdr10-it-requires-a-single-standard (last visited June 6, 2022).

[33] On the importance of managers, see, e.g., Nicolai J Foss & Peter G Klein, Why Managers Still Matter, 56 MIT Sloan Mgmt. Rev., 73 (2014) (“In today’s knowledge-based economy, managerial authority is supposedly in decline. But there is still a strong need for someone to define and implement the organizational rules of the game.”).

[34] It is generally agreed upon that anticompetitive foreclosure is possible only when a firm enjoys some degree of market power. Frank H. Easterbrook, Limits of Antitrust, 63 Tex. L. Rev. 1, 20 (1984) (“Firms that lack power cannot injure competition no matter how hard they try. They may injure a few consumers, or a few rivals, or themselves (see (2) below) by selecting ‘anticompetitive’ tactics. When the firms lack market power, though, they cannot persist in deleterious practices. Rival firms will offer the consumers better deals. Rivals’ better offers will stamp out bad practices faster than the judicial process can. For these and other reasons many lower courts have held that proof of market power is an indispensable first step in any case under the Rule of Reason. The Supreme Court has established a market power hurdle in tying cases, despite the nominally per se character of the tying offense, on the same ground offered here: if the defendant lacks market power, other firms can offer the customer a better deal, and there is no need for judicial intervention.”).

[35] See, e.g., Josh Lerner & Jean Tirole, Some Simple Economics of Open Source, 50 J. Indus. Econ. 197 (2002).

[36] See Matthew Miller, Thanks, Samsung: Android’s Best Mobile Browser Now Available to All, ZDNet (Aug. 11, 2017), https://www.zdnet.com/article/thanks-samsung-androids-best-mobile-browser-now-available-to-all.

[37] FACT SHEET: Windows XP N Sales, RegMedia (Jun. 12, 2009), available at https://regmedia.co.uk/2009/06/12/microsoft_windows_xp_n_fact_sheet.pdf.

[38] See Case COMP/39.530, Microsoft (Tying), OJ C 120 (Apr. 26, 2013).

[39] Konstantinos Stylianou, Systemic Efficiencies in Competition Law: Evidence from the ICT Industry, 12 J. Competition L. & Econ. 557 (2016).

[40] See, e.g., Steven Sinofsky, The App Store Debate: A Story of Ecosystems, Medium (Jun. 21, 2020), https://medium.learningbyshipping.com/the-app-store-debate-a-story-of-ecosystems-938424eeef74.

[41] Id.

[42] See, e.g., Benjamin Klein, Market Power in Aftermarkets, 17 Managerial & Decision Econ. 143 (1996).

[43] See, e.g., Simon Hill, What Is Android Fragmentation, and Can Google Ever Fix It?, DigitalTrends (Oct. 31, 2018), https://www.digitaltrends.com/mobile/what-is-android-fragmentation-and-can-google-ever-fix-it.

[44] Metaverse Market Revenue Worldwide from 2022 to 2030, Statista, https://www.statista.com/statistics/1295784/metaverse-market-size (last visited May 3, 2023); Metaverse Market by Component (Hardware, Software (Extended Reality Software, Gaming Engine, 3D Mapping, Modeling & Reconstruction, Metaverse Platform, Financial Platform), and Professional Services), Vertical and Region – Global Forecast to 2027, Markets and Markets (Apr. 27, 2023), https://www.marketsandmarkets.com/Market-Reports/metaverse-market-166893905.html; see also, Press Release, Metaverse Market Size Worth $ 824.53 Billion, Globally, by 2030 at 39.1% CAGR, Verified Market Research (Jul. 13, 2022), https://www.prnewswire.com/news-releases/metaverse-market-size-worth–824-53-billion-globally-by-2030-at-39-1-cagr-verified-market-research-301585725.html.

[45] See, e.g., Megan Farokhmanesh, Will the Metaverse Live Up to the Hype? Game Developers Aren’t Impressed, Wired (Jan. 19, 2023), https://www.wired.com/story/metaverse-video-games-fortnite-zuckerberg; see also Mitch Wagner, The Metaverse Hype Bubble Has Popped. What Now?, Fierce Electronics (Feb. 24, 2023), https://www.fierceelectronics.com/embedded/metaverse-hype-bubble-has-popped-what-now.

[46] Garrett A. Johnson, et al., Privacy and Market Concentration: Intended and Unintended Consequences of the GDPR, Mgmt. Sci. 1 (forthcoming 2023).

[47] Jian Jia, et al., The Short-Run Effects of GDPR on Technology Venture Investment, NBER Working Paper 25248, 4 (2018), available at https://www.nber.org/system/files/working_papers/w25248/w25248.pdf.

[48] Samuel G. Goldberg, Garrett A. Johnson, & Scott K. Shriver, Regulating Privacy Online: An Economic Evaluation of GDPR (2021), available at https://www.ftc.gov/system/files/documents/public_events/1588356/johnsongoldbergshriver.pdf.

[49] Rebecca Janßen, Reinhold Kesler, Michael Kummer, & Joel Waldfogel, GDPR and the Lost Generation of Innovative Apps, NBER Working Paper 30028, 2 (2022), available at https://www.nber.org/system/files/working_papers/w30028/w30028.pdf.

[50] Rajkumar Venkatesan, S. Arunachalam & Kiran Pedada, Short Run Effects of Generalized Data Protection Act on Returns from AI Acquisitions, University of Virginia Working Paper 6 (2022), available at: https://conference.nber.org/conf_papers/f161612.pdf. (“On average, GDPR exposure reduces the ROA of firms. We also find that GDPR exposure increases the ROA of firms that make AI acquisitions for improving customer experience, and cybersecurity. Returns on AI investments in innovation and operational efficiencies are unaffected by GDPR.”)

[51] For a detailed discussion of the empirical literature concerning the GDPR, see Garrett Johnson, Economic Research on Privacy Regulation: Lessons From the GDPR And Beyond, NBER Working Paper 30705 (2022), available at https://www.nber.org/system/files/working_papers/w30705/w30705.pdf.


ICLE Comments to European Commission on AI Competition


Executive Summary

We thank the European Commission for launching this consultation on competition in generative AI. The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

In our comments, we express concern that policymakers may equate the rapid rise of generative AI services with a need to intervene in these markets when, in fact, the opposite is true. As we explain, the rapid growth of AI markets, as well as the fact that new market players are thriving, suggests competition is intense. If incumbent firms could easily leverage their dominance into burgeoning generative AI markets, we would not have seen the growth of generative AI unicorns such as OpenAI, Midjourney, and Anthropic, to name but a few.

Of course, this is not to say that generative AI markets are not important—quite the opposite. Generative AI is already changing the ways that many firms do business and improving employee productivity in many industries.[1] The technology is also increasingly useful in the field of scientific research, where it has enabled creation of complex models that expand scientists’ reach.[2] Against this backdrop, Commissioner Margrethe Vestager was right to point out that it “is fundamental that these new markets stay competitive, and that nothing stands in the way of businesses growing and providing the best and most innovative products to consumers.”[3]

But while sensible enforcement is of vital importance to maintain competition and consumer welfare, knee-jerk reactions may yield the opposite outcomes. As our comments explain, overenforcement in the field of generative AI could cause the very harms that policymakers seek to avert. For instance, preventing so-called “big tech” firms from competing in these markets (for example, by threatening competition intervention as soon as they embed generative AI services in their ecosystems or seek to build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading generative-AI firms in check. In short, competition in AI markets is important, but trying naïvely to hold incumbent tech firms back out of misguided fears they will come to dominate this space is likely to do more harm than good.

Our comment proceeds as follows. Section I summarizes recent calls for competition intervention in generative AI markets. Section II argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”). Section III explains why these effects are unlikely to play a meaningful role in generative-AI markets. Section IV concludes by offering five key takeaways to help policymakers (including the Commission) better weigh the tradeoffs inherent to competition intervention in generative-AI markets.

I. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position.”[4] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[5]

While some of these claims continue even today (for example, “big data” is a key component of the U.S. Justice Department’s (DOJ) Google Search and adtech antitrust suits),[6] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022, as well as the advent of AI image-generation services like Midjourney and Dall-E, have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain in the early stages of mainstream adoption and are still in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that were purportedly made during the formative years of Web 2.0.[7] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[8] As Lina Khan, chair of the U.S. Federal Trade Commission (FTC), put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI.”[9]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Instead of testing their prior assumptions against the current technological moment, enforcers’ top priority appears to be figuring out how to deploy existing competition tools rapidly and almost reflexively to address the presumed competitive failures presented by generative AI.[10]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[11]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in an ongoing consultation, the European Commission asks: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[12] Unsurprisingly, the FTC has likewise been bullish about the risks posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[13]

Certainly, it stands to reason that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for generative-AI services. After all, it is widely recognized that data is an essential input for generative AI.[14] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind (with breakthroughs such as AlphaGo) and Meta’s AI research division have routinely made headlines.[15] Apple and Amazon also have vast experience with AI assistants, and all of these firms use AI technology throughout their platforms.[16]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast data troves to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[17] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

In these comments, we suggest that there are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have largely focused on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

II. Data-Network Effects Theory and Enforcement

Proponents of tougher interventions by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[18] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[19] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, for example, that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[20]

Right off the bat, it is important to note the conceptual problem these claims face. Because data can be used to improve the quality of products and/or to subsidize their use, the idea of data as an entry barrier suggests that any product improvement or price reduction made by an incumbent could be a problematic entry barrier to any new entrant. This is tantamount to an argument that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if competition were treated as a problem, as it would imply that firms should under-compete—i.e., should forego consumer-welfare enhancements—in order to cultivate a greater number of firms in a given market simply for its own sake.[21]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[22] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[23] The authors ultimately conclude that data-network effects can be of different magnitudes and have varying effects on firms’ incumbency advantage.[24] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[25]

This is echoed by other economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[26] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[27]

This possibility is also implicit in Hagiu and Wright’s paper.[28] Indeed, the authors’ theoretical model rests on an important distinction between within-user data advantages (that is, having access to more data about a given user) and across-user data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have nevertheless been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[29]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents,” the “Furman Report” created for the UK government cited only two empirical economic studies, and they offer directly contradictory conclusions with respect to the question of the strength of data advantages.[30] Nevertheless, the Furman Report concludes that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[31] and adopts without reservation “convincing” evidence from non-economists that has no apparent empirical basis.[32]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[33]

As a result, the Commission cleared the merger on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[34] The Commission will likely focus on similar issues during its ongoing investigation of Microsoft’s investment into OpenAI.[35]

Along similar lines, the FTC’s complaint to enjoin Meta’s purchase of a virtual-reality (VR) fitness app called “Within” relied, among other things, on the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[36]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s adtech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[37] Similarly, in its search complaint, the agency argues that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[38]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s newly published guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[39] Likewise, the UK Competition and Markets Authority (CMA) warns against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[40]

In short, competition authorities around the globe have been taking an increasingly aggressive stance on data-network effects. Among the ways this has manifested is in basing enforcement decisions on fears that data collected by one platform might confer a decisive competitive advantage in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

III. Data-Incumbency Advantages in Generative-AI Markets

Given the assertions canvassed in the previous section, it would be reasonable to assume that firms such as Google, Meta, and Amazon should be in pole position to dominate the burgeoning market for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data, the likes of which their rivals could only dream of when they launched their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[41]

To date, however, this is not how things have unfolded—although it bears noting that these markets remain in flux and the competitive landscape is susceptible to change. The first conspicuously successful generative-AI service came neither from Meta—which had been working on chatbots for years and had access to what is arguably the world’s largest database of actual chats—nor from Google. Instead, the breakthrough came from a previously little-known firm called OpenAI.

OpenAI’s ChatGPT service currently holds an estimated 60% of the market (though reliable numbers are somewhat elusive).[42] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than the previous record holder, TikTok.[43] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[44] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[45] In short, at the time of this writing, ChatGPT appears to be the most popular chatbot. The entry of large players such as Google Bard or Meta AI appears to have had little effect thus far on its market position.[46]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[47] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[48]

This raises several crucial questions: how have these AI upstarts managed to be so successful, and is their success just a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either of these questions dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[49]

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[50]

In other words, being the firm with the most data appears to be far less important than having enough data. This lower bar may be accessible to far more firms than one might initially think possible. And obtaining enough data could become even easier—that is, the volume of required data could become even smaller—with technological progress. For instance, synthetic data may provide an adequate substitute for real-world data,[51] or may even outperform real-world data.[52] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[53]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, while other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[54]
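The diminishing-returns point can be sketched with a toy calculation (our illustration, not drawn from the cited studies): many empirical learning curves roughly follow a power law of the form error(n) ≈ a + b·n^(−1/2), where n is the number of training examples. Under any such curve, each successive tenfold expansion of the dataset buys a markedly smaller improvement, and the parameter values below are purely illustrative.

```python
# Toy power-law learning curve: error(n) = a + b / sqrt(n), where n is the
# number of training examples, a is the irreducible error floor, and b
# scales the benefit of additional data. Values are illustrative only.

def error_rate(n: int, a: float = 0.05, b: float = 1.0) -> float:
    """Modeled error as a function of training-set size n."""
    return a + b * n ** -0.5

# Each tenfold increase in data yields a smaller absolute improvement.
sizes = [10_000, 100_000, 1_000_000, 10_000_000]
for small, large in zip(sizes, sizes[1:]):
    gain = error_rate(small) - error_rate(large)
    print(f"{small:>10,} -> {large:>11,} examples: error falls by {gain:.5f}")
```

On these (hypothetical) parameters, every tenfold increase in data shrinks the improvement by roughly a factor of three, consistent with the observation that having enough data matters far more than having the most.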

Consider, for instance, a user who wants to generate an image of a basketball. Using a model trained on an indiscriminate range and number of public photos in which a basketball appears surrounded by copious other image data, the user may end up with an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images, could readily yield far superior results.[55] In one important example:

[t]he model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[56]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than on maximizing training datasets.[57] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[58] Second, the real challenge in creating cutting-edge AI lies not so much in collecting data as in designing innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[59]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be those data held by large incumbent platforms in other markets, but rather data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[60]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data may often confer marginal benefits, there is little sense that these are ultimately decisive.[61] As a result, incumbent platforms’ access to vast numbers of users and data in their primary markets might only marginally affect their AI competitiveness.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[62] Examples of this abound in digital markets. Google overthrew Yahoo, despite initially having access to far fewer users and far less data; Google and Apple overcame Microsoft in the smartphone OS market despite having comparatively tiny ecosystems (at the time) to leverage; and TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger user bases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[63] and TikTok’s clever algorithm) appear to have played a far more significant role than initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than with what data it did (or did not) own. Going forward, the ability of OpenAI and its rivals to offer and monetize compelling stores for custom versions of their generative-AI technology will arguably play a much larger role than (and contribute to) their ownership of data.[64] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, but not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data).

For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[65] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to extract and organize the right information is more important than simply owning vast troves of data.[66] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform.

Indeed, if data ownership consistently conferred a significant competitive advantage, these new firms would not be where they are today. This does not mean that data is worthless, of course. Rather, it means that competition authorities should not assume that the mere possession of data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm are premature.

IV. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data (network effects) are not the source of barriers to entry that they are sometimes made out to be. The picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[67]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five main lessons emerge:

  1. Data can be (very) valuable, but beyond a certain threshold, those benefits tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps effects stemming from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for competition-policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, Dall-E, Midjourney, etc.

This failure suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma,”[68] something about the incumbent platforms’ existing services and capabilities was holding them back in those markets. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI market starts addressing issues of monetization and scale.[69] But it does mean that assumptions about a firm’s market power based on its possession of data are off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that the most significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms—although nothing is certain at this stage. Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), it is unclear that the remedies being contemplated would do more good than harm. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative-AI markets.[70] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[71]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that leads competition authorities to believe that data-incumbency advantages are likely to harm competition in generative AI markets—or even in the data-intensive Web 2.0 markets that preceded them. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

 

[1] See, e.g., Michael Chui, et al., The Economic Potential of Generative AI: The Next Productivity Frontier, McKinsey (Jun. 14, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-AI-the-next-productivity-frontier.

[2] See, e.g., Zhuoran Qiao, Weili Nie, Arash Vahdat, Thomas F. Miller III, & Animashree Anandkumar, State-Specific Protein–Ligand Complex Structure Prediction with a Multiscale Deep Generative Model, 6 Nature Machine Intelligence, 195-208 (2024); see also, Jaemin Seo, Sang Kyeun Kim, Azarakhsh Jalalvand, Rory Conlin, Andrew Rothstein, Joseph Abbate, Keith Erickson, Josiah Wai, Ricardo Shousha, & Egemen Kolemen, Avoiding Fusion Plasma Tearing Instability with Deep Reinforcement Learning, 626 Nature, 746-751 (2024).

[3] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (Jan. 9, 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85.

[4] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[5] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23 (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors.”).

[6] See generally Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010 (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[7] See, e.g., Press Release, European Commission, supra note 3; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (Nov. 3, 2020), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[8] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”); Bertin Martins, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[9] See Rana Foroohar, The Great US-Europe Antitrust Divide, FT (Feb. 5, 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14.

[10] See, e.g., Press Release, European Commission, supra note 3.

[11] See infra, Section II. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has LLaMa. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[12] Press Release, European Commission, supra note 3.

[13] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023), at 4, https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[14] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (Sep. 15, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[15] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; see also, Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (Dec. 14, 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set; see also, A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research; see also, 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation (last visited Jan. 18, 2023).

[16] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (Oct. 4, 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (Nov. 20, 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; see also, Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (July 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon.

[17] See infra Section III.

[18] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[19] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233, https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, http://wrap.warwick.ac.uk/134220) (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[20] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[21] See also Yun, supra note 19 at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[22] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects), see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1281, 1344 (2021).

[23] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[24] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms…”.

[25] Id.

[26] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 10080, 101031 (2021).

[27] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Pol’y 143, 167 (2020).

[28] See Hagiu & Wright, supra note 23.

[29] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf; see also Manne & Auer, supra note 22, at 1330.

[30] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[31] Id. at 34.

[32] Id. at 35. To its credit, it should be noted, the Furman Report does counsel caution before mandating access to data as a remedy to promote competition. See id. at 75. That said, the Furman Report does maintain that such a remedy should certainly be on the table because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. In fact, the evidence does not show this.

[33] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf at 455.

[34] Id. at 896.

[35] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules.

[36] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[37] Amended Complaint (D.D.C), supra note 6 at ¶37.

[38] Amended Complaint (E.D. Va), supra note 6 at ¶8.

[39] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[40] Merger Assessment Guidelines, Competition and Mkts. Auth (2021) at  ¶7.19(e), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[41] Furman Report, supra note 30, at ¶4.

[42] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c.

[43] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[44] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited, Jan. 12, 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[45] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (Jun. 5, 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard.

[46] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (Sep. 27, 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023), https://blog.google/technology/ai/bard-google-ai-search-updates.

[47] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023), https://yon.fun/midjourney-statistics; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023), https://approachableai.com/midjourney-statistics.

[48] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023), https://blog.google/products/search/google-search-generative-ai-october-update; Imagine with Meta AI, Meta (last visited Jan. 12, 2024), https://imagine.meta.com.

[49] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[50] Manne & Auer, supra note 22, at 1345.

[51] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[52] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[53] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[54] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan. 15, 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[55] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023), https://arxiv.org/abs/2309.16671.

[56] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[57] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project.

[58] Id.; see also GSM8K, Papers with Code (last visited Jan. 18, 2023), available at https://paperswithcode.com/dataset/gsm8k; MATH Dataset, GitHub (last visited Jan. 18, 2024), available at https://github.com/hendrycks/math.

[59] Lee, supra note 57.

[60] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780.).

[61] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[62] Or, as John Yun puts it, data is only a small component of digital firms’ production function. See Yun, supra note 19, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[63] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History–Computer (Aug. 8, 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly.

[64] Introducing the GPT Store, Open AI (Jan. 10, 2024), https://openai.com/blog/introducing-the-gpt-store.

[65] See Michael Schade, How ChatGPT and Our Language Models are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (Jan. 1, 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021; Danny Hernadez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020), https://arxiv.org/abs/2005.04305.

[66] See Yun, supra note 19 at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[67] Lerner, supra note 60, at 4-5 (emphasis added).

[68] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[69] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[70] See Hagiu & Wright, supra note 23, at 23 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”); see also Lerner, supra note 60.

[71] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).

Antitrust & Consumer Protection

SEPs: The West Need Not Cede to China

TL;DR Background: Policymakers on both sides of the Atlantic are contemplating new regulations on standard-essential patents (SEPs). While the European Union (EU) is attempting to . . .

TL;DR

Background: Policymakers on both sides of the Atlantic are contemplating new regulations on standard-essential patents (SEPs). While the European Union (EU) is attempting to pass legislation toward that end, U.S. authorities like the Department of Commerce and U.S. Patent and Trademark Office are examining the issues and potentially contemplating their own reforms to counteract changes made by the EU.

But… These efforts would ultimately hand an easy geopolitical win to rivals like China. Not only do the expected changes risk harming U.S. and EU innovators and the standardization procedures upon which they rely, but they lend legitimacy to concerning Chinese regulatory responses that clearly and intentionally place a thumb on the scale in favor of domestic firms. The SEP ecosystem is extremely complex, and knee-jerk regulations may create a global race to the bottom that ultimately harms the very firms and consumers they purport to protect.

KEY TAKEAWAYS

EUROPEAN LEGISLATION, GLOBAL REACH

In April 2023, the EU published its “Proposal for a Regulation on Standard Essential Patents.” The proposal seeks to improve transparency by creating a register of SEPs (and accompanying essentiality checks), and to accelerate the diffusion of these technologies by, among other things, implementing a system of nonbinding arbitration of aggregate royalties and “fair, reasonable, and non-discriminatory” (FRAND) terms. 

But while the proposal nominally applies only to European patents, its effects would be far broader. Notably, the opinions on aggregate royalties and FRAND terms would apply worldwide. European policymakers would thus rule (albeit in nonbinding fashion) on the appropriate royalties to be charged around the globe. This would further embolden foreign jurisdictions to respond in kind, often without the guardrails and independence that have traditionally served to cabin policymakers in the West.

CHINA’S EFFORTS TO BECOME A ‘CYBER GREAT POWER’

Chinese policymakers have long considered SEPs to be of vital strategic importance, and have taken active steps to protect Chinese interests in this space. The latest move came from the Chongqing First Intermediate People’s Court in a dispute between Chinese firm Oppo and Finland’s Nokia. In a controversial December 2023 ruling, the court limited the maximum FRAND royalties that Nokia could charge Oppo for use of Nokia’s SEPs pertaining to the 5G standard.

Unfortunately, the ruling appears plainly biased toward Chinese interests. In calculating the royalties that Nokia could charge Oppo, the court applied a sizable discount to the rates applicable in the Chinese market. It has been reported that, in reaching its conclusion, the court defined an aggregate royalty rate for all 5G patents and apportioned it according to the number of patents each firm held—a widely discredited metric.
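To see why dividing an aggregate royalty according to raw patent counts is widely discredited, a stylized calculation helps. The firm names and figures below are hypothetical and do not come from the ruling:

```python
# Stylized illustration of patent-counting royalty apportionment.
# All names and numbers are hypothetical; they do not reflect the
# actual figures in the Chongqing court's ruling.

def apportion_by_count(aggregate_rate: float, patent_counts: dict) -> dict:
    """Split an aggregate royalty rate in proportion to raw patent counts."""
    total = sum(patent_counts.values())
    return {firm: aggregate_rate * n / total for firm, n in patent_counts.items()}

# Suppose an aggregate 5G royalty burden of 5% of the handset price,
# split between two (hypothetical) patent holders:
shares = apportion_by_count(0.05, {"FirmA": 3000, "FirmB": 1000})
# shares["FirmA"] is roughly 0.0375 (3.75%); shares["FirmB"] roughly 0.0125 (1.25%)

# FirmA receives three times FirmB's royalty solely because it holds
# three times as many patents, regardless of whether FirmB's patents are
# the foundational ones. Counting treats every patent as equally
# valuable, which is the core objection to the metric.
```

The point of the sketch is that the method carries no information about patent quality or essentiality; it rewards portfolio size alone.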

The court’s ruling has been widely seen as a protectionist move, and it has elicited concern from Western policymakers. It appears to set a dangerous precedent in which geopolitical considerations will play an increasingly large role in the otherwise highly complex and technical field of SEP policy.

TRANSPARENCY, AGGREGATE ROYALTY MANDATES, AND FRAND DETERMINATIONS

Leaving aside how China may respond, the EU’s draft regulation will likely be detrimental to innovators. The regulation would create a system of government-run essentiality checks and nonbinding royalty arbitrations. The goal would be to improve transparency and verify that patents declared “standard essential” truly qualify for that designation.

This system would, however, be both costly and difficult to operate. It would require so many qualified experts to serve as evaluators and conciliators that it may prove exceedingly difficult (or impossible) to find them, and the sheer volume of work required of those experts would likely be insurmountable, with the costs borne by industry players. Inventors would also be precluded from seeking injunctions while arbitration is ongoing. Ultimately, while nonbinding, the system may lead to a de facto royalty cap that dampens innovation.

Finally, it’s unclear whether this form of coordinated information sharing and collective royalty setting may give rise to collusion at various points in the value chain. This threatens both to harm consumers and to deter firms from commercializing standardized technologies. 

In short, these kinds of top-down initiatives likely fail to capture the nuances of individualized patents and standards. They may also add confusion and undermine the incentives that drive affordable innovation.

WESTERN POLICYMAKERS MUST RESIST CHINA’S INDUSTRIAL POLICY

The bottom line is that the kinds of changes under consideration by both U.S. and EU policymakers may undermine innovation in the West. SEP entrepreneurs have been successful because they have been able to monetize their innovations. If authorities take steps that needlessly tilt the negotiation process between innovators and implementers—as Chinese courts have started to do, and as Europe’s draft regulation may unintentionally achieve—it will harm both U.S. and EU leadership in intellectual-property-intensive industries. In turn, this would accelerate China’s goal of becoming “a cyber great power.”

For more on this issue, see the ICLE issue brief “FRAND Determinations Under the EU SEP Proposal: Discarding the Huawei Framework,” as well as the “ICLE Comments to USPTO on Issues at the Intersection of Standards and Intellectual Property.”

Intellectual Property & Licensing

From Data Myths to Data Reality: What Generative AI Can Tell Us About Competition Policy (and Vice Versa)

Scholarship I. Introduction It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google . . .

I. Introduction

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance — and uses that control to maintain its dominant position.”[1] Similar epithets have been hurled at virtually all large online platforms, including Facebook (Meta), Amazon, and Uber.[2]

While some of these claims continue even today (for example, “big data” is a key component of the U.S. Justice Department’s (“DOJ”) Google Search and AdTech antitrust suits),[3] a shiny new data target has emerged in the form of generative artificial intelligence. The launch of ChatGPT in November 2022, as well as the advent of AI image-generation services like Midjourney and Dall-E, have dramatically expanded people’s conception of what is, and what might be, possible to achieve with generative AI technologies built on massive data sets.

While these services remain in the early stages of mainstream adoption and are in the throes of rapid, unpredictable technological evolution, they nevertheless already appear on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that were purportedly made during the formative years of Web 2.0.[4] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[5] As Lina Khan, Chair of the FTC, put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI”.[6]

In that sense, the response from the competition-policy world is deeply troubling. Instead of engaging in critical self-assessment and adopting an appropriately restrained stance, the enforcement community appears to be champing at the bit. Rather than reassessing their prior assumptions in light of the current technological moment, enforcers’ top priority appears to be figuring out how to deploy existing competition tools rapidly and almost reflexively to address the presumed competitive failures presented by generative AI.[7]

It is increasingly common for competition enforcers to argue that so-called “data network effects” serve not only to entrench incumbents in the markets where that data is collected, but also confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[8] They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in an ongoing consultation, the European Commission asks: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[9] Unsurprisingly, the U.S. Federal Trade Commission (“FTC”) has been bullish about the risks posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[10]

Against this backdrop, it stands to reason that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for generative AI services. After all, it is widely recognized that data is an essential input for generative AI.[11] This competitive advantage should be all the more significant given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind (including breakthroughs like AlphaGo) and Meta’s open AI research have routinely made headlines.[12] Apple and Amazon also have vast experience with AI assistants, and all of these firms use AI technology throughout their platforms.[13]

Contrary to what one might expect, however, the tech giants have, to date, been unable to leverage their vast data troves to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[14] despite the fact that large tech platforms arguably have access to far more (and more up-to-date) data.

This article suggests there are important lessons to be learned from the current technological moment. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative AI technology may undercut many core assumptions of today’s competition-policy debates — assumptions rooted in the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

II. Data Network Effects Theory and Enforcement

Proponents of tougher interventions by competition enforcers into digital markets often cite data network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[15] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[16] This self-reinforcing cycle purportedly leads to market domination by a single firm. In Google’s case, for example, it is argued that the company’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[17]

Right off the bat, it is important to note the conceptual problem with these claims. Because data is used to improve the quality of products and/or to subsidize their use, the idea of data as an entry barrier suggests that any product improvement or price reduction made by an incumbent could be a problematic entry barrier to any new entrant. This is tantamount to an argument that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if this were treated as a problem, as it would imply that firms should under-compete — should forego consumer-welfare enhancements — in order to bring about a greater number of firms in a given market simply for its own sake.[18]

Meanwhile, actual economic studies of data network effects are few and far between, with scant empirical evidence to support the theory.[19] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic.[20] The authors ultimately conclude that data network effects can be of different magnitudes and have varying effects on firms’ incumbency advantage.[21] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[22]

This is echoed by other economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[23]

Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[24]

This possibility is also implicit in the paper by Hagiu and Wright.[25] Indeed, the authors’ theoretical model rests on an important distinction between within-user data advantages (that is, having access to more data about a given user) and across-user data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims.

Policymakers, however, have largely been receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration of the caveats that accompany them.[26] Indeed, it is remarkable that, in the Furman Report’s section on “[t]he data advantage for incumbents,” only two empirical economic studies are cited, and they offer directly contradictory conclusions with respect to the question of the strength of data advantages.[27] Nevertheless, the Furman Report concludes that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[28] and adopts without reservation “convincing” evidence from non-economists with apparently no empirical basis.[29]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[30]

As a result, the Commission cleared the merger on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[31] The Commission will likely focus on similar issues during its ongoing investigation into Microsoft’s investment in OpenAI.[32]

Along similar lines, the FTC’s complaint to enjoin Meta’s purchase of a virtual-reality (VR) fitness app called “Within” relied, among other things, on the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[33]

The U.S. Department of Justice’s twin cases against Google also raise data leveraging and data barriers to entry. The agency’s AdTech complaint alleges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[34] Similarly, in its Search complaint, the agency argues that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[35]

Finally, the merger guidelines published by several competition enforcers cite the acquisition of data as a potential source of competitive concerns. For instance, the FTC and DOJ’s newly published guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[36] Likewise, the UK Competition and Markets Authority (“CMA”) warns against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[37]

In short, competition authorities around the globe are taking an aggressive stance on data network effects. One way this has manifested is in enforcement decisions premised on fears that data collected by one platform might confer a decisive competitive advantage in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

III. Data Incumbency Advantages in Generative AI Markets

Given the assertions canvassed in the previous section, it seems reasonable to assume that firms such as Google, Meta, and Amazon would be in pole position to dominate the burgeoning market for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data, the likes of which their rivals could only dream when they launched their own services. Thus the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[38]

At the time of writing, however, this is not how things have unfolded — although it bears noting these markets remain in flux and the competitive landscape is susceptible to change. The first significantly successful generative AI service was arguably not from either Meta—which had been working on chatbots for years and had access to, arguably, the world’s largest database of actual chats—or Google. Instead, the breakthrough came from a previously unknown firm called OpenAI.

OpenAI’s ChatGPT service currently holds an estimated 60% of the market (though reliable numbers are somewhat elusive).[39] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than the previous record holder, TikTok.[40] Based on Google Trends data, ChatGPT is nine times more popular than Google’s own Bard service worldwide, and 14 times more popular in the U.S.[41] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[42] In short, at the time of writing, ChatGPT appears to be the most popular chatbot. And, so far, the entry of large players such as Google Bard or Meta AI appears to have had little effect on its market position.[43]

The picture is similar in the field of AI image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[44] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[45]

This raises several crucial questions: how have these AI upstarts managed to be so successful, and is their success just a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either of these questions dispositively, some observations concerning the role and value of data in digital markets would appear to be relevant.

A first important observation is that empirical studies suggest data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker puts it, following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[46]
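The intuition behind diminishing returns to data can be illustrated with a minimal statistical sketch of our own (not drawn from the studies cited here), assuming that estimation error shrinks in proportion to one over the square root of the sample size — a standard baseline in statistics:

```python
import math

def standard_error(sigma: float, n: int) -> float:
    """Error of an estimate based on n independent samples,
    under the textbook sigma / sqrt(n) scaling assumption."""
    return sigma / math.sqrt(n)

sigma = 1.0
sizes = [1_000, 10_000, 100_000, 1_000_000]
errors = {n: standard_error(sigma, n) for n in sizes}

# Marginal benefit of more data falls off sharply:
# going from 1,000 to 10,000 samples cuts error by ~0.0216,
# while going from 100,000 to 1,000,000 cuts it by only ~0.0022 --
# a tenfold smaller gain despite adding a hundred times more data.
gain_small = errors[1_000] - errors[10_000]
gain_large = errors[100_000] - errors[1_000_000]
print(f"{gain_small:.4f} vs {gain_large:.4f}")  # → 0.0216 vs 0.0022
```

Under this (admittedly stylized) assumption, the firm with the most data gains very little over a rival that merely has enough data, which is consistent with the empirical findings surveyed above.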

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne & Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[47]

In other words, being the firm with the most data appears to be far less important than having enough data, and this lower bar may be accessible to far more firms than one might initially think possible.

And obtaining enough data could become even easier — that is, the volume of required data could become even smaller — with technological progress. For instance, synthetic data may provide an adequate substitute to real-world data[48] — or may even outperform real-world data.[49] As Thibault Schrepel and Alex Pentland point out, “advances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.”[50]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, while other improvements (such as better training methods or data curation) could have a large effect. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[51]
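The quality-over-quantity point can be made concrete with a toy example of our own construction (purely illustrative, not the cited study’s method): estimating a true quantity from a small, curated sample versus a much larger sample in which a share of records are corrupted junk:

```python
import random
import statistics

random.seed(42)  # deterministic for reproducibility
TRUE_VALUE = 5.0

# Small curated dataset: 100 accurate measurements.
curated = [random.gauss(TRUE_VALUE, 1.0) for _ in range(100)]

# Large uncurated dataset: 10,000 records, roughly 30% of which are
# corrupted (recorded as 0.0 -- think duplicates, spam, or noise).
uncurated = [
    0.0 if random.random() < 0.3 else random.gauss(TRUE_VALUE, 1.0)
    for _ in range(10_000)
]

err_curated = abs(statistics.mean(curated) - TRUE_VALUE)
err_uncurated = abs(statistics.mean(uncurated) - TRUE_VALUE)

# The 100-point curated sample lands close to the truth, while the
# 10,000-point uncurated one is biased by roughly 1.5.
print(f"curated error: {err_curated:.2f}, uncurated error: {err_uncurated:.2f}")
```

The hundredfold data advantage does nothing to rescue the uncurated dataset, because its errors are systematic rather than random — echoing the observation that curation, not sheer volume, drives model quality.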

Consider, for instance, a user who wants to generate an image of a basketball. A model trained indiscriminately on vast numbers of public photos, in which basketballs appear amid copious other image data, may yield an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images could readily yield far superior results.[52] In one important example,

[t]he model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[53]

Current efforts are thus focused on improving the mathematical and logical reasoning of large language models (“LLMs”), rather than maximizing training datasets.[54] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets — such as GSM8K — to train their LLMs.[55] Second, the real challenge in creating cutting-edge AI is not so much collecting data as devising innovative training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[56]

Furthermore, it is worth noting that the data most relevant to startups operating in a given market may not be those data held by large incumbent platforms in other markets, but rather data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges — not before.[57]

The bottom line is that data is not the be-all and end-all that many in competition circles rather casually make it out to be.[58] While data may often confer marginal benefits, there is little sense these are ultimately decisive.[59] As a result, incumbent platforms’ access to vast numbers of users and data in their primary markets might only marginally affect their AI competitiveness.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[60] Examples of this abound in digital markets. Google overthrew Yahoo, despite initially having access to far fewer users and far less data; Google and Apple overcame Microsoft in the smartphone OS market despite having comparatively tiny ecosystems (at the time) to leverage; and TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger user bases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[61] and TikTok’s clever algorithm) appear to have played a far greater role than initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than the data it did (or did not) own. And going forward, the ability of OpenAI and its rivals to offer and monetize compelling marketplaces for custom versions of their generative AI technology will arguably play a much larger role than (and contribute to) their ownership of data.[62] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, but not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data).

For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[63] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to generate the right information is more important than simply owning vast troves of data.[64] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform.

Given the foregoing, it seems clear that the early success of OpenAI and other generative AI startups, as well as their chances of prevailing in the future, hinges on a far broader range of factors than the mere ownership of data. Indeed, if data ownership consistently conferred a significant competitive advantage, these new firms would not be where they are today. This does not mean that data is worthless, of course. Rather, it means that competition authorities should not assume that merely possessing data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm is premature.

IV. Five Key Takeaways: Reconceptualizing the Role of Data in Generative AI Competition

As we explain above, data (network effects) are not the source of barriers to entry that they are sometimes made out to be; rather, the picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[65]

While data can be an important part of the competitive landscape, incumbent data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five main lessons emerge:

  1. Data can be (very) valuable, but past a certain threshold, the benefits tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps the effect of the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for competition-policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative AI chatbots and image-generation services before the emergence of ChatGPT, Dall-E, Midjourney, etc. This failure suggests that, in a process akin to Christensen’s Innovator’s Dilemma,[66] something about their existing services and capabilities was holding them back in those markets. Of course, this does not necessarily mean that those same services/capabilities could not become an advantage when the generative AI market starts addressing issues of monetization and scale.[67] But it does mean that assumptions of a firm’s market power based on its possession of data are off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse desires with reality. For, while there is a vibrant AI-startup ecosystem, there is at least a case to be made that the most significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms — although nothing is certain at this stage. Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms.

Finally, even if there were a competition-related market failure to be addressed (which is anything but clear) in the field of generative AI, it is unclear that contemplated remedies would do more good than harm. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that mandated data sharing — a solution championed by EU policymakers, among others — may sometimes dampen competition in generative AI markets.[68] This is also true of legislation like the GDPR that makes it harder for firms to acquire more data about consumers — assuming such data is, indeed, useful to generative AI services.[69]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that leads competition authorities to believe that data-incumbency advantages are likely to harm competition in generative AI markets — or even in the data-intensive Web 2.0 markets that preceded them. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

[1] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[2] See e.g. Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23 (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind — Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments — and its competitors.”).

[3] See generally Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal/; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010- (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[4] See e.g. Press Release, European Commission, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI (Jan. 9, 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85; Krysten Crawford, FTC’s Lina Khan warns Big Tech over AI, SIEPR (Nov. 3, 2020), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[5] See e.g. John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”);
Bertin Martins, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), available at https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[6] Rana Foroohar, The Great US-Europe Antitrust Divide, FT (Feb. 5, 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14.

[7] See e.g. Press Release, European Commission, supra note 4.

[8] See infra, Section II. Commentators have also made similar claims. See, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has LLaMa. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[9] Press Release, European Commission, supra note 4.

[10] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023) at 4, available at https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[11] See, e.g. Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, and Asin Tavakoli, The data dividend: Fueling generative AI, McKinsey Digital (Sept. 15, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[12] See e.g. Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; See e.g. Will Douglas Heaven, Google DeepMind used a large language model to solve an unsolved math problem, MIT Technology Review (Dec. 14, 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set/; See also A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research/; See also 200 languages within a single AI model: A breakthrough in high-quality machine translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation/ (last visited Jan. 18, 2023).

[13] See e.g. Jennifer Allen, 10 years of Siri: the history of Apple’s voice assistant, Tech Radar (Oct. 4, 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple is already using machine learning and AI in iOS, Apple Insider (Nov. 20, 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; see also Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (July 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon/?sh=1734bcb268d0.

[14] See infra Section III.

[15] See e.g. Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012); Mark A. Lemley & Matthew Wansley, Coopting Disruption (February 1, 2024), https://ssrn.com/abstract=4713845.

[16] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233, available at https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50. See also e.g. Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, available at http://wrap.warwick.ac.uk/134220) (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”). See also Karl Schmedders, José Parra-Moyano & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[17] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added). See also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[18] See also Yun, supra note 17 at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[19] For a review of the literature on increasing returns to scale in data (this topic is broader than data network effects) see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo Mason L. Rev. 1281, 1344 (2021).

[20] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023) (final preprint available at https://andreihagiu.com/wp-content/uploads/2022/08/Data-enabled-learning-Final-RAND-Article.pdf).

[21] Id. at 2. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms….”

[22] Id.

[23] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 10080, 101031 (2021).

[24] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Pol’y 143, 167 (2020).

[25] See Hagiu & Wright, supra note 21.

[26] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (Spring 2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf. See also Manne & Auer, supra note 20, at 1330.

[27] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[28] Id. at 34.

[29] Id. at 35. To its credit, it should be noted, the Furman Report does counsel caution before mandating access to data as a remedy to promote competition. See id. at 75. That said, the Furman Report does maintain that such a remedy should certainly be on the table because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. In fact, the evidence does not show this.

[30] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf at 455.

[31] Id. at 896.

[32] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules/.

[33] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[34] Amended Complaint (D.D.C), supra note 4, at ¶37.

[35] Amended Complaint (E.D. Va), supra note 4, at ¶8.

[36] US Dep’t of Justice & Fed. Trade Comm’n, Merger Guidelines (2023) at 25, https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[37] Competition and Mkts. Auth., Merger Assessment Guidelines (2021) at  ¶7.19(e), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[38] Furman Report, supra note 28, at ¶4.

[39] See e.g. Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c.

[40] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[41] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited, Jan. 12, 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[42] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (June 5, 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard/.

[43] See Press Release, Meta, Introducing New AI Experiences Across Our Family of Apps and Devices (Sept. 27, 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools/; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023), https://blog.google/technology/ai/bard-google-ai-search-updates/.

[44] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023), https://yon.fun/midjourney-statistics/. See also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023), https://approachableai.com/midjourney-statistics/.

[45] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023), https://blog.google/products/search/google-search-generative-ai-october-update/; Imagine with Meta AI, Meta (last visited Jan. 12, 2024), https://imagine.meta.com/.

[46] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[47] Manne & Auer, supra note 20, at 1345.

[48] See e.g. Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[49] See e.g. Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[50] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[51] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan 15, 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[52] See e.g. Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”). See also Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023), https://arxiv.org/abs/2309.16671.

[53] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do/ (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[54] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project/.

[55] Id. See also GSM8K, Papers with Code (last visited Jan. 18, 2023), available at https://paperswithcode.com/dataset/gsm8k; MATH Dataset, GitHub (last visited Jan. 18, 2024), available at https://github.com/hendrycks/math.

[56] Lee, supra note 55.

[57] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services/ (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780.).

[58] See e.g., Lemley & Wansley, supra note 18, at 22 (“Incumbents have all that information. It would be difficult for a new entrant to acquire similar datasets independently….”).

[59] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020) at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[60] Or, as John Yun puts it, data is only a small component of digital firms’ production function. See Yun, supra note 17, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[61] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History–Computer (Aug. 8, 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly/.

[62] Introducing the GPT Store, Open AI (Jan. 10, 2024), https://openai.com/blog/introducing-the-gpt-store.

[63] See Michael Schade, How ChatGPT and Our Language Models are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting innovations from OpenAI in 2021, AIM (Jan. 1, 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021/; Danny Hernandez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020), available at https://arxiv.org/abs/2005.04305.

[64] See Yun, supra note 17 at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[65] Lerner, supra note 58, at 4-5 (emphasis added).

[66] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[67] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[68] See Hagiu and Wright, supra note 21, at 4 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”). See also Lerner, supra note 58.

[69] See e.g. Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).


ICLE Response to the Australian Competition Taskforce’s Merger Reform Consultation


I. About the International Center for Law & Economics

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of antitrust law and policy.

ICLE’s interest is to ensure that antitrust law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis. Some of the proposals in the Competition Taskforce’s Reform Consultation (“Consultation”) threaten to erode those foundations by, among other things: shifting merger analysis toward a focus on the number of competitors, rather than the impact on competition; reversing the burden of proof; curtailing rights of defense; and adopting an unduly strict approach to mergers in particular sectors. Our overriding concern is that intellectually coherent antitrust policy must focus on safeguarding competition and the interests of consumers.

In its ongoing efforts to contribute to ensuring that antitrust law in general, and merger control in particular, remain tethered to sound principles of economics, law, and due process, ICLE has submitted responses to consultations and published papers, articles, and reports in a number of jurisdictions, including the European Union, the United States, Brazil, the Republic of Korea, the United Kingdom, and India. These and other publications are available on ICLE’s website.[1]

II. Summary of Key Points

We appreciate the opportunity to comment on the Competition Taskforce’s Consultation. Our comments below mirror the structure of the main body of the Consultation. Section by section, we suggest improvements to the Consultation’s approach and cite background law and economics that we believe the Treasury should keep in mind as it considers whether to move forward with merger reform in Australia.

  • Question 6 — Australia should not skew its merger regime toward blocking mergers under conditions of uncertainty. Uncertainty is endemic in merger control. Since the vast majority of mergers are procompetitive—including mergers in what is commonly called the “digital sector”—an error-cost-analysis approach would suggest that false negatives are preferable to false positives. Concrete evidence of a likely substantial lessening of competition post-merger should continue to be the decisive factor in decisions to block a merger, not uncertainty about its effects.
  • Question 8 — While potential competition and so-called “killer acquisitions” are important theories for the Australian Competition and Consumer Commission (“ACCC”) to consider when engaging in merger review, neither suggests that the burden of proof needed to block a merger should be changed, nor do they warrant an overhaul of the existing merger regime. Furthermore, given the paucity of evidence finding “killer acquisitions” in the real world, it is highly unlikely that any economic woes that Australia currently faces can be blamed on an epidemic of killer acquisitions or acquisitions of potential/nascent competitors. If the Treasury is going to adopt any rules to address these theories of harm, it should do so in a manner consistent with the error-cost framework (see reply to Question 6) and should not undercut the benefits and incentives that startup firms derive from the prospect of being acquired by a larger player.
  • Question 9 — Merger control should remain tethered to the analysis of competitive effects within the framework of the substantial lessening of competition test (“SLC test”), rather than seeking to foster any particular market structure. Market structure is, at best, an imperfect proxy for competitive effects and, at worst, a deeply misleading one. As such, it should remain just one tool among many in merger analysis, rather than an end in itself.
  • Question 13 — In deciding whether to impose a mandatory-notification regime, Australia should be guided by error-cost considerations, and not merely seek to replicate international trends. While there are sound reasons to prefer a system of mandatory-merger notifications, the Treasury cannot ignore the costs of filing mergers or of reviewing them. It should be noted that some studies suggest that voluntary merger notification may achieve objectives similar to those achieved by compulsory systems at lower cost to the merging parties, as well as to the regulator. If the Treasury nonetheless decides to impose mandatory notification, it should seek to contain unnecessary costs by setting a reasonable turnover threshold, thereby filtering out transactions with little-to-no potential for anticompetitive harm.
  • Question 17 — Australian merger control should require that a decisionmaker be satisfied that a merger would likely and substantially lessen competition before blocking it, rather than effectively reversing the burden of proof by requiring that merging parties demonstrate that it would not. In a misguided attempt to shift the costs of erroneous decisions from the public to the merging parties, the ACCC’s proposal forgets that false positives also impose costs on the public, most notably in the form of foregone consumer benefits. In addition, since the vast majority of mergers are procompetitive, including mergers in the digital sector, there is no objective empirical basis for reversing the burden of proof along the proposed lines.
  • Question 18 — The SLC test should not be amended to include acquisitions that “entrench, materially increase or materially extend a position of substantial market power.” First, the Consultation seems to conflate instances of anticompetitive leveraging with cases where an incumbent in one market enters an adjacent one. The latter is a powerful source of competition and, as such, should not be curtailed. Second, the former is already covered by the SLC test, which equips authorities with sufficient tools to curb the misuse of market power post-merger. Third, it is unclear what the term “materially” would mean in the proposed context, or what it would add to the SLC test. Australian merger control already interprets a “substantial” lessening of competition to mean one that is “material in a relative sense and meaningful.” The term “materially” thus risks injecting unnecessary uncertainty and indeterminacy into the system.
  • Question 19 — As follows from our response to Question 9, Section 50(3) should not be amended to yield an increased focus on changes to market structure as a result of a merger. It is also unclear what would be gained from removing the factors listed in Section 50(3). More than a “modernization” (as the Consultation calls it), their removal appears redundant, as the listed factors already significantly overlap with those commonly used under the SLC test. To the extent that these factors place a “straitjacket” on courts (though in principle they are sufficiently broad and flexible), they could be removed, so long as merger analysis remains tethered to the SLC test and respects its overarching logic.
  • Question 20 — Non-competition public benefits should play a limited role in merger control. Competition authorities are, in principle, ill-suited to rank, weigh, and prioritize complex and incommensurable goals and values. The injection of public-benefits analysis into merger review magnifies the risk of discretionary and arbitrary decision making.

III. Consultation Responses

A.   Question 6

Is Australia’s merger regime ‘skewed towards clearance’? Would it be more appropriate for the framework to skew towards blocking mergers where there is sufficient uncertainty about competition impacts?

In order for a merger to be blocked in Australia, it must be demonstrated that the merger is likely to substantially lessen competition. In the context of Section 50, “likely” means a “real commercial likelihood.”[2] Furthermore, a “substantial” lessening of competition need not be “large or weighty… but one that is ‘real or of substance… and thereby meaningful and relevant to the competitive process.’”[3] This does not set an inordinately high bar for authorities to clear.

In a sense, however, the ACCC is right when it says that Australian merger control is “skewed towards clearance.”[4] This is because all merger regimes are “skewed” toward clearance. Even in jurisdictions that require mandatory notifications, only a fraction of mergers—typically, those above a certain turnover threshold—are examined by competition authorities. Only a small percentage of these transactions are subject to conditional approval, and an even smaller percentage still are blocked or abandoned.[5] This means that the vast majority of mergers are allowed to proceed as intended by the parties, and for good reason. As the ACCC itself and the Consultation note, most mergers do not raise competition concerns.[6]

But this statement is only half true: most mergers are not merely benign, but affirmatively procompetitive. Indeed, mergers are often an effective way to reduce transaction costs and generate economies of scale in production,[7] which can enable companies to bolster innovation post-merger. According to Robert Kulick and Andrew Card, mergers are responsible for increasing research and development expenditure by as much as $13.5 billion annually.[8] And as Francine Lafontaine and Margaret Slade point out in the context of vertical mergers:

In spite of the lack of unified theory, over all a fairly clear empirical picture emerges. The data appear to be telling us that efficiency considerations overwhelm anticompetitive motives in most contexts. Furthermore, even when we limit attention to natural monopolies or tight oligopolies, the evidence of anticompetitive harm is not strong. [9]

While vertical mergers are generally thought to be less likely to harm competition, this does not cast horizontal mergers in a negative light. It is true that the effects of horizontal mergers are empirically less well-documented. But while there is some evidence that horizontal mergers can reduce consumer welfare, at least in the short run, the long-run effects appear to be strongly positive. Dario Focarelli and Fabio Panetta find:

…strong evidence that, although consolidation does generate adverse price changes, these are temporary. In the long run, efficiency gains dominate over the market power effect, leading to more favorable prices for consumers.[10]

Furthermore, and in line with the above, some studies have found that horizontal-merger enforcement has itself harmed consumers.[11]

It is therefore only natural that merger regimes should be “skewed” toward clearance. But this is no more a flaw of the system than is the presumption that cartels are harmful. Instead, it reflects the well-documented and empirically grounded insight that most mergers do not raise competition concerns and that there are myriad legitimate, procompetitive reasons for firms to merge.[12]

It also reflects the principle that, since errors are inevitable, merger control should prefer Type II over Type I errors. Legal decision making and enforcement under uncertainty are always difficult and always potentially costly.[13] Given the limits of knowledge, there is always a looming risk of error.[14] Where enforcers or judges try to ascertain the likely effects of a business practice, such as a merger, their forward-looking analysis must infer anticompetitive conduct from limited information.[15] To mitigate those risks, antitrust law generally, and merger control specifically, must rely on heuristics grounded in the error-cost framework,[16] whose objective is to ensure that regulatory rules, enforcement decisions, and judicial outcomes minimize the expected sum of (1) the costs of erroneous condemnation and deterrence of beneficial conduct (“false positives,” or “Type I errors”); (2) the costs of erroneous allowance and under-deterrence of harmful conduct (“false negatives,” or “Type II errors”); and (3) the costs of administering the system.

Accordingly, “skewing” the merger-analysis framework toward blocking mergers could, in theory, be appropriate if enforcers and courts knew that mergers were always, or almost always, harmful (as in the case of, e.g., cartels). But we have already established that the opposite is true: most mergers are either benign or procompetitive. The Consultation’s caveat that this would apply only in cases where “there is sufficient uncertainty about competition impacts” does not carve out a convincing exception to this principle, particularly given that, in a forward-looking exercise, there is by definition always some degree of uncertainty about future outcomes. Because most mergers are procompetitive or benign, any lingering uncertainty should be resolved in favor of allowing a merger, not blocking it.

Concrete evidence of a likely substantial lessening of competition post-merger should therefore continue to be the decisive factor in decisions to block a merger, not uncertainty about its effects (see also the response to Question 17). Under uncertainty, the error-cost framework, when applied to antitrust, leads in most cases to a preference for Type II over Type I errors, and mergers are no exception.[17] The three main reasons can be summarized as follows. First, “mistaken inferences and the resulting false condemnations are especially costly, because they often chill the very conduct the antitrust laws are designed to protect.”[18] The aforementioned procompetitive benefits of mergers, coupled with the general principle that parties in a free-market economy should have the latitude to buy and sell to and from whomever they choose, are cases in point. Second, false positives may be more difficult to correct, especially in light of the weight of judicial precedent.[19] Third, the costs of a wrongly permitted monopoly are small compared to the costs of competition wrongly condemned.[20] As Lionel Robbins once put it: monopoly tends to break, tariffs tend to stick.[21] The same applies to wrongly prohibited mergers.
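The error-cost comparison above can be made concrete with a stylized calculation. The sketch below (in Python, with every probability and cost figure purely hypothetical and chosen only for illustration) computes the expected per-merger error cost of a status-quo regime versus one “skewed toward blocking,” given the premise that only a small fraction of reviewed mergers are actually harmful:

```python
# Stylized error-cost comparison: status quo vs. a regime "skewed toward
# blocking" mergers under uncertainty. All figures are hypothetical.

P_HARMFUL = 0.05   # assumed base rate of anticompetitive mergers among those reviewed
COST_FP = 10.0     # cost of wrongly blocking a procompetitive merger (Type I error)
COST_FN = 15.0     # cost of wrongly clearing an anticompetitive merger (Type II error)

def expected_error_cost(p_block_if_good: float, p_clear_if_bad: float) -> float:
    """Expected cost per reviewed merger: P(Type I) * C_FP + P(Type II) * C_FN."""
    type_1 = (1 - P_HARMFUL) * p_block_if_good * COST_FP
    type_2 = P_HARMFUL * p_clear_if_bad * COST_FN
    return type_1 + type_2

# Status quo: rarely blocks good mergers (2%), sometimes clears bad ones (40%).
status_quo = expected_error_cost(0.02, 0.40)   # 0.19 + 0.30 = 0.49

# "Skewed" regime: blocking under uncertainty lets fewer bad mergers through
# (10%) but blocks many more good ones (15%).
skewed = expected_error_cost(0.15, 0.10)       # 1.425 + 0.075 = 1.50

print(f"status quo: {status_quo:.3f}")
print(f"skewed:     {skewed:.3f}")
```

Because harmful mergers are rare and false positives hit the far larger pool of benign mergers, even a Type I cost *lower* than the Type II cost leaves the "skewed" regime roughly three times as costly in expectation under these illustrative numbers.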

In sum, Australia should not skew its merger regime toward blocking mergers under uncertainty.

B.   Question 8

Is there evidence of acquisitions by large firms (such as serial or creeping acquisitions, acquisitions of nascent competitors, ‘killer acquisitions’, and acquisitions by digital platforms) having anti-competitive effects in Australia?

We do not know whether there have been any such cases in Australia. We would, however, like to offer more general commentary on the relevance of nascent competition and killer acquisitions in the context of merger control, especially as concerns digital platforms.

One of the most important concerns about acquisitions by the major incumbent tech platforms is that they can be used to eliminate potential competitors that currently do not compete, but could leverage their existing network to compete in the future—a potential that incumbents can better identify than can competition enforcers.[22]

As the Furman Review states:

In mergers involving digital companies, the harms will often centre around the loss of potential competition, which the target company in an adjacent market may provide in the future, once their services develop.[23]

Similar concerns have been raised in the Stigler Report,[24] the expert report commissioned by Commissioner Margrethe Vestager for the European Commission,[25] and in the ACCC’s own Fifth Interim Report of the Digital Platform Services Inquiry.[26] Facebook’s acquisition of Instagram is frequently cited as a paradigmatic example of this phenomenon.

There are, however, a range of issues with using this concern as the basis for a more restrictive merger regime. First, while doubtless this kind of behavior is a risk, and competition enforcers should weigh potential competition as part of the range of considerations in any merger review, potential-competition theories often prove too much. If one firm with a similar but fundamentally different product poses a potential threat to a purchaser, there may be many other firms with similar, but fundamentally different, products that do, too.

If Instagram, with its photo feed and social features, posed a potential or nascent competitive threat to Facebook when Facebook acquired it, then so must other services with products that are clearly distinct from Facebook but have social features. In that case, Facebook faces potential competition from other services like TikTok, Twitch, YouTube, Twitter (X), and Snapchat, all of which have services that are at least as similar to Facebook’s as Instagram’s. In this case, the loss of a single, relatively small potential competitor out of many cannot be counted as a significant loss for competition, since so many other potential and actual competitors remain.

The most compelling version of the potential and nascent competition argument is that offered by Steven Salop, who argues that since a monopolist’s profits will tend to exceed duopolists’ combined profits, a monopolist will normally be willing and able to buy a would-be competitor for more than the competitor would be able to earn if it entered the market and competed directly, earning only duopoly profits.[27]
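Salop’s argument can be sketched as a simple profit comparison. The notation below is ours, offered purely for illustration; it abstracts from entry costs, bargaining frictions, and uncertainty about whether entry would succeed:

```latex
% Let \pi_M denote monopoly profit and \pi_D the profit each firm earns
% under duopoly. Standard oligopoly models imply that joint duopoly
% profits fall short of the monopoly profit:
\[ \pi_M > 2\pi_D \]
% The incumbent's maximum willingness to pay to prevent entry is what it
% would lose from entry, \pi_M - \pi_D, while the entrant's outside option
% is its prospective duopoly profit, \pi_D. Rearranging the inequality:
\[ \pi_M - \pi_D > \pi_D \]
% So there is always an acquisition price p satisfying
\[ \pi_D < p < \pi_M - \pi_D \]
% that leaves both parties better off than entry followed by competition.
```

On this stylized logic, the monopolist can always outbid the would-be entrant’s standalone value, which is what makes the model theoretically elegant.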

While theoretically elegant, this model has limited use in understanding real-world scenarios. First, it assumes that entry is only possible once—i.e., that after a monopolist purchases a would-be competitor, it can breathe easy. But if repeat entry is possible, such that another firm can enter the market at some point after an acquisition has taken place, the monopolist will be engaged in a potentially endless series of acquisitions, sharing its monopoly profits with a succession of would-be duopolists until there is no monopoly profit left.

Second, the model does not predict what share of monopoly profits would go to the entrant, as compared to the monopolist. The entrant could hold out for nearly all of the monopolist’s profit share, adjusted for the entrant’s expected success in becoming a duopolist.

Third, apart from being a poor strategy for preserving monopoly profits—since these may largely accrue to the entrants, under this model—this could lead to stronger incentives for entry than in a scenario where the duopolists were left to compete with one another, leading to more startup formation and entry overall.

Finally, acquisitions of potential competitors, far from harming competition, often benefit consumers. The acquisition of Instagram by Facebook, for example, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users, and provided those services with a powerful monetization mechanism that was otherwise unavailable to Instagram.[28] As Ben Sperry has written:

Facebook has helped to build Instagram into the product it is today, a position that was far from guaranteed, and that most of the commentators who mocked the merger did not even imagine was possible. Instagram’s integration into the Facebook platform in fact did benefit users, as evidenced by the rise of Instagram and other third-party photo apps on Facebook’s platform.[29]

In other words, many supposedly anticompetitive acquisitions appear that way only because of improvements made to the acquired business by the acquiring platform.[30]

As for “killer acquisitions,” the term refers to scenarios in which incumbents acquire a firm simply to shut down its pipeline of products that compete closely with their own. By eliminating these products and research lines, it is feared, “killer acquisitions” could harm consumers by removing would-be competitors and their products from the market, thereby eliminating an innovative rival. A recent study by Marc Ivaldi, Nicolas Petit, and Selçukhan Ünekbas, however, recommends caution about the killer-acquisition “hype.” Despite the disproportionate attention they have received in policy circles, “killer acquisitions” are an exceedingly rare phenomenon. In pharmaceuticals, where the risk is arguably highest, they account for between 5.3% and 7.4% of all acquisitions, while in digital markets, the rate is closer to 1 in 175.[31] The authors ultimately find that:

Examining acquisitions by large technology firms in ICT industries screened by the European Commission, [we find] that acquired products are often not killed but scaled, post-merger industry output demonstrably increases, and the relevant markets remain dynamic post-transaction. These findings cast doubt on contemporary calls for tightening of merger control policies.[32]

Thus, acquisitions of potential competitors and smaller rivals more often than not lead to valuable synergies, efficiencies, and the successful scaling of products and integration of technologies.

But there is an arguably even more important reason why the ACCC should not preventively restrict companies’ ability to acquire smaller rivals (or potential rivals). To safeguard incentives to invest and innovate, it is essential that buyouts remain a viable “way out” for startups and small players. As ICLE has argued previously:

Venture capitalists invest on the understanding that many of the businesses in their portfolio will likely fail, but that the returns from a single successful exit could be large enough to offset any failures. Unsurprisingly, this means that exit considerations are the most important factor for VCs when valuing a company. A US survey of VCs found 89% considered exits important and 48% considered it the most important factor. This is particularly important for later-stage VCs.[33] (emphasis added)

Indeed, the “killer” label obfuscates the fact that acquisitions are frequently a desired exit strategy for founders, especially founders of startups and small companies. Investors and entrepreneurs hope to make money from the products into which they are putting their time and money. While that may come from the product becoming wildly successful and potentially displacing an incumbent, this outcome can be exceedingly difficult to achieve. The prospect of acquisition increases the possibility that these entrepreneurs can earn a return, and thus magnifies their incentives to build and innovate.[34]

In sum, while potential competition and so-called killer acquisitions are important theories for the ACCC to consider when engaging in merger review, neither suggests that the burden of proof needed to reject a merger should be changed, much less that the existing merger regime warrants an overhaul. Furthermore, given the paucity of “killer acquisitions” in the real world, it is highly unlikely that any economic woes Australia currently faces are due to an epidemic of killer acquisitions or acquisitions of potential or nascent competitors. Indeed, a recent paper by Jonathan Barnett finds that concerns around startup acquisitions have been vastly exaggerated, while their benefits have been underappreciated:

A review of the relevant body of evidence finds that these widely-held views concerning incumbent/startup acquisitions rest on meager support, confined to ambiguous evidence drawn from a small portion of the total universe of acquisitions in the pharmaceutical market and theoretical models of acquisition transactions in information technology markets. Moreover, the emergent regulatory and scholarly consensus fails to take into account the rich body of evidence showing the critical function played by incumbent/startup acquisitions in supplying a monetization mechanism that induces venture-capital investment and promotes startup entry in technology markets.

In addition:

Proposed changes to merger review standards would disrupt these efficient transactional mechanisms and are likely to have counterproductive effects on competitive conditions in innovation markets.[35]

Accordingly, if the Treasury is going to adopt any rules to address these theories of harm, it should do so in a way consistent with the error-cost framework (see reply to Question 6); that does not undercut the benefits and incentives that derive from the prospect of acquisition by a larger player; and that accurately reflects the real (modest) anticompetitive threat posed by killer acquisitions, rather than one animated by dystopic hyperbole.[36]

C.   Question 9

Should Australia’s merger regime focus more on acquisitions by firms with market power, and/or the effect of the acquisitions on the overall structure of the market?

Merger control should remain tethered to analysis of competitive effects within the framework of the SLC test, rather than on fostering any particular market structure. Market structure is, at best, an imperfect proxy for competitive effects and, at worst, a misleading one. As such, it should be considered just one tool among many for scrutinizing mergers, not an end in itself.

To start, the assumption that “too much” concentration is harmful presumes both that the structure of a market is what determines economic outcomes, and that anyone knows what the “right” amount of concentration is.[37] But as economists have understood since at least the 1970s (despite an extremely vigorous, but ultimately futile, effort to show otherwise), market structure is not outcome determinative.[38] As Harold Demsetz has written:

Once perfect knowledge of technology and price is abandoned, [competitive intensity] may increase, decrease, or remain unchanged as the number of firms in the market is increased.… [I]t is presumptuous to conclude… that markets populated by fewer firms perform less well or offer competition that is less intense.[39]

This view is well-supported, and held by scholars across the political spectrum.[40] To take one prominent recent example, professors Fiona Scott Morton (deputy assistant attorney general for economics in the U.S. Justice Department Antitrust Division under President Barack Obama), Martin Gaynor (former director of the Federal Trade Commission Bureau of Economics under President Obama), and Steven Berry surveyed the industrial-organization literature and found that presumptions based on measures of concentration are unlikely to provide sound guidance for public policy:

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration.… Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl Hirschman Index should be given little weight in policy debates.[41]

That increased concentration is correlated neither with anticompetitive causes nor with deleterious economic effects is also demonstrated by a recent, influential empirical paper by Sharat Ganapati. Ganapati finds that the increase in industry concentration in U.S. non-manufacturing sectors between 1972 and 2012 was “related to an offsetting and positive force—these oligopolies are likely due to technical innovation or scale economies. [The] data suggests that national oligopolies are strongly correlated with innovations in productivity.”[42] In the end, Ganapati found, increased concentration resulted from a beneficial growth in firm size in productive industries that “expand[s] real output and hold[s] down prices, raising consumer welfare, while maintaining or reducing [these firms’] workforces.”[43] Sam Peltzman’s research on increasing concentration in manufacturing finds that it has, on average, been associated with both increased productivity growth and widening margins of price over input costs. These two effects offset each other, leading to “trivial” net price effects.[44]

Further, the presence of harmful effects in industries with increased concentration cannot readily be extrapolated to other industries. Thus, while some studies have plausibly shown that an increase in concentration in a particular case led to higher prices (although this is true in only a minority of the relevant literature), assuming the same result from an increase in concentration in other industries or other contexts is simply not justified:

The most plausible competitive or efficiency theory of any particular industry’s structure and business practices is as likely to be idiosyncratic to that industry as the most plausible strategic theory with market power.[45]

As Chad Syverson recently summarized:

Perhaps the deepest conceptual problem with concentration as a measure of market power is that it is an outcome, not an immutable core determinant of how competitive an industry or market is… As a result, concentration is worse than just a noisy barometer of market power. Instead, we cannot even generally know which way the barometer is oriented.[46]

In other words, depending on the nature and dynamics of the market, competition may well be protected under conditions that preserve a certain number of competitors in the relevant market. But competition may also be protected under conditions in which a single winner takes all on the merits of their business.[47] It is reductive, and bad policy, to presume that a certain number of competitors is always and everywhere conducive to better economic outcomes, or indicative of anticompetitive harm.

This does not mean that concentration measures have no use in merger enforcement. Instead, it demonstrates that market concentration is often unrelated to antitrust enforcement because it is driven by factors that are endogenous to each industry. In revamping its merger-control rules, Australia should be careful not to rely too heavily on structural presumptions based on concentration measures, as these may be poor indicators of those cases where antitrust enforcement would be most beneficial to consumers.

In sum, market structure should remain only a proxy for determining whether a transaction significantly lessens competition. It should not be at the forefront of merger review. And it should certainly not be the determining factor in deciding whether to block a merger.

D.   Question 13

Should Australia introduce a mandatory notification regime, and what would be the key considerations for designing notification thresholds?

The ACCC has argued that Australia is an “international outlier” in not requiring mandatory notification of mergers.[48] While it is true that most countries with merger-control rules also require mandatory notification of mergers when these exceed a certain threshold, there are also notable examples where this is not the case. For example, the United Kingdom, one of the leading competition jurisdictions in the world, does not require mandatory notification of mergers.

In deciding whether to impose a mandatory-notification regime and accompanying notification thresholds, Australia should not—as a matter of principle—be guided by international trends. International trends may be a useful indicator, but they can also be misleading. Instead, Australia’s decision should be informed by close analysis of error costs. In particular, Australia should seek to understand how a notification regime would affect the balance between Type I and Type II errors in this context. A notification regime would presumably reduce false negatives without necessarily increasing false positives, which is a good outcome.

In its calculation, however, the Treasury cannot ignore the costs of filing mergers and of reviewing them. If designed poorly, mandatory notifications can be a burden for the merging firms, for third parties, and for the reviewing authorities, siphoning resources that could be better deployed elsewhere. It is here where a voluntary-notification regime could have an edge over the alternative. For instance, a study by Chongwoo Choe comparing systems of compulsory pre-merger notification with the Australian system of voluntary pre-merger notification found that:

Thanks to the signaling opportunity that arises when notification is voluntary, voluntary notification leads to lower enforcement costs for the regulator and lower notification costs for the merging parties. Some of the theoretical predictions are supported by exploratory empirical tests using merger data from Australia. Overall, our results suggest that voluntary merger notification may achieve objectives similar to those achieved by compulsory systems at lower costs to the merging parties as well as to the regulator.[49] (emphasis added).

If the Treasury nonetheless decides to mandate merger notification, the next step would be to establish a notification threshold, as it is evident that not all mergers can, or should, be notified to the Australian authorities. Indeed, many mergers may be patently uninteresting from a competition perspective (e.g., one small supermarket in Perth buying another), while others might not have a significant nexus with Australia (e.g., where an international company that does modest business in Australia buys a shop in Spain).[50] Too many merger notifications strain the public’s limited resources and disproportionately affect smaller companies, as these companies are less capable of covering administrative costs and filing fees. To mitigate such unnecessary costs, the Treasury should establish reasonable thresholds that help filter out transactions where the merging parties are unlikely to have significant market power post-merger.

But what constitutes a reasonable threshold? Our view is that there is no need to reinvent the wheel here. Turnover has typically been used as a proxy for a merger’s competitive impact because it offers a first indicator of the parties’ relative position on the market. Despite the Consultation’s claim that “mergers of all sizes are potentially capable of raising competition concerns,”[51] where the parties (and especially the target company) have either no or only negligible turnover in Australia, it is highly unlikely that the merger will significantly lessen competition. If the Treasury decides to impose mandatory notification for mergers, it should therefore consider using a turnover-based threshold.

E.    Question 17

Should Australia’s merger control regime require the decision-maker to be satisfied that a proposed merger:

  • would be likely to substantially lessen competition before blocking it; or

  • would not be likely to substantially lessen competition before clearing it?

The second option would essentially reverse the burden of proof in merger control. Instead of requiring the authority to prove that a merger would substantially lessen competition, it would fall on the merging parties to prove a negative—i.e., that the merger would not be likely to substantially lessen competition.

The ACCC has made this proposal because it:

Means that the risk of error is borne by the merger parties rather than the public. In the cases where this difference matters (for example where there is uncertainty or a number of possible future outcomes), the default position should be to leave the risk with the merger parties, not to put at risk the public interest in maintaining the state of competition into the future.[52]

The Consultation sympathizes. It recognizes that “there are trade-offs between the risks of false positives and false negatives in designing a merger test,” but contends that, while both lead to lower output, higher prices, lower quality, and less innovation, “allowing anti-competitive mergers means that merging parties benefit at the expense of consumers.”[53]

But this argument is based on a flawed premise. The risk of error—whether Type I or Type II error—is always borne by the public. The public is harmed by false positives in at least two ways. First, and most directly, it suffers harm through the foregone benefits that could have accrued from a procompetitive merger. As we have shown in our responses to Questions 6, 8, and 9, these benefits are common and can be economically substantial. Second, but no less important, false positives chill merger activity and discourage future mergers. This also negatively affects the public.

The extent to which chilling merger activity harms the public has, however, been obfuscated by a contrived dichotomy between “the public” and the merging parties, which taints the ACCC’s argumentation and skews its conclusions. The merging parties are also part of society and, therefore, also part of “the public.” An unduly restrictive merger regime that prioritizes avoiding false negatives over false positives harms consumers. But it also harms the “public” more broadly, insofar as anyone could, potentially, have a direct interest in a merger, either as a stakeholder or a party to that merger.

In addition, a regime that requires companies to prove that a deal is not harmful (with the usual caveats about the difficulty of proving a negative) before being allowed to proceed unduly restricts economic freedom and the rights of defense—both of which are very “public” benefits, as everyone, in principle, benefits from them. These elements should also be taken into consideration when weighing the costs and benefits of Type I and Type II errors. That balancing test should, in our view, generally favor false negatives, as argued in our response to Question 6.

Finally, there is no objective, material justification for “[shifting] the default position from allowing mergers to proceed where there is uncertainty [which, by definition, is always present in a forward-looking merger-review process] to a position where, if there is sufficient uncertainty about the effects of a merger, it would not be cleared.” As discussed in our answer to Question 6, the vast majority of mergers are procompetitive, including mergers in the digital sector or mergers that involve digital platforms. This presumption is reflected in the requirement, common across antitrust jurisdictions, that enforcers must make a prima facie case that a merger will be anticompetitive before the merging parties have a duty to respond. There has been no major empirical finding or theoretical revelation in recent years that would justify reversing this burden of proof. Indeed, any change along these lines would be guided by ephemeral political and industrial-policy exigencies, rather than by robust principles of law and economics. In our view, these are not sound reasons for flipping merger review on its head.

In sum, Australian merger control should require that a decision-maker be satisfied that a merger would be likely to substantially lessen competition before blocking it.

F.    Question 18

Should Australia’s substantial lessening of competition test be amended to include acquisitions that ‘entrench, materially increase or materially extend a position of substantial market power’?

According to the ACCC:

Under the current substantial lessening of competition test, it may be difficult to stop acquisitions that lead to a dominant firm extending their market power into related or adjacent markets.[54]

The ACCC imagines this is a problem, particularly in digital markets. Preventing dominant firms from leveraging their market power in one market to restrict competition in an adjacent one is a legitimate concern. We should, however, be clear about what is meant by “materially increase or materially extend a position of substantial market power.”

Merger control should not, as a matter of principle, seek to prevent incumbents from entering adjacent markets. Large firms moving into the core business of competitors from adjacent markets often represents the biggest source of competition for incumbents, as it is often precisely these firms who have the capacity to contest competitors’ dominance in their core businesses effectively. This scenario is prevalent in digital markets, where incumbents must enter multiple adjacent markets, most often by supplying highly differentiated products, complements, or “new combinations” of existing offerings.[55]

Moreover, it is unclear why the SLC test in its current state is insufficient to curb the misuse of market power. The SLC test is a standard used by regulatory authorities to assess the legality of proposed mergers and acquisitions. Simply put, it examines whether a prospective merger is likely to substantially lessen competition in a given market, with the purpose of preventing mergers that increase prices, reduce output, limit consumer choice, or stifle innovation as a result of a decrease in competition.

The SLC test is one of the two major tests deployed by competition authorities to determine whether a merger is anticompetitive, the other being the dominance test. Most merger-control regimes today use the SLC test, and for two good reasons. The first is that, under the dominance test, it is difficult to assess coordinated effects and non-horizontal mergers.[56] The other, mentioned in the Consultation, is that the SLC test allows for more robust effects-based economic analysis.[57]

The SLC test examines likely coordinated and non-coordinated effects in all three types of mergers: horizontal, vertical, and conglomerate. Horizontal mergers may substantially lessen competition by eliminating a significant competitive constraint on one or more firms, or by changing the nature of competition such that firms that had not previously coordinated their behavior will be more likely to do so. Vertical and conglomerate mergers tend to pose less of a risk to competition.[58] Still, there are facts and circumstances under which they can substantially lessen competition by, for example, foreclosing rivals from necessary inputs, supplies, or markets. These outcomes will often be associated with an increase in market power. As the OECD has written:

The focus of the SLC test lies predominantly on the impact of the merger on existing competitive constraints and on measuring market power post-merger.[59]

In other words, the SLC test already accounts for increases in market power that are both capable of harming competition and likely to do so. As to whether the “entrenchment” of market power—in line with the 2022 amendments to Canadian competition law—should be added to the SLC test, there is no reason to believe this is either necessary or appropriate in the Australian context. The 2022 amendments to Canadian competition law mentioned in the Consultation[60] largely align Canada’s merger control with its abuse-of-dominance provision, which prohibits anticompetitive activities that damage or eliminate competitors and that “preserve, entrench or enhance their market power.”[61] But in Australia, Section 46 (the equivalent of the Canadian abuse-of-dominance provision) prohibits conduct “that has the purpose, or has or is likely to have the effect, of substantially lessening competition.” The proposed amendment would thus create a discrepancy between merger control and Section 46, where the latter would remain tethered to an SLC test and the former would shift to a new standard. Additionally, since it remains unclear what the results of Canada’s 2022 merger-control amendments have been or will be, it would be wiser for Australia to adopt a “wait and see” approach before rushing to replicate them.

Lastly, there is the question of defining “materiality” in the context of an increase or entrenchment of market power. Currently, Section 50 prohibits mergers that “substantially lessen competition,” with no mention of materiality.[62] The Merger Guidelines do, however, state that:

The term “substantial” has been variously interpreted as meaning real or of substance, not merely discernible but material in a relative sense and meaningful.[63] (emphasis added)

The proposed amendment follows suit, referring to the concepts of “material increase” and “material extension” of market power. What does this mean? How does a “material increase” in market power differ from a non-material one? In its comments on the American Innovation and Choice Online Act (“AICOA”), the American Bar Association’s Antitrust Law Section criticized the bill for using amorphous terms such as “fairness,” “preferencing,” “materiality,” and the “intrinsic” value of a product. Because these concepts were defined neither in the legislation nor in existing case law, the ABA argued, they injected variability and indeterminacy into how the legislation would be administered.[64] The same argument applies here.

Accordingly, the SLC test should not be amended to include acquisitions that “entrench, materially increase or materially extend a position of substantial market power.”

G.   Question 19

Should the merger factors in section 50(3) be amended to increase the focus on changes to market structure as a result of a merger? Or should the merger factors be removed entirely?

On market structure, see our responses to Question 9 and Question 18.

The merger factors under Section 50(3) already overlap with the factors typically used under the SLC test. These include the structure of related markets; the merger’s underlying economic rationale; market accessibility for potential entrants; the market shares of involved undertakings; whether the market is capacity constrained; the presence of competitors (existing and potential); consumer behavior (the willingness and ability of consumers to switch to alternative products); the likely effect on consumers; the financial investment required for market entry; and the market share necessary for a buyer or seller to achieve profitability or economies of scale.

Similarly, Section 50(3) contains a list of the factors to be considered under the SLC test, including barriers to entry, the intensity of competition on the market, the likely effects on price and profit margins, and the extent of vertical integration, among others. Structural questions, such as the degree of concentration on the market, are also one of the listed factors under Section 50(3).

As a result, it is unclear how eliminating the merger factors would transform the SLC test, or why there should be more emphasis on market structure (on the proper role of market structure in merger-control analysis, see our answers to Question 9 and Question 18).

In sum, Section 50(3) should not be amended to increase the focus on changes to market structure as a result of a merger. Nor is it clear what would be gained from removing the factors in Section 50(3) entirely. More than a “modernization” (as the Consultation calls it),[65] the change appears redundant. If, however, these factors are thought to place a “straitjacket” on courts (though, in principle, they are broad enough to be flexible), they could be removed, so long as merger analysis remains tethered to the SLC test.

H.  Question 20

 Should a public benefit test be retained if a new merger control regime was introduced?

Antitrust law, including merger control, is not a “Swiss Army knife.”[66] Public-interest considerations should generally carry little or no weight in merger analysis, except in the narrow cases prescribed by law (e.g., public-security and defense considerations). Expanding merger analysis to encompass non-competition concerns risks undermining the rule of law, diminishing legal certainty, and harming consumers.

In Australia, the Competition Act currently does not expressly limit the range of public benefits (or detriments) that may be taken into account by the ACCC when deciding whether to block or allow a merger (this includes not limiting them to those that address market failure or improve economic efficiency).[67] Thus, “anything of value to the community generally, any contribution to the aims pursued by the society” could, in theory, be considered a public benefit for the purpose of the public-benefit test.[68] The authorization regime also does not require the ACCC to quantify the level of public benefits and detriments.

Competition authorities are, in principle, ill-suited to rank, weigh, and prioritize complex, incommensurable goals and values against one another. They lack the expertise to meaningfully evaluate political, social, environmental, and other goals. They are independent agencies with a strict, narrow mandate, not political decision makers tasked with redistributing wealth or guiding society forward. Requiring them to consider broad public considerations when deciding on mergers magnifies the risk of discretionary and arbitrary decision making and undercuts legal certainty. This is as true for blocking mergers on the basis of public detriments as it is for allowing them on the basis of public benefits. By contrast, the consumer-welfare standard, which forms the basis of the SLC test, is properly understood as:

Offer[ing] a tractable test that is broad enough to contemplate a variety of evidence related to consumer welfare but also sufficiently objective and clear to cabin discretion and honor the principle of the rule of law. Perhaps most significantly, it is inherently an economic approach to antitrust that benefits from new economic learning and is capable of evaluating an evolving set of commercial practices and business models.[69]

Consequently, we recommend that the public-interest test be jettisoned from merger analysis, or at least very narrowly circumscribed, if a new merger-control regime is introduced in Australia.

I.      Question 24

What is the preferred option or combination of elements outlined above? What implementation considerations would need to be taken into account?

In our opinion, and based on the arguments advanced in this submission, the best options would be as follows:

[1] International Center for Law & Economics, https://laweconcenter.org.

[2] Australian Competition and Consumer Commission v Pacific National Pty Limited [2020] FCAFC 77, [246].

[3] Australian Competition and Consumer Commission v Pacific National Pty Limited [2020] FCAFC 77, [104].

[4] Outline to Treasury: ACCC’s Proposals for Merger Reform, Australian Competition and Consumer Commission (2023), 5, 8, available at https://www.accc.gov.au/system/files/accc-submission-on-preliminary-views-on-options-for-merger-control-process.pdf.

[5] For example, in the EU, 94% of mergers are cleared without commitments, whereas only about 6% are allowed with remedies, and less than 0.5% of mergers are blocked or withdrawn by the parties. See Joanna Piechucka, Tomaso Duso, Klaus Gugler, & Pauline Affeldt, Using Compensating Efficiencies to Assess EU Merger Policy, VoxEU (10 Jan. 2022), https://cepr.org/voxeu/columns/using-compensating-efficiencies-assess-eu-merger-policy.

[6] Consultation, 4; ACCC, 2023: 2, point 8e.

[7] Ronald Coase, The Nature of the Firm, 4(16) Economica 386-405 (Nov. 1937).

[8] Robert Kulick & Andre Card, Mergers, Industries, and Innovation: Evidence from R&D Expenditure and Patent Applications, NERA Economic Consulting (Feb. 2023), available at https://www.uschamber.com/assets/documents/NERA-Mergers-and-Innovation-Feb-2023.pdf.

[9] Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence, 45(3) Journal of Economic Literature 677 (Sep. 2007).

[10] Dario Focarelli & Fabio Panetta, Are Mergers Beneficial to Consumers? Evidence from the Market for Bank Deposits, 93(4) American Economic Review 1152 (Sep. 2003).

[11] B. Espen Eckbo & Peggy Wier, Antimerger Policy Under the Hart-Scott-Rodino Act: A Reexamination of the Market Power Hypothesis, 28(1) Journal of Law & Economics 121 (Apr. 1985).

[12] See, e.g., in the context of tech mergers: Sam Bowman & Sam Dumitriu, Better Together: The Procompetitive Effects of Mergers in Tech, The Entrepreneurs Network & International Center for Law & Economics (Oct. 2021), available at https://laweconcenter.org/wp-content/uploads/2021/10/BetterTogether.pdf.

[13] Geoffrey A. Manne, Error Costs in Digital Markets, in Joshua D. Wright & Douglas H. Ginsburg (eds.), The Global Antitrust Institute Report on the Digital Economy, 33-108 (2020).

[14] Robert H. Mnookin & Lewis Kornhauser, Bargaining in the Shadow of the Law: The Case of Divorce, 88(5) Yale Law Journal 950-97, 968 (Apr. 1979).

[15] See, e.g., in the context of predatory pricing, Paul L. Joskow & Alvin K. Klevorick, A Framework for Analyzing Predatory Pricing Policy, 89(2) Yale Law Journal 213-70 (Dec. 1979).

[16] Manne, supra note 13, at 34, 41.

[17] Id.

[18] Verizon Comm’ns Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398, 414 (2004) (quoting Matsushita Elec. Indus. Co. v. Zenith Radio Corp., 475 U.S. 574, 594 (1986)).

[19] Frank H. Easterbrook, The Limits of Antitrust, 63(1) Texas Law Review 1-40, 2-3, 15-16 (Aug. 1984).

[20] Id. (“Other things equal, we should prefer the error of tolerating questionable conduct, which imposes losses over a part of the range of output, to the error of condemning beneficial conduct, which imposes losses over the whole range of output.”).

[21] Lionel Robbins, Economic Planning and International Order, 116 (1937).

[22] This section is adapted, in part, from Bowman & Dumitriu, supra note 12.

[23] Jason Furman, et al., Unlocking Digital Competition: Report of the Digital Competition Expert Panel (Mar. 2019), 98, available at https://assets.publishing.service.gov.uk/media/5c88150ee5274a230219c35f/unlocking_digital_competition_furman_review_web.pdf (“Furman Review”).

[24] Committee for the Study of Digital Platforms Market Structure and Antitrust Subcommittee Report, Stigler Center for the Study of the Economy and the State (2019), 75, 88, available at https://research.chicagobooth.edu/-/media/research/stigler/pdfs/market-structure—report-as-of-15-may-2019.pdf (“Stigler Report”).

[25] Yves-Alexandre de Montjoye, Heike Schweitzer, & Jacques Crémer, Competition Policy for the Digital Era, European Commission Directorate-General for Competition (2019), 110-112, https://op.europa.eu/en/publication-detail/-/publication/21dc175c-7b76-11e9-9f05-01aa75ed71a1/language-en.

[26] See Sections 3.2 and 6.2.2 of the ACCC’s Digital Platform Services Inquiry report of September 2022, which finds a “high risk of anticompetitive acquisitions by digital platforms,” available at https://www.accc.gov.au/system/files/Digital%20platform%20services%20inquiry.pdf.

[27] Steven Salop, Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits, Georgetown Law Faculty Publications and Other Works 2380 (Apr. 2021), available at https://scholarship.law.georgetown.edu/facpub/2380.

[28] Geoffrey A. Manne, et al., Comments of the International Center for Law & Economics on the FTC & DOJ Draft Merger Guidelines, International Center for Law & Economics (18 Sep. 2023), 38, available at https://laweconcenter.org/wp-content/uploads/2023/09/ICLE-Draft-Merger-Guidelines-Comments-1.pdf.

[29] Ben Sperry, Killer Acquisition or Successful Integration? The Case of the Facebook/Instagram Merger, The Hill (8 Oct. 2020), https://thehill.com/blogs/congress-blog/politics/520211-killer-acquisition-or-successful-integration-the-case-of-the.

[30] Sam Bowman & Geoffrey A. Manne, Killer Acquisitions: An Exit Strategy for Founders, International Center for Law & Economics (Jul. 2020), available at https://laweconcenter.org/wp-content/uploads/2020/07/ICLE-tldr-Killer-acquisitions_-an-exit-strategy-for-founders-FINAL.pdf.

[31] See Colleen Cunningham, Florian Ederer, & Song Ma, Killer Acquisitions, 129(3) Journal of Political Economy 649-702 (Mar. 2021); see also Axel Gautier & Joe Lamesch, Mergers in the Digital Economy, 54 Information Economics and Policy 100890 (2 Sep. 2020).

[32] Marc Ivaldi, Nicolas Petit, & Selçukhan Ünekbas, Killer Acquisitions in Digital Markets May be More Hype than Reality, VoxEU (15 Sep. 2023), https://cepr.org/voxeu/columns/killer-acquisitions-digital-markets-may-be-more-hype-reality (“The majority of transactions triggered increasing levels of competition in their respective markets.”)

[33] Bowman & Dumitriu, supra note 12.

[34] Bowman & Manne, supra note 30.

[35] Jonathan Barnett, “Killer Acquisitions” Reexamined: Economic Hyperbole in the Age of Populist Antitrust, USC Class Research Paper 23-1 (28 Aug. 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4408546.

[36] On the current wave of dystopian thinking in antitrust law, especially surrounding anything “digital,” see Dirk Auer & Geoffrey A. Manne, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and their Origins, 28(4) George Mason Law Review 1281 (9 Sep. 2021).

[37] The response to this question is adapted from Manne, et al., supra note 28.

[38] See, e.g., Harold Demsetz, Industry Structure, Market Rivalry, and Public Policy, 16(1) Journal of Law & Economics 1-9 (Apr. 1973).

[39] See Harold Demsetz, The Intensity and Dimensionality of Competition, in Harold Demsetz, The Economics of the Business Firm: Seven Critical Commentaries 137, 140-41 (1995).

[40] Nathan Miller, et al., On the Misuse of Regressions of Price on the HHI in Merger Review, 10(2) Journal of Antitrust Enforcement 248-259 (28 May 2022).

[41] Steven Berry, Martin Gaynor, & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33(3) Journal of Economic Perspectives 44-68, 48 (2019).

[42] Sharat Ganapati, Growing Oligopolies, Prices, Output, and Productivity, 13(3) American Economic Journal: Microeconomics 309-327, 324 (Aug. 2021).

[43] Id. at 309.

[44] Sam Peltzman, Productivity, Prices and Concentration in Manufacturing: A Demsetzian Perspective, Coase-Sandor Working Paper Series in Law and Economics 917 (19 Jul. 2021).

[45] Timothy F. Bresnahan, Empirical Studies of Industries with Market Power, in Richard Schmalensee & Robert Willig (eds.), Handbook of Industrial Organization, 1011, 1053-54 (1989).

[46] Chad Syverson, Macroeconomics and Market Power: Context, Implications, and Open Questions, 33(3) Journal of Economic Perspectives 23-43, 26 (2019).

[47] Nicolas Petit & Lazar Radic, The Necessity of a Consumer Welfare Standard in Antitrust Analysis, ProMarket (18 Dec. 2023), https://www.promarket.org/2023/12/18/the-necessity-of-a-consumer-welfare-standard-in-antitrust-analysis.

[48] ACCC, 2023: 5.

[49] Chongwoo Choe, Compulsory or Voluntary Pre-Merger Notification? Theory and Some Evidence, 28(1) International Journal of Industrial Organization 10-20 (Jan. 2010).

[50] For an overview of the impact of unnecessary transaction costs in merger notification in the context of Ireland, see Paul K. Gorecki, Merger Control in Ireland: Too Many Unnecessary Notifications?, ESRI Working Paper No. 383 (2011), https://www.econstor.eu/handle/10419/50090.

[51] Consultation, 24.

[52] ACCC, 2023: 9.

[53] Consultation, 29.

[54] Consultation, 19; ACCC, 2023: 6-7.

[55] Nicolas Petit, Big Tech and the Digital Economy: The Moligopoly Scenario (2020); see also Walid Chaiehoudj, On “Big Tech and the Digital Economy”: Interview with Professor Nicolas Petit, Competition Forum (11 Jan. 2021), https://competition-forum.com/on-big-tech-and-the-digital-economy-interview-with-professor-nicolas-petit.

[56] Standard for Merger Review, Organisation for Economic Co-operation and Development (11 May 2010), 6, available at https://www.oecd.org/daf/competition/45247537.pdf.

[57] Id.; see also Consultation, 31, indicating that “[SLC test] would enable mergers to be assessed on competition criteria but not prescriptively identify which competition criteria should be taken into account. It may permit more flexible application of the law and a greater degree of economic analysis in merger decision-making” (emphasis added).

[58] See, e.g., European Commission, Guidelines on the Assessment of Non-Horizontal Mergers Under the Council Regulation on the Control of Concentrations Between Undertakings (2008/C 265/07), paras. 11-13.

[59] OECD, supra note 56, at 16; see also European Commission, Guidelines on the Assessment of Horizontal Mergers Under the Council Regulation on the Control of Concentrations between Undertakings (2004/C 31/03).

[60] Consultation, 30-31.

[61] Canadian Competition Act, Sections 78 and 79.

[62] Section 44G, however, does mention a “material increase in competition” (emphasis added).

[63] ACCC, Merger Guidelines (2008), available at https://www.accc.gov.au/system/files/Merger%20guidelines%20-%20Final.PDF; see also Australia, Senate 1992, Debates, vol. S157, p. 4776, as cited in the Merger Guidelines (2008).

[64] Geoffrey A. Manne & Lazar Radic, The ABA’s Antitrust Law Section Sounds the Alarm on Klobuchar-Grassley, Truth on the Market (12 May 2022), https://truthonthemarket.com/2022/05/12/the-abas-antitrust-law-section-sounds-the-alarm-on-klobuchar-grassley.

[65] Consultation, 39.

[66] Geoffrey A. Manne, Hearing on “Reviving Competition, Part 5: Addressing the Effects of Economic Concentration on America’s Food Supply,” U.S. House Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law (19 Jan. 2021), available at https://laweconcenter.org/wp-content/uploads/2022/01/Manne-Supply-Chain-Testimony-2021-01-19.pdf.

[67] Out-of-Market Efficiencies in Competition Enforcement – Note by Australia, Organisation for Economic Co-operation and Development (6 Dec. 2023), available at https://one.oecd.org/document/DAF/COMP/WD(2023)102/en/pdf.

[68] Re Queensland Co-Op Milling Association Limited and Defiance Holdings Limited (QCMA) (1976) ATPR 40-012.

[69] Elyse Dorsey, et al., Consumer Welfare & The Rule of Law: The Case Against the New Populist Antitrust Movement, 47 Pepperdine Law Review 861 (1 Jun. 2020).
