
More FTC Overreach in Labor Markets

TOTM

The Federal Trade Commission (FTC) and U.S. Labor Department (DOL) signed a memorandum of understanding (MOU) this past week “to strengthen the Agencies’ partnership through greater cooperation and coordination in information sharing, investigations and enforcement activity, training, education, research, and outreach.” The accompanying Sept. 21 announcement is another example of FTC overreach, as it highlights matters that simply are not part of the agency’s mission.



Comments of the International Center for Law and Economics on the FTC & DOJ Draft Merger Guidelines

Regulatory Comments

Executive Summary

We appreciate the opportunity to comment on the Draft Merger Guidelines (Draft Guidelines) released by the U.S. Department of Justice (DOJ or Division) and the Federal Trade Commission (FTC) (jointly, the agencies), Docket No. FTC-2023-0043. Our comments below mirror the structure of the main body of the Draft Guidelines: guidelines, market definition, and rebuttal evidence. Section by section, we suggest improvements to the Draft Guidelines, as well as background law and economics that we believe the agencies should keep in mind as they revise them. Our suggestions include, inter alia, the rescission of some of the draft guidelines and the integration of others.

Much of the discussion around the guidelines focuses on whether enforcement should be more or less strict. But the stringency or rigor of antitrust scrutiny is not a simple dial to turn up or down. For example, what should be done with HHI thresholds? It may seem obvious that lower thresholds allow the agencies to challenge more mergers. In a world with limited agency resources, however, that may not be true. Under the 2010 Horizontal Merger Guidelines, the agencies did not challenge—much less block—all mergers leading to “moderately concentrated” or even “highly concentrated” markets. If we assume, as the Draft Guidelines appear to, that mergers leading to relatively high-concentration markets are generally more likely to be anticompetitive, lowering the thresholds could result in fewer such challenges, because the agencies would necessarily allocate some of their scarce enforcement resources to matters that would not have raised competitive concerns under the thresholds specified in 2010.

Our main recommendations are as follows:

Guideline 1 places increased emphasis on structural presumptions and concentration measures. This rests on the assumption that the economy is becoming more concentrated, that this is problematic, and that lowering the thresholds would help to tackle this problem. But, as our comments explain, this seemingly simple story is not actually so simple. The changes contemplated by guideline 1 thus appear ill-founded. As written, guideline 1 could be used to block mergers without needing to show any actual harms to consumers or sellers/workers. Whether this is the intent or not, the answer should be made explicit. We argue that mergers should not be challenged based on concentration measures alone, given the long-known—but also recently empirically supported—disconnect between concentration measures and competitive harms.

Guideline 2: The guidelines mostly ignore the real distinctions between horizontal and vertical mergers. Guideline 2 is about horizontal mergers, as a footnote suggests, and provides an opportunity to make explicit that horizontal mergers are a distinct category that will be treated differently from vertical mergers, for the reasons underlined by the guideline.

Guideline 6: To the extent that guideline 6 goes beyond what is included in guideline 5, it simply adds additional structural presumptions that are not justified by the law or the economics. In a part of the Brown Shoe decision ignored by the Draft Guidelines, the court wrote that “the percentage of the market foreclosed by the vertical arrangement cannot itself be decisive,” yet guideline 6 would make foreclosure share the basis of just such a decisive structural presumption. This is especially problematic in the context of vertical mergers, where a high “foreclosure share” does not imply any incentive to foreclose. As written, the guideline would treat even highly unprofitable foreclosure as inevitable.

Guideline 8: As concentration is not (by itself) harmful to consumers, neither is a trend toward concentration. As with guideline 1, guideline 8 should make explicit whether the intent is that it be used regardless of any harm to consumers. If an industry has become more concentrated through increased competition—as a large, recent economic literature documents is the norm—will the agencies block a merger that increases concentration but does not increase prices? Guideline 8 is especially problematic when paired with the statement that “efficiencies are not cognizable if they will accelerate a trend toward concentration.” This effectively negates any efficiency defense, since any efficiency will allow a merged party to win a larger share of the market; if those customers come from smaller competitors, concentration will increase.

We conclude by explaining that the Draft Guidelines are not law and that it remains up to the courts whether to follow them. Historically, courts have followed such guidelines because they reflected current legal and economic understanding. These Draft Guidelines, by contrast, seem geared toward pursuing stronger merger enforcement. Rather than reflect current knowledge, the agencies seemingly seek to turn back the clock to an outdated set of policies from which courts, enforcers, and mainstream antitrust scholars have all steered away. The net effect of these problems is to undermine confidence in the agencies.

I.        Guideline 1: Mergers Should Not Significantly Increase Concentration in Highly Concentrated Markets

Draft Guideline 1 of the Draft Merger Guidelines (“Draft Guidelines”)[1] appears to establish a standalone structural presumption[2] that mergers that “significantly increase” concentration in “highly concentrated” markets are unlawful; and it does so using both a lower Herfindahl-Hirschman Index (“HHI”) threshold for highly concentrated markets and a lower change in HHI than those specified in the 2010 Horizontal Merger Guidelines.

Several of these changes are salient. First, the Draft Guidelines replace a threshold HHI for “highly concentrated markets” of 2,500 with one of 1,800. Under the 2010 Guidelines, horizontal mergers that would increase HHI at least 100 points, resulting in an HHI of between 1,500 and 2,500 (inclusive), would be regarded as mergers that “potentially raise significant competitive concerns.” While they might warrant investigation, they would not implicate a structural presumption of illegality.

Second, under the considerably higher thresholds specified in 2010, mergers leading to highly concentrated markets that involved changes in HHI of between 100 and 200 would still be considered among those that “potentially raise significant competitive concerns,” and they would “often warrant scrutiny,” but they would not implicate a presumption of illegality. Only “[m]ergers resulting in highly concentrated markets that involve an increase in the HHI of more than 200 points [would] be presumed to be likely to enhance market power.”

Third, under the 2010 Guidelines, the presumption that a merger was “likely to enhance market power” could be “rebutted by persuasive evidence showing that the merger is unlikely to enhance market power.” Draft guideline 1—even with lower thresholds for change and total market concentration, as measured by HHI—identifies no potential for rebuttal of the presumption.

Fourth, the 2010 Guidelines expressly identify mergers that are “unlikely to have adverse competitive effects and ordinarily require no further analysis”; namely, those involving increases in HHI of less than 100 and those resulting in an HHI less than 1,500. The Draft Guidelines do not identify any such mergers, whether under the 2010 thresholds or otherwise.

Fifth, the 2010 thresholds were specified in the Horizontal Merger Guidelines and, as such, applied to horizontal mergers. Other guidelines and agency practice recognized—correctly—that vertical mergers could raise competition concerns. At the same time, they recognized general distinctions between horizontal, vertical, and other “non-horizontal” mergers, such as “conglomerate mergers,” that are absent in—if not repudiated by—the Draft Guidelines. The lower thresholds and altered presumptions of draft guideline 1 make no mention of horizontal-specific revisions; and, as we discuss below, draft guidelines 5-8 and 10 expressly extend the scope of the Draft Guidelines to vertical and other non-horizontal mergers.
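
To make the comparison concrete, the following sketch computes HHI figures for a hypothetical market and applies the two sets of thresholds described above. The market shares are invented for illustration; the cutoffs are those stated in the 2010 Guidelines and the Draft Guidelines:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

def merger_delta(s1, s2):
    """Change in HHI from merging two firms: (s1 + s2)**2 - s1**2 - s2**2 = 2*s1*s2."""
    return 2 * s1 * s2

# Hypothetical market with seven firms (shares in percent, summing to 100)
shares = [25, 20, 15, 15, 10, 10, 5]
pre = hhi(shares)             # 1,700: "moderately concentrated" under the 2010 Guidelines
delta = merger_delta(15, 10)  # 300-point increase if the 15% and 10% firms merge
post = pre + delta            # 2,000 post-merger

# 2010 Guidelines: presumption only if post-merger HHI > 2,500 and delta > 200
presumed_2010 = post > 2500 and delta > 200   # False: scrutiny, but no presumption
# Draft Guidelines: presumption if post-merger HHI > 1,800 and delta > 100
presumed_draft = post > 1800 and delta > 100  # True: same merger is now presumed unlawful
```

The same hypothetical merger falls outside the 2010 presumption but inside the Draft Guidelines' presumption, which illustrates how substantially the lower cutoffs expand the presumption's reach.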

If the Draft Guidelines’ “basis to presume that a merger is likely to substantially lessen competition” is not meant as a presumption of illegality, is not meant to operate independently of evidence of market power, or is meant to be rebuttable, then revisions should say so. Likewise, if the agencies believe that any category of mergers is unlikely to have adverse competitive effects, and unlikely to require further scrutiny, they should say so.

The Draft Guidelines state that this type of structural presumption provides a highly administrable and useful tool for identifying mergers that may substantially lessen competition. Unfortunately, this reasoning overlooks a crucial aspect of the antitrust apparatus (and of all regulation, for that matter): the error-cost framework. Administrability is a virtue, all things considered, but so is accuracy. Any given merger might be anticompetitive, but most are not, and enforcement should not routinely condemn benign and procompetitive mergers for the sake of convenience. As we explain below, the key insight is that policymakers should always consider antitrust enforcement as a whole. In other words, it is never appropriate to look at certain categories of judicial error in isolation (such as authorities wrongly clearing certain mergers). Instead, the challenge is to determine which set of rules and presumptions minimizes the sum of three social costs: false convictions, false acquittals, and enforcement costs.

When this is properly understood, it becomes clear that false negatives are only one part of the picture. It is equally important to ensure that new guidelines do not inefficiently chill or otherwise impede procompetitive deals. This is where proposals to lower current thresholds and alter existing presumptions run into trouble.

A.      Should Concentration Thresholds Be Lowered?

Draft guideline 1 puts concentration metrics front and center and introduces new structural presumptions. The Draft Guidelines evince a strong skepticism toward concentration that is unwarranted by the economic evidence. Two related sets of questions arise: what, if anything, does the economic evidence say about the new HHI thresholds advanced by the Draft Guidelines? And what does the economic evidence indicate about strong structural presumptions in antitrust analysis?

Should new merger guidelines lower the HHI thresholds? We agree with comments submitted in 2022 by now-FTC Bureau of Economics Director Aviv Nevo and colleagues, who argued against such a change. They wrote:

Our view is that this would not be the most productive route for the agencies to pursue to successfully prevent harmful mergers, and could backfire by putting even further emphasis on market definition and structural presumptions.

If the agencies were to substantially change the presumption thresholds, they would also need to persuade courts that the new thresholds were at the right level. Is the evidence there to do so? The existing body of research on this question is, today, thin and mostly based on individual case studies in a handful of industries. Our reading of the literature is that it is not clear and persuasive enough, at this point in time, to support a substantially different threshold that will be applied across the board to all industries and market conditions. (emphasis added)[3]

Instead of following the economics literature, as summarized above, the Draft Guidelines lower the structural-presumption thresholds and add an additional presumption for cases in which the merged firm’s share exceeds 30% and the HHI increase exceeds 100.
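
A rough sketch shows how this new share-based presumption can bind even where the HHI screens do not. The market shares below are hypothetical; the cutoffs are those stated in the Draft Guidelines:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

# Hypothetical unconcentrated market: the two largest firms (18% and 14%) merge.
shares = [18, 14, 12, 10, 8, 8, 6, 6, 6, 6, 6]
delta = 2 * 18 * 14          # change in HHI = 2 * s1 * s2 = 504
post = hhi(shares) + delta   # 1,072 + 504 = 1,576

# Post-merger HHI stays below the Draft Guidelines' 1,800 threshold for
# "highly concentrated" markets, so the HHI-based presumption does not apply...
hhi_presumption = post > 1800 and delta > 100       # False
# ...but the share-based test triggers anyway: combined share above 30%
# with an HHI increase above 100.
share_presumption = (18 + 14) > 30 and delta > 100  # True
```

In other words, the share-based test extends the structural presumption to mergers in markets that remain, by the Draft Guidelines' own HHI measure, only moderately concentrated.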

One argument for this increased emphasis on structural presumptions and concentration measures is that the economy is becoming more concentrated, that this is problematic, and that lowering the thresholds helps to tackle this problem. The following sections explain why the story is not so simple.

B.      Empirical Trends in Concentration

The first mistake is to suppose that concentration has reached unprecedented levels, that extant levels are generally harmful, and that undue levels of concentration across the economy are due to lax antitrust enforcement. Market concentration is not, in itself, a bad thing; indeed, recent research challenging the standard account demonstrates that much observed concentration is driven by increased productivity, rather than by anticompetitive conduct or anticompetitive mergers. In addition, several recent studies show that local concentration—which is the most likely to affect consumers, and where most competition happens—has been steadily decreasing. In fact, as we show, increased concentration at the national level is itself likely the result of more vigorous competition at the local level. Further complicating matters for the “accepted” story (and exacerbated by these national/local distinctions) is the longstanding problem of drawing inferences about antitrust-relevant markets from national-level concentration metrics.

There is a popular narrative that lax antitrust enforcement has led to substantially increased concentration, strangling the economy, harming workers, and saddling consumers with greater markups in the process. Much of the contemporary dissatisfaction with antitrust arises from a suspicion that overly lax enforcement of existing laws has led to record levels of concentration and a concomitant decline in competition.

However, these beliefs—lax enforcement and increased anticompetitive concentration—wither under scrutiny.

1.        National versus local competition

Competition rarely takes place in national markets; it takes place in local markets. And although national-level firm concentration appears to be growing, the same expansion driving that growth has increased competition and decreased concentration at the local level, which typically is what matters for consumers. The rise in national concentration is predominantly a function of more efficient firms competing in more—and more localized—markets. Rising national concentration, where it is observed, is a result of increased productivity and competition, which weed out less-efficient producers.

This means it is inappropriate to draw conclusions about the strength of competition from national-concentration measures. This view is shared by economists across the political spectrum. Carl Shapiro (former deputy assistant attorney general for economics in the DOJ Antitrust Division under Presidents Obama and Clinton), for example, raises these concerns regarding the national-concentration data:

[S]imply as a matter of measurement, the Economic Census data that are being used to measure trends in concentration do not allow one to measure concentration in relevant antitrust markets, i.e., for the products and locations over which competition actually occurs. As a result, it is far from clear that the reported changes in concentration over time are informative regarding changes in competition over time.[4]

The 2020 report from the President’s Council of Economic Advisers sounds a similar note. After critically examining alarms about rising concentration, it concludes that those alarms lack empirical support, and that:

The assessment of the competitive health of the economy should be based on studies of properly defined markets, together with conceptual and empirical methods and data that are sufficient to distinguish between alternative explanations for rising concentration and markups.[5]

In general, competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not.

The narrative that increased market concentration has been driven by anticompetitive mergers and other anticompetitive conduct derives from a widely reported literature documenting increased national product-market concentration.[6] That same literature has also promoted the arguments that increased concentration has had harmful effects, including increased markups and increased market power,[7] declining labor share,[8] and declining entry and dynamism.[9]

There are good reasons to be skeptical of the national concentration and market-power data on their face.[10] But even more important, the narrative that purports to find a causal relationship between these data and the depredations mentioned above is almost certainly incorrect.

To begin with, the assumption that “too much” concentration is harmful assumes both that the structure of a market is what determines economic outcomes, and that anyone knows what the “right” amount of concentration is. But as economists have understood since at least the 1970s (and despite an extremely vigorous, but futile, effort to show otherwise), market structure is not outcome determinative.[11]

Once perfect knowledge of technology and price is abandoned, [competitive intensity] may increase, decrease, or remain unchanged as the number of firms in the market is increased.… [I]t is presumptuous to conclude… that markets populated by fewer firms perform less well or offer competition that is less intense.[12]

This view is well-supported, and it is held by scholars across the political spectrum.[13] To take one prominent, recent example, professors Fiona Scott Morton (former deputy assistant attorney general for economics in the DOJ Antitrust Division under President Obama), Martin Gaynor (former director of the FTC Bureau of Economics under President Obama), and Steven Berry surveyed the industrial-organization literature and found that presumptions based on measures of concentration are unlikely to provide sound guidance for public policy:

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration.…

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates.[14]

Furthermore, the national concentration statistics that are used to justify invigorated antitrust law and enhanced antitrust enforcement are generally derived from available data based on industry classifications and market definitions that have limited relevance to antitrust. As Luke Froeb (former deputy assistant attorney general for economics in the DOJ Antitrust Division under President Trump and former director of the FTC Bureau of Economics under President Bush) and Greg Werden (former senior economic counsel in the DOJ Antitrust Division from 1977-2019) note:

[T]he data are apt to mask any actual changes in the concentration of markets, which can remain the same or decline despite increasing concentration for broad aggregations of economic activity. Reliable data on trends in market concentration are available for only a few sectors of the economy, and for several, market concentration has not increased despite substantial merger activity.[15]

Agency experience and staff research in the critical area of health-care competition represent a signal example of applying industrial-organization research to policy development and law enforcement. Notably, the underlying research program has provided solid ground for blocking anticompetitive hospital mergers, while militating against structure-conduct-performance (SCP) assumptions in provider mergers. Results suggest, for example, that the “new screening tools (in particular, WTP and UPP) are more accurate than traditional concentration measures at flagging potentially anticompetitive hospital mergers for further review.”[16]

Most important, these criticisms of the assumed relationship between concentration and economic outcomes are borne out by a host of recent empirical studies.

The absence of a link between increased concentration and either anticompetitive causes or deleterious economic effects is demonstrated by a recent, influential empirical paper by Sharat Ganapati. Ganapati finds that the increase in industry concentration in non-manufacturing sectors in the United States between 1972 and 2012 is “related to an offsetting and positive force—these oligopolies are likely due to technical innovation or scale economies. [The] data suggests that national oligopolies are strongly correlated with innovations in productivity.”[17] The result is that increased concentration reflects a beneficial growth in firm size in productive industries that “expand[s] real output and hold[s] down prices, raising consumer welfare, while maintaining or reducing [these firms’] workforces.”[18] Similarly, Sam Peltzman finds that increasing concentration in manufacturing has, on average, been associated with both increased productivity growth and widening margins of price over input costs. These two effects offset each other, leading to “trivial” net price effects.[19]

Several other recent papers look at the data in detail and attempt to identify the likely cause of the observed national-level changes in concentration. Their findings demonstrate clearly that measures of increased national concentration cannot justify increased antitrust intervention. In fact, as these papers show, the reason for apparently increased concentration trends in the United States in recent years appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects appear beneficial. More to the point, while some products and services compete at a national level, much more competition is local—taking place within far narrower geographic boundaries.

By way of illustration, it hardly matters to a shopper in, say, Portland, Oregon, that there may be fewer grocery-store chains nationally if she has more stores to choose from within a short walk or drive from her home. If you are trying to connect the competitiveness of a market and the level of concentration, the relevant market to consider is local. The same consumer, contemplating elective surgery, may search in a somewhat broader geographic area, but one that is still local, not national, and best determined on a merger-by-merger basis.[20]

Moreover, because many of the large firms driving the national-concentration data operate across multiple product markets that do not offer substitutes for each other, the relevant product-market definition is also narrower. In other words, Walmart’s market share in, e.g., “retail” or “discount” retail implies virtually nothing about retail produce competition. In the real world, Walmart competes for consumers’ produce dollars with other large retailers, supermarkets, smaller local grocers, and local produce markets. It also competes in the gasoline market with other large retailers, some supermarkets, and local gas stations. It competes in the electronics market with other large retailers, large electronic stores, small local electronics stores, and a plethora of online sellers large and small—and so forth. For example, when the FTC investigated the Staples/Office Depot merger, it analyzed a far narrower market than simply “office supplies” or “retail office supplies”; it found that general merchandisers such as Walmart, K-Mart, and Target accounted for 80% of office-supply sales in the market for “consumable” office supplies sold to large business customers for their own use.[21]

This conclusion is not mere supposition: In fact, recent empirical work demonstrates that national measures of concentration do not reflect market structures at the local level. Moreover, recent research published by the Federal Reserve Bank of New York concludes that a focus on nationwide trends may be misleading, to the extent that the data omit revenue earned by foreign firms competing in the United States.[22] The authors note that accounting for foreign firms’ sales in the U.S. indicates that market concentration did not increase, but “remained flat” over the 20-year period studied. They argue that increasing domestic concentration was counteracted by increasing market shares associated with foreign firms’ sales.

In a recent paper,[23] the authors look at both the national and local concentration trends between 1990 and 2014 and find that:

  1. Overall, and for all major sectors, concentration is increasing nationally but decreasing locally.
  2. Industries with diverging national/local trends are pervasive and account for a large share of employment and sales.
  3. Among diverging industries, the top firms have increased concentration nationally, but decreased it locally.
  4. Among diverging industries, the opening of a plant by a top firm is associated with a long-lasting decrease in local concentration.[24]

Source: Rossi-Hansberg, et al. (2020)[25]

Importantly, all of the above applies not only to product markets, but to labor markets, as well:

The proportion of aggregate U.S. employment located in all SIC 8 industries with increasing national market concentration and decreasing ZIP code level market concentration is 43 percent. Thus, given that some industries have also had declining concentration at both the national and ZIP code level, 78 percent (or over 3/4) of U.S. employment resides in industries with declining local market concentration.[26]

There are disputes about the sales-concentration data used in this study; some authors argue the data more likely reflect employment concentration than sales concentration.[27] It is well-documented that employment concentration has been falling at the local level.[28]

Instead of relying on NAICS or SIC codes, Benkard, Yurukoglu, & Zhang construct concentration measures that are intended to capture consumption-based product markets.[29] They use respondent-level data from the annual “Survey of the American Consumer” available from MRI Simmons, a market-research firm. The survey asks specific questions about which brands consumers buy. They define 457 product-market categories, separated into 29 locations. Product “markets” are then aggregated into “sectors.” Because they observe which company owns each brand, they can aggregate products to the company level, even when brand names differ.
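
The ownership-aggregation step can be illustrated with a toy calculation. The brand names, owners, and shares below are hypothetical, not drawn from the MRI Simmons data:

```python
# Map each surveyed brand to its owning company, then compute concentration
# at the company level rather than the brand level.
owners = {"BrandA": "Acme", "BrandB": "Acme", "BrandC": "Globex"}
brand_shares = {"BrandA": 30.0, "BrandB": 20.0, "BrandC": 50.0}

company_shares = {}
for brand, share in brand_shares.items():
    company = owners[brand]
    company_shares[company] = company_shares.get(company, 0.0) + share

brand_hhi = sum(s ** 2 for s in brand_shares.values())      # 3,800
company_hhi = sum(s ** 2 for s in company_shares.values())  # 5,000
# Treating commonly owned brands as rivals (brand_hhi) understates
# concentration relative to the ownership-based measure (company_hhi).
```

The point of the aggregation is that a concentration measure built from brand-level shares can misstate market structure whenever one company owns several competing brands.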

If antitrust enforcers want one paper from which to get a sense of aggregate trends, this is the one. Their study approximates antitrust markets more closely than studies that rely on NAICS codes. Against the narrative of the Draft Guidelines, they find falling concentration at the product-market level (the narrowest level), both locally and nationally. At the sector level (which aggregates markets), there is a slight increase.

Source: Benkard, et al (2021)[30]

With any concentration measure, one must define the relevant market. As in any antitrust case, this is not trivial when defining markets to measure concentration for the overall economy. Some work, such as Autor, et al., uses industries with “time-consistent industry definitions.”[31] Other work finds falling concentration, even at the national level, between 2007 and 2017, when the full sample of industries is included.[32]

The main implication of these studies for the merger guidelines is not that one must take a stance on a technical debate in the academic literature, but that such a healthy debate exists, and that it would be unwise to proceed as if the direction of empirical trends were known for certain (or as if the agencies could reverse them).

2.        Larger national firms can lead to less-concentrated local markets

What is perhaps most remarkable about this data is the unique role large firms play in driving reduced concentration at the local level:

[T]he increase in market concentration observed at the national level over the last 25 years is being shaped by enterprises expanding into new local markets. This expansion into local markets is accompanied by a fall in local concentration as firms open establishments in new locations. These observations are suggestive of more, rather than less, competitive markets.[33]

A related paper explores this phenomenon in greater detail.[34] It shows that new technology has enabled large firms to scale production and distribution over a larger number of establishments across a wider geographic space. As a result, these large national firms have grown by increasing the number of local markets they serve, and in which they are relatively smaller players.[35]

What appears to be happening is that national-level growth in concentration is driven by increased competition in certain industries at the local level. “The increasing presence of top firms has decreased local concentration in local markets as the new establishments of top firms gain market share from local incumbents.”[36] The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more and are dominant in fewer industries.
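
The mechanism can be shown with a stylized example; the numbers are invented, and only the direction of the two measures matters. Suppose two equal-sized local markets, each served by two distinct local incumbents, and a national chain that enters both:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

# Before entry: each local market is a 60/40 duopoly of distinct local firms.
local_before = [60, 40]             # local HHI = 5,200
national_before = [30, 20, 30, 20]  # four distinct firms nationally, HHI = 2,600

# After entry: a national chain wins 50% of each local market, and the
# incumbents' local shares scale down proportionally (60/40 -> 30/20).
local_after = [50, 30, 20]             # local HHI = 3,800 (falls)
national_after = [50, 15, 10, 15, 10]  # chain at 50% nationally, HHI = 3,150 (rises)

assert hhi(local_after) < hhi(local_before)        # local concentration falls
assert hhi(national_after) > hhi(national_before)  # national concentration rises
```

A single entry event thus moves the two measures in opposite directions: the national HHI rises because one firm now operates everywhere, while every consumer faces more, not fewer, local choices.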

These results turn the commonly accepted narrative on its head:

  1. First, rising concentration, where it is observed, is a result of increased productivity and competition that weed out less efficient producers. This is emphatically a good thing.
  2. Second, the rise in concentration is predominantly a function of more efficient firms competing in more—and more localized—markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not.
  3. Third, in labor markets, the effect of these dynamics is a reduction in monopsony power: “[T]he industrial revolution in services has implications on the employment of workers of different skills across locations. If labor markets are industry specific and local, the decline in local concentration of employment caused by the entry of top firms should reduce the monopsony power of employers in small markets.”[37]

Another paper takes a similar approach to analyze the effect of increased firm size on labor-market share.[38] In a complete refutation of the popular narrative, it finds that, while the labor-market power of firms appears to have increased, “labor market power has not contributed to the declining labor share because, despite an overall increase in national concentration, we find that… local labor market concentration has declined over the last 35 years. Most local labor markets are more competitive than they were in the 1970s.”[39]

Further studies have corroborated these findings, noting that, on an industry-by-industry basis, the explanatory power of increasing concentration (or increasing firm size) is extremely weak. For example, while Autor, et al. (2020) attribute the purported decline in the labor share of the U.S. economy to the rise of “superstar” firms,[40] Stanford economist Robert Hall shows that the data is far more nuanced. Comparing the employment shares of firms with 10,000 or more workers in the 19 NAICS sectors between 1998 and 2015, Hall finds that:

  1. “In four of the 19 sectors, very high-employment firms declined in importance over the 17-year span of the data. The weighted-average increase across all sectors was only 1.8 percentage points, from 25.3 percent to 27.1 percent. Thus it seems unlikely that rising concentration played much of a role in the general increase in market power.…”; and
  2. “[T]here is essentially no systematic relation between the mega-firm employment ratio… and the ratio of price to marginal cost.… Over the wide range of variation in the employment ratio, sectors with low market power and with high market power are found, with essentially the same average values. There is no cross-sectional support for the hypothesis of higher markup ratios in sectors with more very large firms and thus more concentration in the product markets contained in those sectors.”[41]

3.        It is not clear that industry concentration harms consumers

Economists have been studying the relationship between concentration and various potential indicia of anticompetitive effects—price, markup, profits, rate of return, etc.—for decades. There are, in fact, hundreds of empirical studies addressing this topic. Contrary to some common claims, however, when taken as a whole, this literature is singularly unhelpful in resolving our fundamental ignorance about the functional relationship between structure and performance: “Inter-industry research has taught us much about how markets look… even if it has not shown us exactly how markets work.”[42]

Though some studies have plausibly shown that an increase in concentration in a particular case led to higher prices (although this is true in only a minority share of the relevant literature), assuming the same result from an increase in concentration in other industries or other contexts is simply not justified: “The most plausible competitive or efficiency theory of any particular industry’s structure and business practices is as likely to be idiosyncratic to that industry as the most plausible strategic theory with market power.”[43]

As Chad Syverson recently summarized:

Perhaps the deepest conceptual problem with concentration as a measure of market power is that it is an outcome, not an immutable core determinant of how competitive an industry or market is… As a result, concentration is worse than just a noisy barometer of market power. Instead, we cannot even generally know which way the barometer is oriented.[44]

This does not mean that concentration measures have no use in merger enforcement. It does demonstrate, however, that market concentration is often uninformative for antitrust enforcement, because it is driven by factors that are endogenous to each industry. Enforcers should be careful not to rely too heavily on structural presumptions built around concentration measures, as these may be poor indicators of the instances in which antitrust enforcement is most beneficial to consumers. The Draft Guidelines move in the opposite direction.

4.        Labor-market concentration is falling; should we decrease antitrust attention?

One way to see potential problems with structural presumptions is to consider labor markets. The best data aggregating labor-market concentration finds low and/or falling concentration over recent decades at the local level. Studies that use administrative data from the Longitudinal Business Database find that local labor-market concentration has been declining, while national concentration has been increasing, across various definitions of “local.”[45]

[Figure: local labor-market concentration has declined even as national concentration has risen. Source: Rinz (2022)[46]]

This fall in concentration has happened even as firms’ labor-market power appears to be rising—which, again, illustrates the disconnect between concentration and market power. According to one recent study in the American Economic Review, while the average labor-market power of firms appears to have increased nationally, “despite the backdrop of stable national concentration, we… find that [local concentration] has declined over the last 35 years.”[47]

Another study uses microdata from the Occupational Employment and Wage Statistics, mapped to the Quarterly Census of Employment and Wages, which records quarterly employment levels for each establishment in the United States that reports to state-level unemployment-insurance departments.[48] The authors define labor markets as 6-digit Standard Occupational Classification (SOC) occupations by metropolitan area. They find an average HHI that is relatively stable and low: the employment-weighted employment HHI in the private sector is 0.0331 (on a zero-to-one scale).
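To make the measure concrete, the following minimal Python sketch (the employment figures are hypothetical; the study's actual market definitions and data are far richer) computes per-market HHIs and the employment-weighted average on the same zero-to-one scale:

```python
# Illustrative sketch of an employment HHI (hypothetical data).
# A labor market's HHI is the sum of squared employment shares of its
# employers; the economy-wide figure weights each market's HHI by that
# market's share of total employment.

def hhi(employment):
    """HHI of one market: sum of squared employment shares (0 to 1)."""
    total = sum(employment)
    return sum((e / total) ** 2 for e in employment)

def employment_weighted_hhi(markets):
    """Average of per-market HHIs, weighted by each market's employment."""
    total = sum(sum(m) for m in markets)
    return sum(hhi(m) * sum(m) / total for m in markets)

# Two hypothetical local labor markets (employment by firm).
competitive = [100, 90, 80, 70, 60]   # many similar-sized employers
concentrated = [500, 50]              # one dominant employer

low = hhi(competitive)                # roughly 0.21
high = hhi(concentrated)              # roughly 0.83
avg = employment_weighted_hhi([competitive, concentrated])
```

A market served by a single employer has an HHI of 1.0; a market split evenly among many employers approaches zero, which is why an economy-wide weighted average of 0.0331 indicates generally unconcentrated local labor markets.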

In short, just as we should not point to low (or falling) average concentration as a reason to raise HHI thresholds, we should not point to high (or rising) average concentration as a reason to lower them.

5.        Market structure and innovation

The problem with the focus on market concentration can be seen clearly when looking at innovation. The Draft Guidelines rightly place increased innovation as a procompetitive effect on par with increased output or investment, higher wages or improved working conditions, higher quality, and lower prices.[49]

However, this emphasis on innovation is in tension with the guidelines’ excessive focus on market concentration. How does a market’s structure affect innovation? This crucial question has occupied the world’s brightest economists for almost a century, from Schumpeter (who found that monopoly was optimal)[50] through Arrow (who concluded that competitive market structures were key),[51] to the endogenous-growth scholars (who empirically derived an inverted-U relationship between competition and innovation).[52] Despite these pioneering contributions to our understanding of competition and innovation, there is a growing consensus that no specific market structure is strictly superior at generating innovation. Just as the SCP paradigm ultimately faltered—because structural presumptions were a weak predictor of market outcomes[53]—so too have dreams of divining the optimal market structure for innovation.[54] Instead, in any given case, innovation depends on a plethora of sector- and firm-specific characteristics that range from the size and riskiness of innovation-related investments to regulatory compliance costs, the appropriability mechanisms used by firms, and the rate of technological change, among many others.

Despite this complex economic evidence, several antitrust agencies, including the FTC and the European Commission, believe they have cracked the innovation-market-structure conundrum. Throughout several recent decisions and complaints, these and other authorities have concluded that more firms in any given market will produce greater choice and more innovation for consumers. This could be referred to as the “Structuralist Innovation Presumption.”[55] This presumption notably plays an important role in the FTC’s recent case against Facebook, where the agency argues that:

Competition benefits users in some or all of the following ways: additional innovation (such as the development and introduction of new features, functionalities, and business models to attract and retain users); quality improvements (such as improved features, functionalities, integrity measures, and user experiences to attract and retain users); and consumer choice…[56]

Unfortunately, the Structuralist Innovation Presumption is a misguided heuristic that antitrust authorities around the globe would do well to avoid, as it is at odds with the mainstream economics of innovation.[57]

There is a vast empirical literature examining the relationship between market structure and innovation. While a comprehensive survey of the literature is beyond the scope of our comments, the top-level findings clearly suggest that the relationship between market structure and innovation is not monotonic, and that it depends on several other parameters. For instance, surveying the econometric literature concerning the effect of industry structure on innovation, Richard Gilbert concludes that it is indeterminate:

Table 6.1 summarizes the conclusions from these interindustry studies for the effects of competition and industry structure on innovation. Unfortunately, these studies do not reach a consensus, other than to note that innovation effects can differ dramatically for firms that are at different levels of technological sophistication. Although some studies find a positive relationship between measures of innovation and competition (alternatively, a negative relationship between innovation and industry concentration), others find that the relationship exhibits an inverted-U, with the largest effects at moderate levels of industry concentration or competition, and at least one study reports a negative relationship between competition (measured by Chinese import penetration) and innovation (measured by citation-weighted patents and R&D investment). One consistent finding is that an increase in competition has less of a beneficial effect, and may have a negative effect, on innovation incentives for firms that are far behind the industry technological frontier.[58]

Along similar lines, high-profile studies reach opposite conclusions. For instance, looking at the semiconductor industry, Ronald Goettler and Brett Gordon find that concentrated market structures lead to higher innovation:

The rate of innovation in product quality would be 4.2 percent higher without AMD present, though higher prices would reduce consumer surplus by $12 billion per year. Comparative statics illustrate the role of product durability and provide implications of the model for other industries.[59]

Mitsuru Igami reaches the opposite conclusion while studying the hard-disk-drive industry:

The results suggest that despite strong preemptive motives and a substantial cost advantage over entrants, cannibalization makes incumbents reluctant to innovate, which can explain at least 57 percent of the incumbent-entrant innovation gap.[60]

Looking at the hospital industry, Elena Patel & Nathan Seegert find a negative relationship between competition and investment:

In particular, hospitals in concentrated markets increased investment by 5.1 percent ($2.5 million) more than firms in competitive markets in response to tax incentives. Further, firms’ investment responses monotonically increased with market concentration.[61]

Finally, some of the most universally recognized articles in this field stem from the empirical research of Aghion and coauthors.[62] Their work famously found that the relationship between product-market competition and innovation had an inverted-U shape. Stated differently, increased product-market competition is associated with higher innovative output, up to a point of diminishing returns.[63] According to some, this strand of research warrants a policy of greater antitrust enforcement, relying upon patents to generate ex post profits for innovators.[64]

This conclusion appears somewhat misguided, as Aghion et al.’s seminal paper paints a far more nuanced picture. The authors’ main finding is that product-market concentration has an ambiguous effect on innovation—on average.[65] This last qualification is often omitted in policy discussions. As a result, what is true for the economy as a whole does not necessarily hold on a case-by-case basis. Some comparatively concentrated industries may score highly in terms of innovation, while some moderately concentrated ones do not.[66] In other words, there are several endogenous factors that affect how increased product-market competition will influence innovation in a given case. For example, the authors show that greater product-market competition is more likely to have a positive effect on innovation in industries where firms are technologically “neck and neck” before an innovation takes place (as opposed to those industries where “laggard” firms can innovate to overtake incumbents).[67] In the first case, more competition mostly decreases pre-innovation rents, while in the second case it has a larger effect on post-innovation rents (this is because increased competition would have little to no effect on laggard firms’ pre-innovation rents, which are likely to be small).[68]

The upshot is that empirical economics do not paint a clear or consistent picture of the relationship between market structure and innovation. Antitrust authorities and courts should thus avoid the presumption that more concentrated-market structures hinder innovation to the detriment of consumers.

6.        Market structure and investment: lessons from telecom

As the previous section explained, mergers may lead to diverging price and innovation effects—as increased concentration might sometimes (though certainly not always) increase both market power and innovation output. This is not the only area where price and “non-price” effects may cut in opposite directions. Price competition and investment can also be inversely correlated.

Mergers among mobile-wireless providers provide a rich source of information to evaluate these effects. In a recent paper, ICLE scholars reviewed the sizable empirical literature on this topic, with much of the research focused on so-called “4-to-3” mergers that reduce the number of large, national carriers from four firms to three (though some have also persuasively argued that such a characterization may not be accurate).[69]

Of the 18 studies ICLE reviewed, eight analyzed changes in market concentration across multiple jurisdictions between 2000 and 2015, while 10 analyzed specific mergers. ICLE’s paper also reviewed a more recent study that considered the effects of U.S. market concentration in spectrum ownership on measures of quality.

Of the 10 studies that looked at specific mergers, about half found that short-term prices decreased following a merger, whereas half found that short-term prices increased. Even different studies of the same merger found wildly different effects on short-term prices, ranging from significant price decreases to significant price increases. Thus, looking at these price effects alone, the studies are, collectively, inconclusive.

The ICLE paper identified several reasons for these apparently divergent results, including:

  1. a lack of common measures of prices and price effects across studies;
  2. differences in the time period chosen; and
  3. difficulties accounting for variations in geography, demography, and regulatory regimes among jurisdictions (the latter also creates a potential for endogeneity bias).

Of those studies that considered the effect on long-term investment of such mergers, all found that capital expenditures—a proxy for investment and, presumably, long-term dynamic welfare—increased post-merger.

Indeed, several recent studies that looked more broadly at the effects of market concentration in the mobile-telecommunications industry suggest that increased concentration is correlated with increased investment and may therefore be correlated with greater dynamic benefits. These studies indicate that the highest levels of long-term country-wide investment occurred in markets with three facilities-based operators (though total investment was not significantly lower in markets with four facilities-based operators). In addition, a recent analysis found that U.S. markets with higher concentration of spectrum ownership had faster, more reliable cellular service (reflecting an increase in dynamic welfare effects).

Studies of investment also found that markets with three facilities-based operators had significantly higher levels of investment by individual firms. The implication is that, in such markets, individual firms have stronger incentives to make capital investments that enable long-term competition through expanded infrastructure and technological innovation, which affect the range, quality, and quantity of services provided to consumers. Studies also suggest this effect may be strengthened when the merger results in a more symmetrical market structure (i.e., the various facilities-based providers become more equal in market share). It is argued that increases in the number of competitors in asymmetric markets lead to disproportionately lower levels of investment by smaller firms. Thus, a merger between two smaller firms that results in greater market symmetry could result in higher levels of investment by the merged firms relative to the unmerged entities.

The results of ICLE’s review indicate that a merger that involves products or firms that compete along a variety of dimensions, in addition to price, must evaluate the effects of the merger across these dimensions, as well. In addition, relying on past empirical research to evaluate a current merger may overlook economic, technological, or regulatory changes that diminish the reliability of past experience to inform current events. This review of mobile-wireless-provider mergers reveals a number of factors that should be considered when seeking to understand the likely welfare effects of a given merger. These include:

  1. Whether the effects to be evaluated are limited to static price effects or also include qualitative measures, such as capital expenditures and other investment in quality of service, suggesting dynamic innovation effects;
  2. The timeframe over which the effects are evaluated;
  3. The effects on different tiers of service, especially those measured by hypothetical consumption profiles (known as “baskets” in mobile-wireless-provider mergers);
  4. The extent to which the effects of previous mergers may confound projected effects of the merger at hand; and
  5. Whether a transaction occurs during, or even as part of, a transition between different generations of technology (e.g., during an upgrade from 3G to 4G networks).

Further, it is well-known that process and product innovation does not arise solely from new entry; incumbent firms frequently are important sources of innovation, as well as of increased market competitiveness.[70] Dynamic analysis takes entry seriously, but it is much more sensitive to potential entry as a constraint on incumbents than a structuralist view would permit. Thus, for example, an incumbent mobile-wireless provider that offers wide coverage of 4G service must consider the potential capabilities of an existing competitor that currently has only sparse 4G coverage; it must incorporate potential threats from that competitor in its decision matrix when evaluating whether to upgrade its network to 5G in order to retain its customer base. An incumbent’s dominant position can quickly erode thanks to imperfect in-market substitutes, as well as from out-of-market firms that may decide to enter in the future.[71]

When evaluating the merits of a merger, authorities are charged with identifying the effects on the welfare of consumers. Crucially, this analysis must consider not only short-term price effects, but also long-term and dynamic effects, particularly in markets (like mobile telecommunications) in which competition occurs over both price and innovation. Based on the studies that we reviewed, 4-to-3 mergers appear to generate net long-term benefits to consumer welfare in the form of increased investment (presumably—although not conclusively, based on these studies—resulting in increased innovation), while the short-term effects on price are resolutely inconclusive.

II.      Guideline 2: Mergers Should Not Eliminate Substantial Competition Between Firms

While it is reasonable to consolidate the horizontal and vertical merger guidelines into one document, the draft essentially writes away the distinction between them. Footnote 30 suggests that Guideline 2 is about horizontal unilateral effects. If so, the application of the guideline to horizontal mergers specifically should be made explicit. Otherwise, readers are left with the impression that the Draft Guidelines intentionally avoid specificity, perhaps hoping to enhance the agencies’ prosecutorial discretion. That would be problematic, notwithstanding the possibility of line-blurring cases. In brief, a significant body of economic literature and judicial precedent recognizes the competitive importance of the distinction, and requires that the agencies treat horizontal and vertical mergers differently.

As Aviv Nevo and colleagues summarized, the distinction is especially important when thinking about efficiencies and other potential merger benefits:

Applying the same sort of skepticism about efficiencies in a vertical merger as in a horizontal merger can amount to assuming away a portion of the economics that is at the heart of the vertical investigation.

One clear example of this dual nature of vertical theories is the model of linear pricing, which generates a raising rivals’ cost incentive and also generates a potential procompetitive incentive in the form of elimination of double marginalization (“EDM”). Not every merger will present facts that fit this particular model. But, if that model is the basis of an investigation, its full range of implications should be considered.[72]

By rejecting—or implying a rejection of—a general distinction between horizontal and vertical mergers, the Draft Guidelines effectively enact a “horizontalization” of merger enforcement. The following subsection explains the importance of explicitly delineating horizontal and vertical mergers at certain points in the Draft Guidelines.

A.      Horizontal Mergers Are Different Than Vertical Mergers

Antitrust merger enforcement has long relied on a fundamental distinction between horizontal and vertical mergers (or horizontal and vertical theories of harm, to be more precise). Policymakers widely assume the former are more likely to cause problems for consumers than the latter. However, this distinction increasingly has been challenged by some antitrust scholars and enforcers. In recent years, antitrust authorities on both sides of the Atlantic—and several high-profile scholars—have put forward theories of harm that obscure the traditional distinctions among horizontal, vertical, and conglomerate mergers. This is epitomized by an alarmist 2020 article by Cristina Caffarra and co-authors that portrays nearly all tech mergers as horizontal, based on the supposition that, but for the acquisition, one of the merging firms likely would launch its own competing vertical product.[73] But the claim seems manifestly implausible, and the paper offers no evidence in its support. Of course, in a given case, under specific facts and circumstances, a large, diversified tech firm might consider or achieve entry into a vertical market. But a possibility under some facts and circumstances is a far cry from a general likelihood. The implication of this (and other) research is that mergers between firms that are either vertically related or active in unrelated markets routinely or typically have significant horizontal effects.[74] This can be the case either when merging firms are potential competitors or when they compete in innovation markets (i.e., they have overlapping R&D pipelines, or may have them in the future).[75]

These concerns are compounded in the digital economy, where ostensibly non-competing firms may become competitors on one side of their platforms. For instance, it has been argued that Giphy, which offers a library of gif files, may ultimately compete with Facebook in ad markets.[76] Similarly, it has been claimed that Google’s acquisition of Fitbit—a producer of wearable health-monitoring devices—raises horizontal theories of harm, because Google would otherwise have developed its own wearable devices.[77] Such hypotheticals are sometimes deemed to be “reverse killer acquisitions,” on grounds that acquiring a rival enables the incumbent to avoid producing the good itself. Endorsing this approach to merger review wholeheartedly would have profound policy ramifications. Indeed, should authorities assume the counterfactual to a merger is that the acquirer will compete with the target directly, then every merger effectively becomes a horizontal one.

The influence of this research can be seen in the FTC’s loss in its bid to block Meta’s acquisition of Within Unlimited, as well as in the ongoing case against Meta, which centers on the company’s acquisitions of WhatsApp and Instagram.[78] In the Within case, the FTC sought to recast a vertical merger (software and hardware) as a horizontal merger between potential competitors. The court was unwilling to accept the claim that, if the Within deal were blocked, Meta would likely develop its own VR fitness app to compete against Supernatural. Meta had no such product poised to enter the market, or even in late-stage development. The contingent probability of timely, competitively significant entry—inherent in a potential-competition case—was simply too small and speculative to conclude that Meta was a potential competitor. That conclusion was further undermined by internal emails suggesting that Meta partner with Peloton instead—an idea that gained so little traction that it was never even raised with Peloton.

At the time of the WhatsApp and Instagram acquisitions, competition authorities around the world tended to analyze them (and the potential theories of harm they might give rise to) primarily as vertical. For instance, looking at Facebook’s purchase of WhatsApp, the European Commission concluded that “while consumer communications apps like Facebook Messenger and WhatsApp offer certain elements which are typical of a social networking service, in particular sharing of messages and photos, there are important differences between WhatsApp and social network services.” This suggested the merging firms were likely active in separate markets.[79] The FTC’s clearance of that deal suggests that the agency largely adhered to the view that the merging entities were not close competitors.[80] Similarly, when the UK CMA reviewed Facebook’s acquisition of Instagram, it concluded that the two firms exercised only weak competitive constraints on each other:

To conclude, there are several relatively strong competitors to Instagram in the supply of camera and photo editing apps, and those competitors appear at present to be a stronger constraint on Instagram than Facebook’s new app.[81]

Reevaluating these deals almost a decade later, the FTC reached a diametrically opposite conclusion. In its Facebook complaint, the agency concluded that:

Failing to compete on business talent, Facebook developed a plan to maintain its dominant position by acquiring companies that could emerge as or aid competitive threats. By buying up these companies, Facebook eliminated the possibility that rivals might harness the power of the mobile internet to challenge Facebook’s dominance….

…As Instagram soared, Facebook’s leaders began to focus on the prospect of acquiring Instagram rather than competing with it….

…In sum, Facebook’s acquisition and control of WhatsApp represents the neutralization of a significant threat to Facebook Blue’s personal social networking monopoly, and the unlawful maintenance of that monopoly by means other than competition on the merits.[82]

While this change of heart could be characterized as the agency updating its position in light of new evidence concerning the nature of competition between the merging firms, there is also a clear sense that times have changed. Indeed, both antitrust agencies and scholars appear more willing to assume (i) that firms could become competitors absent a merger, and (ii) that mergers between them are likely to reflect efforts by the acquirer to anticompetitively maintain its market position. We address both these claims in the subsequent sections.

The most important difference between a horizontal merger and a vertical merger is the merging parties’ relationships with each other. A horizontal merger is between firms that compete in the same product and geographic market. A vertical merger is between firms with an upstream-downstream (e.g., seller-buyer) relationship. These distinctions are well-known and widely accepted. There has been no economic trend that would justify a redefinition of these distinctions.

Drawing on an example provided by Steve Salop, consider a hypothetical orange-juice market with firms that manufacture and engage in the wholesale distribution of orange juice, as well as firms that own the orchards that supply the oranges to be juiced.[83] A merger between manufacturer/wholesalers would be a horizontal merger; a manufacturer/wholesaler’s purchase of a firm owning orchards would be a vertical merger.

A horizontal merger removes a competing firm from the market and thereby eliminates substitute products or firms that produce the products.[84],[85] By definition, horizontal mergers reduce competition, but the attendant harm to consumers may be large, small, or infra-marginal, depending on the facts and circumstances of a given merger; and any consumer harms may be offset by benefits, such as economies of scale and other efficiencies.[86]

In contrast, in most cases, a vertical merger does not eliminate a competing firm from the market and does not involve substitutes.[87] In fact, vertical mergers typically involve complements, such as a product plus distribution or a critical input to a complex device.[88] In Salop’s orange-juice hypothetical, the manufacturer juices oranges, cans the juice, and operates a wholesaling operation to sell the canned juice to retailers. In this example, the wholesaling operation is a complement to the manufacturing process.

Although not necessarily “by definition,” vertical mergers are, in most cases, undertaken to achieve efficiencies and reduce costs. Through the elimination of double marginalization, for example, and the resulting downward pressure on prices, vertical mergers are more likely than horizontal mergers to improve competition.[89]

In a statement during the 2018 FTC hearings, FTC Commissioner Christine Wilson concluded that “we know that competitive harm is less likely to occur in a vertical merger than in a horizontal one,” and echoed some of Hoffman’s points:[90]

[I]n contrast to horizontal guidelines, the economics in vertical mergers indicate efficiencies are much more likely. Professor Shapiro went so far as to call them “inherently” likely at our hearing. Given this dynamic, it may be appropriate to presume that certain vertical efficiencies are verifiable and substantial in the absence of strong evidence to the contrary, even if we would not do so in a horizontal merger case.[91]

The economics of horizontal mergers comprises a long, well-established literature of theoretical models and empirical research. In contrast, there are fewer quantitative theoretical models that can be used to predict outcomes in vertical mergers. Moreover, those models that do exist have a far shorter track record than those used to assess horizontal mergers.[92]

Naturally, the real world is much more complicated. For example, Salop points out that some mergers involve firms that are already vertically integrated prior to the merger.[93] In these cases, the merger would involve both vertical and horizontal elements. Such mergers may lead to horizontal and vertical efficiencies that reinforce each other. They also may lead to horizontal and vertical harms that reinforce each other. Or they may lead to a mix of horizontal and vertical efficiencies and harms that counteract each other. That may explain why empirical research on vertical mergers, discussed below, can sometimes yield wildly different results—even when using seemingly similar sets of data.

To be sure, there are no economic trends that would lead one to revisit the distinction between horizontal and vertical mergers. Nevertheless, there have been advances in economic theory that have led some to conclude that vertical mergers may not be as beneficial as once thought or that they may lead to anticompetitive consumer harm.

Some critics of the current state of vertical-merger enforcement assert that a vertical merger can effectively become a horizontal merger—or have horizontal effects. If that is the case, the argument goes, vertical mergers should be evaluated in the same way as horizontal mergers. According to Salop, “[f]or the type of markets that are normally analyzed in antitrust, the competitive harms from vertical mergers are just as intrinsic as are harms from horizontal mergers.”[94] Thus, a vertically integrated firm faces an “intrinsic incentive”[95] to foreclose downstream competition “by raising the input price it charges to the rivals of its downstream merger partner” in the same way that horizontal firms face “inherent upward pricing pressure from horizontal mergers in differentiated products markets, even without coordination.”[96]

In an implicit acknowledgement of the distinction between horizontal and vertical mergers, Salop describes the competition between an upstream firm and a downstream partner as indirect: “the upstream merging firm that supplies a downstream firm is inherently an ‘indirect competitor’ of the future downstream merging firm. That indirect competition is eliminated by merger. This unilateral effect is exactly parallel to the unilateral effect from a horizontal merger.”[97]

But the two are not “exactly parallel,” of course, because indirect competition is different from direct competition—Salop himself makes the distinction. Even in Salop’s telling, the mechanism by which his vertical-leads-to-horizontal theory operates requires that (1) the upstream firm has market power and (2) post-merger, the merged firm forecloses supply or raises costs to the downstream firm’s horizontal rivals. While this is possible, it is not a necessary consequence of the transaction; and the risk of competitive harm, at the very least, must be a function of both the likelihood and degree of foreclosure. The presence of downstream horizontal competitors operates as an immediate and present constraint on the vertically integrated merged firm.

It may be helpful to explain using Salop’s orange-juice hypothetical:

Company A is a manufacturer and wholesale supplier of orange juice to retailers. It seeks to acquire Company B, an owner of orange orchards.… The merged firm may find it profitable to raise the price or cease supplying oranges to one or more rival orange juice suppliers.… This input foreclosure may lessen competition in the wholesale orange juice market, for example, by raising the price or reducing the quality of some or all types of orange juice.[98]

This is an excellent example because it highlights how complex even a straightforward hypothetical of raising rivals’ costs can get. Under the standard formulation, the vertically integrated firm would produce oranges at the orchard’s marginal cost—in theory, the price it pays for oranges would be the same both pre- and post-merger. Under this theory, if the vertically integrated orchard does not sell its oranges to the non-integrated manufacturer/wholesalers, then the other non-vertically integrated orchards will be able to charge a price greater than their marginal cost of production and greater than the pre-merger market price for oranges. The higher price of oranges used by non-integrated manufacturer/wholesalers will then be reflected in higher prices for orange juice sold by the manufacturer/wholesalers.

The merged firm’s juice prices will be higher post-merger because its unintegrated rivals’ juice prices will be higher, thus increasing the merged firm’s profits. The merged firm and unintegrated orchards would be the “winners”; unintegrated manufacturer/wholesalers and consumers would be the “losers.” Under a consumer welfare standard, the result could be deemed anticompetitive. Under a total welfare standard, the result is ambiguous.
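The pre- and post-merger arithmetic can be made concrete with a stylized numerical sketch. All figures here are hypothetical, chosen only to illustrate the standard raising-rivals’-costs mechanics described above:

```python
# Stylized raising-rivals'-costs arithmetic (all figures hypothetical).
# One unit of juice requires one orange plus a fixed processing cost.
orange_mc = 1.00   # orchards' marginal cost per orange (assumed)
processing = 0.50  # wholesaler's per-unit processing cost (assumed)

# Pre-merger: competitive orange supply prices at marginal cost.
pre_orange_price = orange_mc
pre_juice_cost = pre_orange_price + processing  # 1.50 for every wholesaler

# Post-merger foreclosure: the integrated firm stops supplying rivals,
# letting unintegrated orchards price above marginal cost.
post_rival_orange_price = 1.30                           # assumed post-foreclosure price
rival_juice_cost = post_rival_orange_price + processing  # 1.80 for unintegrated rivals
merged_juice_cost = orange_mc + processing               # internal transfer at MC: 1.50

# If juice pricing tracks the (now higher) rivals' cost, consumers pay
# more, and the merged firm earns a margin it did not earn pre-merger.
post_juice_price = rival_juice_cost  # simplest competitive benchmark
merged_margin = round(post_juice_price - merged_juice_cost, 2)

print(pre_juice_cost, post_juice_price, merged_margin)  # 1.5 1.8 0.3
```

Note how the result turns entirely on the assumed post-foreclosure orange price: if rival orchards cannot sustain a price above marginal cost, or the merged firm undercuts them, the margin disappears.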

But this classic example of raising rivals’ costs is based on some strong assumptions. It assumes that, pre-merger, all upstream firms price at marginal cost, which means there is no double marginalization. It assumes all the upstream firm’s products are perfectly identical. It assumes unintegrated firms do not respond by integrating themselves. If one or more of these assumptions is not correct, more complex models—with additional (potentially unprovable) assumptions—must be employed. What begins as a seemingly straightforward theoretical example is now a model-selection problem: which economic models best fit the facts and best predict the likely outcome.

In Salop’s example, it is assumed the merged firm would raise the price of oranges or refuse to sell them to rival downstream wholesalers. However, if rival orchards charge a sufficiently high price, the merged firm would profit from undercutting its rivals’ orange prices, while still charging a price greater than its own marginal cost. Thus, it is not obvious that the merged firm has an incentive to cut off supply to downstream competitors or to charge a higher price. The extent of the merged firm’s incentive to cheat on its own foreclosure strategy is an empirical matter that depends on how upstream and downstream firms will or might react. Depending on how other manufacturer/wholesalers and orchards react, the merged firm’s attempt at foreclosure may have no effect, in which case there would be no harm to competition.

The hypothetical also assumes that commercial juicing is the only use for oranges and that juice oranges are the only thing that can be produced by citrus groves. It is possible that, rather than raising prices or foreclosing competitors, the merged firm would divert some or all of its juice oranges to a “secondary” market, such as the retail market for those who juice at home. It also could convert groves used to grow juice oranges to the production of strains of oranges and other citrus fruits that are sold as fresh produce. Indeed, fresh citrus fruits currently account for 10% of Florida’s crop and 75% of California’s.[99] This diversion would reduce the supply of juice oranges, and the price of this key input would rise.

This strategy would raise the merged firm’s costs along with its rivals’. Moreover, rival orchards could respond to this strategy by diverting their own groves from the production of fresh-produce citrus to the juice market, in which case there may be no significant effect on the price of juice oranges. What begins as a seemingly straightforward theoretical example is now a complicated empirical matter, one that raises the antitrust question of whether selling into a “secondary” market constitutes anticompetitive conduct.

Moreover, the merged firm may have legitimate business reasons for the merger and legitimate business reasons for reducing the supply of oranges to juice wholesalers. For example, “citrus greening,” an incurable bacterial disease, has caused severe damage to Florida’s citrus industry, significantly reducing crop yields.[100] A vertical merger could be one way to reduce supply risks. On the demand side, an increase in the demand for fresh oranges would guide firms to shift from juice and processed markets to the fresh market. What some would see as anticompetitive conduct, others would see as a natural and expected response to price signals.

Furthermore, it is not actually the case that the incentive to foreclose downstream rivals is “intrinsic,” nor is it the case that the effect is necessarily deleterious. In fact, as we discuss below, even when foreclosure can be shown, empirical evidence indicates that the consumer benefits from efficiencies tend to be greater than the harms from foreclosure.

A key difference between horizontal and vertical mergers is that any efficiency gains from a horizontal merger are not automatic and must be established. On the other hand, the realization of certain vertical-merger efficiencies, at least from the elimination of double marginalization, is automatic.[101] And, of course, additional merger benefits may be established for any given vertical merger.

The logic is simple: Potentially welfare-reducing vertical mergers are those that involve an upstream firm with market power. Thus, pre-merger, all downstream firms bear presumptively higher input costs. To realize their own profits, they must raise final-product prices to consumers even further.[102] But after the merger, the merged downstream entity no longer pays the markup. As a result, it “enjoys lower input costs and thus increases its output, thereby increasing welfare.”[103] At the same time, of course, non-merged downstream firms bear a higher input price, and it is an empirical question whether the net consumer-welfare effect will be positive or negative. What is not in question is that the two effects operate simultaneously, and that the reduction of double marginalization necessarily occurs. Indeed, it is most likely to arise, and to lead to net consumer-welfare benefits, precisely where there is the greatest potential for anticompetitive price increases to downstream rivals.[104]
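The elimination of double marginalization can be illustrated with the textbook linear-demand model. This is a generic sketch of the standard derivation, with hypothetical parameter values, not a claim about any particular market:

```python
# Textbook double-marginalization sketch with linear demand P = a - Q.
# An upstream monopolist (marginal cost c) sells to a downstream monopolist.
a, c = 10.0, 2.0  # hypothetical demand intercept and upstream marginal cost

# Unintegrated: the downstream firm, facing wholesale price w, sets
# P = (a + w) / 2. The upstream firm anticipates this and sets w = (a + c) / 2.
w = (a + c) / 2               # wholesale price: 6.0, well above c = 2.0
p_unintegrated = (a + w) / 2  # retail price: 8.0 (two stacked markups)

# Integrated: a single firm with marginal cost c sets P = (a + c) / 2.
p_integrated = (a + c) / 2    # retail price: 6.0 (one markup)

q_unintegrated = a - p_unintegrated  # 2.0 units sold
q_integrated = a - p_integrated      # 4.0 units sold: output expands

print(p_unintegrated, p_integrated)  # 8.0 6.0
```

Consumers pay less (6 rather than 8) and buy more (4 units rather than 2) after integration, because one of the two stacked markups disappears.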

All else being equal, the effect of removing a horizontal competitor by merger is automatic: less competition. That isn’t necessarily bad. It may be offset, and it may also enable innovation, more competition, or other results that benefit consumers. But in the first instance, former head-to-head competitors that merge are no longer competing. With vertical mergers, however, the effect is not automatically to reduce competition (indirect, potential, or otherwise). A vertically integrated firm might (or might not) choose to hurt unaffiliated downstream competitors by more than it benefits its integrated downstream firm—that might (or might not) be feasible and advantageous—but nothing is automatic. Assessing the competitive effect of such a merger necessarily means incorporating an added layer of uncertainty, complexity, and distance between cause and effect. In the absence of a few particular, tenuous, and stylized circumstances, “[i]n this model, vertical integration is unambiguously good for consumers.”[105]

In response, proponents of invigorated vertical-merger enforcement argue, in part, that:

[T]he claim that vertical mergers are inherently unlikely to raise horizontal concerns fails to recognize that all theories of harm from vertical mergers posit a horizontal interaction that is the ultimate source of harm. Vertical mergers create an inherent exclusionary incentive as well as the potential for coordinated effects similar to those that occur in horizontal mergers.[106]

But this fails to resolve anything. Moreover, the “analogy with horizontal mergers is misleading.”[107] It is uncontroversial (and far from “[un]recognized”) that “all theories of harm from vertical mergers posit a horizontal interaction that is the ultimate source of harm.”[108] All this says is that there could be harm of the sort that horizontal mergers might cause. But it does not acknowledge that the likelihood and extent of that harm are different in the vertical and horizontal contexts. Nor does it note that the mechanism by which harm might arise is different and more complex in the vertical case. All in all, the probability of that outcome is lower in the case of a vertical merger, where it is dependent on an additional step that may or may not arise and that may or may not cause harm.

III.    Guideline 4: Mergers Should Not Eliminate a Potential Entrant in a Concentrated Market

The wording of the guideline should be changed to reflect the fact that we are dealing with probabilities, as the body of the guideline makes clear. “Mergers should not eliminate a potential entrant with probable future entry in a concentrated market” would more closely match the body of the guideline.

The distinction between 4.A and 4.B should be eliminated. The only way for a potential entrant to exert competitive pressure is if the current competitors perceive the potential entrant to be a threat. Are the agencies claiming otherwise? Are there firms that no current competitors think about yet somehow still exert competitive pressure on the market? If the agencies mean as much, it should be explicit.

One difficulty with treating all potential competitors like actual competitors is that it assumes that all vertically related (or even non-related) firms could eventually threaten the acquiring incumbent. In other words, potential competition from a particular firm is probabilistic, with the likelihood varying according to the facts and circumstances of the individual case. This forces agencies to make complex assessments regarding the potential future evolution of competition. Beyond stating that “for mergers involving one or more potential entrants, the higher the market concentration, the lower the probability of entry that gives rise to concern,” the guidelines do not offer guidance about how the relevant probabilities will be assessed.

A.      Potential Competition Is Inherently Probabilistic

The uncertainty involved in any merger involving a potential competitor has important ramifications for policymaking. Anticompetitive mergers are, by definition, possible (under the above theories) only when the acquired rival could effectively challenge the incumbent.[109] But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.[110]

A first important consequence is that, while potential competitors are important constraints on existing markets, they do not generally offer the same degree of constraint as actual competitors.[111] As such, any analysis of a merger involving a potential competitor would have to assess and incorporate the probability of competition.[112] High-quality analyses of the effects of potential competition are few and far between but, according to at least one literature review, a potential competitor may have between one-eighth and one-third the effect on competition of an actual competitor.[113] Likelihoods may vary by industry, product category, and the specific facts and circumstances of the product market and firms at issue. The strength of this competitive constraint also depends on the firms’ perceptions: If both the incumbent and the rival heavily discount the probability of entry, then potential competition is unlikely to affect their behavior.[114]

This leads to a second important issue. Because the loss of a potential competitor will, in expectation, lead to less harm than that of an actual competitor, it is crucial that agencies tailor their responses accordingly. While the traditional remedies for anticompetitive horizontal mergers include divestments or outright prohibition, these remedies may no longer be appropriate in the face of potential competition theories of harm (although such remedies might sometimes remain necessary to fully remove potential anticompetitive harm). Decisionmakers should look at mergers from a cost/benefit standpoint, which, in turn, counsels weighing anticompetitive harms against procompetitive benefits. Because one would expect anticompetitive harms in potential-competition cases to be only a fraction of those in actual-competition cases, there is—all else being equal—a higher likelihood in the former that efficiencies will outweigh harms.
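The cost/benefit point can be illustrated with back-of-the-envelope expected values, using the one-eighth to one-third range cited above. The harm and efficiency magnitudes are purely hypothetical:

```python
# Expected-harm weighting for potential vs. actual competitors.
# Hypothetical units of competitive harm from losing an actual competitor:
harm_if_actual = 100.0

# Per the literature-review range cited above, a potential competitor may
# exert roughly 1/8 to 1/3 of an actual competitor's constraint.
expected_harm_low = harm_if_actual / 8   # 12.5
expected_harm_high = harm_if_actual / 3  # ~33.3

# Hypothetical merger efficiencies, in the same units:
efficiencies = 50.0

# The same efficiencies that fail to offset the loss of an actual
# competitor can outweigh the expected loss of a potential one,
# even at the top of the range.
net_actual = efficiencies - harm_if_actual             # -50: harm dominates
net_potential = efficiencies - expected_harm_high      # ~16.7: benefits dominate

print(round(net_actual, 1), round(net_potential, 1))  # -50.0 16.7
```

The point is not the particular numbers but the structure: discounting harm by the probability of entry mechanically raises the likelihood that efficiencies outweigh expected harm.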

It is not clear how this can be addressed in terms of remedies: neither divestitures nor prohibitions can realistically be made probabilistic or conditioned on future market outcomes, as firms could easily game this. At the very least, this probably means judges should set a high evidentiary bar for claims that a merger will reduce potential competition, and agencies should, at the margin, focus more heavily on traditional theories that involve more tangible risks of consumer harm.

This restrained approach to enforcement is—perhaps surprisingly, given the agency’s generally interventionist track record in digital markets—encapsulated by the European Commission’s stance in the Google/Fitbit merger, which many sought to frame as a potential-competition case. Instead, the Commission found that:

As regards Fitbit’s ability to compete in innovation with regard to smartwatches, the Commission notes that [Fitbit’s product strategy], there are also no competitive relationships that would lead to the Transaction reducing Google’s incentives to innovate in the future. Based on the Notifying Party’s submission, the Commission considers that there is no possible market assessed in this Decision where Fitbit is the only or main source of pressure on Google to innovate. For these reasons, the Commission considers that the Transaction would not unduly restrict competition in… innovation as regards the supply of smartwatches. This issue will, therefore, not be further discussed in this Decision.[115]

Reviews of mergers that involve potential competitors require agencies to make speculative assessments as to how competition will likely play out in a given market. Absent the ability to condition remedies on these future evolutions, error-cost considerations will often dictate that authorities clear mergers, despite a limited risk of future competitive harm.[116] Failing this, agencies and courts should, at the very least, set a high evidentiary bar for plaintiffs to bring forward such claims, or else numerous mergers will wrongly be prohibited as anticompetitive, to the detriment of consumers.

B.      Buying Up Every Potential Competitor Is Unlikely to Be a Successful Business Strategy

One cannot simply assume that mergers involving potential competitors are harmful. A common emerging theory of harm holds that non-horizontal acquisitions are, in fact, horizontal acquisitions in disguise. This is a form of the “horizontalization” discussed above. The acquired party may not be a direct competitor today but may become one in the future. Therefore, the theory goes, the incumbent will acquire a company that does not appear to be a competitor in order to reduce the competitive pressure it would otherwise face in the future.

This argument for strengthening enforcement against mergers involving potential competitors is intuitive, but it involves restrictive assumptions that weaken its applicability. The argument is laid out most completely by Steven Salop in his paper, Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits.[117] In it, he argues that:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

Under the model that Salop puts forward, there should, in fact, be a presumption against any acquisition, since any firm is a potential competitor with some sufficiently small probability.[118] Given that a model like Salop’s animates much of the skepticism toward mergers with potential entrants, it is important to examine the model’s assumptions, including that, because monopoly profits exceed duopoly profits, incumbents have an incentive to eliminate potential competition for anticompetitive reasons.

The notion that monopoly profits exceed joint duopoly profits rests upon two restrictive assumptions that hinder the simple application of Salop’s model to antitrust in general and to the merger guidelines, in particular.

First, even in a simple model, it is not always true that monopolists have both the ability and incentive to eliminate any potential entrant simply because monopoly profits exceed duopoly profits. For the simplest complication, suppose there are two possible entrants, rather than the common assumption of just one entrant at a time. The monopolist must now pay each of the entrants enough to prevent entry. But how much? If the incumbent has already paid one potential entrant not to enter, the second could then enter the market as a duopolist, rather than as one of three oligopolists. Therefore, the incumbent must pay the second entrant an amount sufficient to compensate a duopolist, not their share of a three-firm oligopoly profit. The same is true for buying the first entrant. To remain a monopolist, the incumbent would have to pay each possible competitor duopoly profits.

Because monopoly profits exceed duopoly profits, it is profitable to pay a single entrant half of the duopoly profit to prevent entry. It is not, however, necessarily profitable for the incumbent to pay both potential entrants half of the duopoly profit to avoid entry by either.[119] With enough potential entrants, the monopolist in any market will not want to buy any of them out. In that case, the outcome involves no acquisitions.
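The multiple-entrant logic can be checked with standard symmetric-Cournot arithmetic. This is a stylized sketch under textbook linear-demand assumptions, not Salop’s own model:

```python
# Symmetric Cournot profits with linear demand P = a - Q and marginal cost c:
# each of n firms earns (a - c)^2 / (n + 1)^2. Parameter values hypothetical.
def cournot_profit_per_firm(n, a=2.0, c=1.0):
    return (a - c) ** 2 / (n + 1) ** 2

monopoly = cournot_profit_per_firm(1)  # 0.25
duopoly = cournot_profit_per_firm(2)   # ~0.111 per firm
triopoly = cournot_profit_per_firm(3)  # 0.0625 per firm

# One potential entrant: the entrant must be paid its duopoly payoff to
# stay out. The incumbent keeps monopoly - duopoly (~0.139), more than its
# fallback of competing as a duopolist (~0.111), so the buyout pays.
one_entrant_buyout_pays = (monopoly - duopoly) > duopoly

# Two potential entrants: each must be paid its full duopoly payoff (if one
# is bought, the other can still enter as a duopolist). The incumbent keeps
# monopoly - 2 * duopoly (~0.028), worse than its fallback of competing as
# a triopolist (0.0625), so buying out everyone no longer pays.
two_entrant_buyout_pays = (monopoly - 2 * duopoly) > triopoly

print(one_entrant_buyout_pays, two_entrant_buyout_pays)  # True False
```

With one potential entrant the buyout is profitable; with two, the cumulative payments already exceed what monopoly preservation is worth relative to the incumbent’s fallback of simply competing.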

If we observe an acquisition in a market with many potential entrants (whether a given market has many is itself an empirical question), there must be another reason for that deal besides monopoly maintenance. The presence of multiple potential entrants overturns the antitrust implications of the truism that monopoly profits exceed duopoly profits. The question turns instead on an empirical analysis of the merger and market in question: would it be profitable to acquire all potential entrants?

The second simplifying assumption that restricts applicability of Salop’s baseline model is that the incumbent has the lowest cost of production. He rules out the possibility of lower-cost entrants in Footnote 2: “Monopoly profits are not always higher. The entrant may have much lower costs or a better or highly differentiated product. But higher monopoly profits are more usually the case.” If one allows the possibility that an entrant may have lower costs (even if those lower costs won’t be achieved until the future, when the entrant gets to scale), it does not follow that monopoly profits (under the current higher-cost monopolist) necessarily exceed duopoly profits (with a lower-cost producer involved).

One cannot simply assume that all firms have the same costs or that the incumbent is always the lowest-cost producer. This is not just a modeling choice but has implications for how we think about mergers. As Manne, Bowman, & Auer argue:

Although it is convenient in theoretical modeling to assume that similarly situated firms have equivalent capacities to realize profits, in reality firms vary greatly in their capabilities, and their investment and other business decisions are dependent on the firm’s managers’ expectations about their idiosyncratic abilities to recognize profit opportunities and take advantage of them—in short, they rest on the firm managers’ ability to be entrepreneurial.[120]

Given the assumptions that all firms have identical costs and there is only one potential entrant, Salop’s framework would find that all possible mergers are anticompetitive and that there are no possible efficiency gains from any merger. Since the acquired firm cannot, by assumption, have lower costs of production, it cannot improve the incumbent’s costs of production. But, in fact, whether a merger is efficiency-reducing and bad for competition and consumers needs to be proven, not assumed.

If we take Salop’s acquisition model literally, every industry would have just one firm. Every incumbent would acquire every possible competitor, no matter how small—after all, monopoly profits are higher than duopoly profits, and so the incumbent both wants to and can preserve its monopoly profits. The model gives us no way to determine where the acquisitions would stop. The merger, again by assumption, does not affect the production side of the economy but exists only to gain market power to manipulate price. Since the model offers the incumbent no downside to acquiring a competitor, it would acquire every last potential competitor, no matter how small, unless prevented by law.

Once we allow for the possibility that firms differ in productivity, however, it is no longer true that monopoly profits are greater than industry duopoly profits. We can see this most clearly in situations where there is “competition for the market” and the market is winner-take-all. If the entrant to such a market has lower costs, the profit under entry (when one firm wins the whole market) can be greater than the original monopoly profits. In such cases, monopoly maintenance alone cannot explain an entrant’s decision to sell. An acquisition could therefore be procompetitive and increase consumer welfare. For example, the acquisition could allow the lower-cost entrant to reach scale more quickly. The acquisition of Instagram by Facebook, for instance, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users and provided it with a powerful monetization mechanism that was otherwise unavailable to Instagram.

In short, the notion that incumbents can systematically and profitably maintain their market position by acquiring potential competitors rests on assumptions that will regularly fail to hold in practice. It is thus improper to assume that most of these acquisitions reflect efforts by an incumbent to anticompetitively maintain its market position.

IV.    Guideline 5: Mergers Should Not Substantially Lessen Competition by Creating a Firm That Controls Products or Services That Its Rivals May Use to Compete

The word “may” in this context is much too open, appearing to include products that no firm would imagine using to compete—but may use—and products that have close substitutes that constrain competition. A better wording would be “likely use to compete” or, at least, “plausibly use to compete.” Alternatively, the guideline could use the language from the body of the guideline “have the ability and incentive,” since the incentive to restrict products and services that competitors use is what matters for predicting whether the merged party will restrict products and services.

The guideline should not use the phrase “make it harder for rivals to compete,” since that will include many pro-competitive mergers. If the merged firm is more productive and can outbid competitors for inputs, that merger makes it harder for rivals to compete. Would the agencies challenge such a merger? A better phrase would be that the “merged firm would have the ability and incentive to restrict access and thereby harm competition” or “merged firm would have the ability and incentive to weaken or exclude rivals and thereby harm competition.”

A.      Vertical Mergers Often Create Efficiencies That Make It Harder for Rivals to Compete

The language of “make it harder for rivals to compete” is especially problematic for vertical mergers, which Guideline 5 addresses without saying as much. The reason is that vertical mergers often have procompetitive effects that make it harder for rivals to compete. Most of the time, vertical mergers are benign or beneficial, often leading to cost reductions, synergies, new or improved products, and lower prices for consumers.[121] Again, as Aviv Nevo and colleagues summarized:

Applying the same sort of skepticism about efficiencies in a vertical merger as in a horizontal merger can amount to assuming away a portion of the economics that is at the heart of the vertical investigation.[122]

Critics of the “Chicago school orthodoxy” on vertical mergers pay special attention to “oligopoly” markets,[123] contending that “[a] stronger overarching procompetitive presumption for vertical mergers does not make sense in oligopoly markets where vertical merger enforcement would be focused.”[124] But the critics are simply wrong that the empirical evidence supports greater condemnation of vertical mergers, even in oligopoly markets. At best, the evidence from oligopoly markets is mixed. Rather than a rush to condemnation, there is a need for further research before adopting any new policies based on such ambivalent (at best) evidence.

Emerging criticisms of the so-called “orthodoxy” must either ignore or dismiss the hundreds of econometric studies famously reviewed by Lafontaine and Slade.[125] Indeed, this longstanding work is criticized by some as irrelevant or insufficient.[126] But the reality is that these studies constitute the overwhelming majority of the evidence we have; many, if not most, of the studies are well-done, even by modern standards.[127] The upshot of these studies, as Lafontaine & Slade put it, is that:

[C]onsistent with the large set of efficiency motives for vertical mergers that we have described so far, the evidence on the consequences of vertical mergers suggests that consumers mostly benefit from mergers that firms undertake voluntarily.[128]

Francine Lafontaine, while acknowledging the limitations of some of the evidence used for these studies, recently reiterated the relevance of the studies to vertical mergers, and restated the overall conclusions of the literature:

We were clear that some of the early empirical evidence is less than ideal, in terms of data and methods.

But we summarized by saying that the empirical literature reveals consistent evidence of efficiencies associated with the use of vertical restraints (when chosen by market participants) and, similarly, with vertical integration decisions.[129]

Margaret Slade reiterated this same conclusion in June 2019 at the OECD, where she noted that, even in light of further studies, “[t]he empirical evidence leads one to conclude that most vertical mergers are efficient.”[130] Moreover, as Slade noted, forecasting likely effects from vertical mergers using more modern tools—such as assessment of vertical upward pricing pressure—is a fraught and unreliable endeavor.[131]

Nonetheless, critics forward the claim that many newer studies demonstrate harm from vertical mergers. The implication is that the balance of evidence taken from these studies tips the scales against a presumption of benefits from vertical mergers:

Surveys of earlier economic studies, relied upon by commenters who propose a procompetitive presumption, reference studies of vertical mergers in which the researchers sometimes identified competitive harm and sometimes did not. However, recent empirical work using the most advanced empirical toolkit often finds evidence of anticompetitive effects.[132]

Yet the newer literature is no different from the old in finding widely procompetitive results overall, intermixed with relatively few seemingly harmful results. As scholars at the Global Antitrust Institute at George Mason Law School have noted in a thorough canvassing of the more-recent literature:

In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets.[133]

Below, we briefly review the actual results of several of these recent studies—including, in particular, studies that were referenced at the recent 2018 FTC hearings to support claims that the “econometric evidence does not support a stronger procompetitive presumption.”[134]

Fernando Luco and Guillermo Marshall examined Coca-Cola and PepsiCo’s acquisitions of some of their downstream bottlers.[135] At the time, Dr Pepper Snapple Group remained independent in selling inputs to bottlers. Bottlers, even those that are vertically integrated with one of their upstream suppliers, purchased inputs from competing upstream suppliers. Based on their statistical analysis, the authors conclude that vertical integration in the carbonated-beverage industry was associated with price increases for Dr Pepper Snapple Group products and price decreases for both Coca-Cola and PepsiCo products bottled by vertically integrated bottlers. However, the market share of the products associated with higher prices was no more than 2%. Thus, the authors conclude: “vertical integration did not have a significant effect on quantity-weighted prices when considering the full set of products.”[136] Overall, the effect on consumers was either an efficiency gain or no change. As Francine Lafontaine notes, “in total, consumers were better off given who was consuming how much of what.”[137]

Justine Hastings and Richard Gilbert conclude that vertical integration is associated with statistically significant higher wholesale gasoline prices.[138] Using data from 1996-1998, their study examined the wholesale prices charged by a vertically integrated refiner/retailer and found the firm charged higher wholesale prices in cities where its retail outlets competed more with independent gas stations. Hastings and Gilbert conclude that their observations are consistent with a theory of raising rivals’ costs.[139]

In subsequent research, Christopher Taylor, Nicolas Kreisle, and Paul Zimmerman examine retail gasoline prices following the 1997 acquisition of an independent gasoline retailer by a vertically integrated refiner/retailer.[140] They estimate the merger was associated with a price increase of 0.4 to 1.0 cents per gallon—about 1% or less—which they characterize as economically insignificant.[141] These results were at odds with Hastings’ earlier review of the same merger, which concluded that the replacement of independent retailers with branded vertically integrated retailers would result in higher prices.[142]

To explain the conflicting results between Hastings and Taylor et al., Hastings[143] highlights the challenges of evaluating vertical mergers with incomplete data or using different sets of data—even seemingly similar data can yield wildly different results. Because of the wide range of reported results and their sensitivity to the data used, caution should be exercised before inferring any general conclusions from this line of research.

Other commonly cited studies for the proposition that the more recent evidence on vertical mergers shows a greater likelihood of harm fare no better.

Gregory Crawford, Robin Lee, Michael Whinston, & Ali Yurukoglu examine vertical mergers between multichannel video programming distributors (MVPDs) and regional sports networks (RSNs).[144] Margaret Slade characterizes the findings of the paper as “mixed,” in that integration can be associated with both beneficial and harmful effects.[145] In a purely semantic sense, that is an accurate characterization. But the overall results in Crawford et al. overwhelmingly find procompetitive consumer-welfare effects:

In counterfactual simulations that enforce program access rules, we find that vertical integration leads to significant gains in both consumer and aggregate welfare… Averaging results across channels, we find that integration of a single RSN with effective program access rules in place would reduce average cable prices by 1.2% ($0.67) per subscriber per month in markets served by the RSN, and increase overall carriage of the RSN by 9.4%. Combined, these effects would yield, on average, a $0.43 increase in total welfare per household from all television services, representing approximately 17% of the average consumer willingness to pay for a single RSN. We also predict that consumer welfare would increase….

On net, we find that the overall effect of vertical integration in the absence of effective program access rules—allowing for both efficiency and foreclosure incentives—is to increase consumer and total welfare on average, resulting in (statistically significant) gains of approximately $0.38–0.39 per household per month, representing 15–16% of the average consumer willingness to pay for an RSN….[146]

The implications of this well-designed and carefully executed study are clear. Indeed, Harvard economist Robin Lee, one of the study’s authors, concluded that the findings demonstrate that the consumer benefits of efficiency gains outweighed any harms from foreclosure.[147]

Ayako Suzuki reviewed the vertical merger between Time Warner and Turner Broadcasting in programming and distribution in the cable-television market.[148] The paper examined the merger’s effects on foreclosure, per-channel prices, basic-bundle product mix, and basic-bundle penetration.

The author found foreclosure following the merger in Time Warner markets for those rival channels that were not integrated with any cable distributors. After the merger, two independent channels, the Disney Channel and the Fox News Channel, were foreclosed from Time Warner markets. The paper notes that prior to the merger, two Turner channels (TBS and TCM) were foreclosed by Time Warner, but the foreclosure was ended after the merger: “Turner suffered from the low market shares of TBS and TCM in Time Warner markets, therefore it integrated itself with Time Warner in order to recover their market shares.”[149]

Suzuki concludes that per-channel prices decreased more in Time Warner markets than they would have in the absence of the merger.[150] The paper suggests transaction-cost efficiencies lowered the implicit cost to the channels’ distributor, causing input prices to shift downward, and in turn resulted in reduced cable prices to consumers.[151]

V.      Guideline 6: Vertical Mergers Should Not Create Market Structures That Foreclose Competition

Guideline 6 appears to add additional structural presumptions that are not justified by the law or the economics. On the law, the guideline says “If the foreclosure share is above 50 percent, that factor alone is a sufficient basis to conclude that the effect of the merger may be to substantially lessen competition, subject to any rebuttal evidence…” However, the section of Brown Shoe immediately following the one cited states:

Between these extremes, in cases such as the one before us, in which the foreclosure is neither of monopoly nor de minimis proportions, the percentage of the market foreclosed by the vertical arrangement cannot itself be decisive.[152]

On the economics, guideline 6 shares all the issues of the structural presumptions discussed around guideline 1, and more. The “foreclosure share” is the share of the market that the merged firm could foreclose; it says nothing about whether the firm would have any incentive to foreclose. If guideline 6 remains, the foreclosure-share analysis needs to incorporate the incentive to foreclose. Otherwise, the agencies could challenge a merger between a firm with 51 percent of an upstream market and a firm with 0.001 percent of a downstream market, since “the foreclosure share is above 50 percent, [and] that factor alone is a sufficient basis to conclude that the effect of the merger may be to substantially lessen competition.”

The courts have recently rejected such arguments, so it is surprising to see them in the Draft Guidelines. Consider the recent Microsoft-Activision merger: the Draft Guidelines would flag it for blocking, since Microsoft could pull Call of Duty from Sony’s PlayStation consoles. But the court concluded that Microsoft would have no incentive to pull Call of Duty, since Sony has the largest market share.[153]

VI.    Guideline 8: Mergers Should Not Further a Trend Toward Concentration

The agencies are well-justified to think about the dynamics of the market, not just the static snapshot. Unfortunately, guideline 8 maintains all the flaws of guideline 1 and adds a few more.

It is important to reiterate: concentration need not be harmful to consumers. In fact, the trade and industrial-organization literature that explicitly studies changes (or trends) in competition finds that increased competition increases concentration. As Chad Syverson summarizes:

Many empirical studies in varied settings have found that greater substitutability/competition—resulting from, say, reductions in trade, transport, or search costs—shifts activity away from smaller, higher-cost producers and toward larger, lower-cost producers…. [We] demonstrate that search cost reductions reallocate market share toward lower-cost and larger sellers, increasing market concentration even as margins fall. It is not an exaggeration to say that there are scores, perhaps hundreds, of such studies.[154]

This literature does not imply that every increase in concentration is pro-competitive. Instead, it simply means that a previous trend toward concentration need not be anticompetitive in any way. If there is an industry that has become more concentrated through more competition, will the agencies block a merger that increases concentration but does not increase prices?
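The mechanism Syverson describes can be made concrete with a toy numerical sketch (our own illustration, not drawn from the cited studies; all functional forms and numbers are hypothetical). When search frictions fall, demand reallocates toward lower-priced sellers, so measured concentration (HHI) rises even as the average price paid falls:

```python
import math

def shares(prices, tau):
    """Market shares under a simple logit-style choice rule.

    tau proxies search/transport frictions: when tau is large, consumers
    barely discriminate among sellers; when tau is small, demand
    concentrates on the cheapest sellers.
    """
    weights = [math.exp(-p / tau) for p in prices]
    total = sum(weights)
    return [w / total for w in weights]

def hhi(s):
    """Herfindahl-Hirschman Index, using shares expressed in percent."""
    return sum((100 * x) ** 2 for x in s)

prices = [10, 11, 12, 13]            # four sellers, lowest-cost seller first
for tau in (5.0, 1.0):               # high frictions, then low frictions
    s = shares(prices, tau)
    avg_price = sum(p * x for p, x in zip(prices, s))
    print(f"tau={tau}: HHI={hhi(s):.0f}, share-weighted price={avg_price:.2f}")
```

In this parameterization, lowering the friction parameter raises the HHI from roughly 2,600 to roughly 4,800 while the share-weighted price consumers pay falls—concentration rising precisely because competition intensified.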

Guideline 8 is especially problematic when paired with the statement “efficiencies are not cognizable if they will accelerate a trend toward concentration.”[155] Such a statement effectively negates any efficiency defense available to all but the very smallest firms. Efficiencies will almost always increase concentration—especially if those efficiencies come from economies of scale. If a merger creates efficiencies, the merged firm can lower costs, cut prices, and attract more customers. Attracting more customers with better products and prices will likely increase competition. The economic evidence is quite strong that efficiency increases concentration.

VII.  Guideline 11: When a Merger Involves Competing Buyers, the Agencies Examine Whether It May Substantially Lessen Competition for Workers or Other Sellers

Guideline 11 should be commended for mentioning lower wages as an anticompetitive harm. The other guidelines would benefit from focusing more on effects on prices, quality, and innovation, instead of structural presumptions.

Guideline 11 should, however, be restricted to the first two paragraphs: the first stating that merger analysis applies to buyer markets and the second (if there was any confusion) that labor markets are buyer markets. The rest of the guideline is a digression on the nature of labor markets that cites neither law nor economics. For example, the guidelines say, “labor markets are often relatively narrow.”[156] What is the justification for this claim in the merger guidelines, of all documents?

If the agencies have demonstrated a loss of competition in the labor market, the guidelines make clear that the Clayton Act does not allow for the consideration of offsetting effects in output markets. In the standard monopsony models in economics, there is no offsetting effect, so the point is irrelevant. Harm to sellers of inputs (workers) hurts consumers as well. This was the case in the recent successful action to block Penguin Random House from merging with Simon & Schuster.[157] The parties agreed that, if there was harm to the authors, there would be fewer books, harming consumers.[158] There was no need to think about offsetting effects.

The hard part is when the agencies have yet to prove loss of competition in the labor market, but that putative loss is being adjudicated. Thorny issues arise that make competition among buyers different from competition among sellers, but the guidelines do not offer any guidance here. For example, will the agencies consider a reduction in wages to be evidence of harm in labor markets? A merger that increases efficiency but does not decrease competition could still end up reducing workers’ wages if the efficiency gains require fewer workers. Perhaps the merger does not require fewer workers overall, but it does reduce employment of a subset of workers. Will the agencies regard that as a labor-market harm? The guidelines may not be the right place for these clarifications, but providing guidance on such tough issues would be more beneficial than making blanket statements about the nature of labor markets.

A.        Monopsony Is More Than the Mirror Image of Monopoly

The application of antitrust to monopsony is significantly more complicated than it might seem. On the surface, it may appear that monopsony is simply the “mirror image” of monopoly.[159] There are, however, several important differences between monopoly and monopsony, and several complications raised by monopsony analysis that significantly distinguish the analysis required for each. Most fundamental among these, monopsony and monopoly markets do not sit at the same place in the supply chain.[160] This matters because all supply chains end with final consumers. Accordingly, from a policy standpoint, it is essential to decide whether antitrust ultimately seeks to maximize output and welfare at that (final) level of the distribution chain (albeit indirectly); whether intermediate levels of the distribution chain (e.g., an input market) should be analyzed in isolation; or whether effects in both must be somehow aggregated.

This has important ramifications for antitrust enforcement against monopsonies. As we explain below, competitive conditions of input markets have salient impacts on prices and output in product markets. Given this, any evaluation of monopsony must consider the “pass-through” to the final product market, while there is no such “mirror image” complication in the consideration of final-product monopoly markets. Along similar lines, treating the assessment of mergers in input markets as the simple mirror image of product-market mergers presents important problems for the way authorities address merger efficiencies, as traditional efficiencies and increased buyer power are often two sides of the same coin. Finally, it is unclear how authorities should think about market definition—a cornerstone of modern antitrust policy—in labor markets, in particular.

The upshot is that, while monopsony concerns are becoming more prevalent in academic and policy discussions, the agencies should be extremely hesitant as they move forward. Some have argued that “[m]ergers affecting the labor market require some rethinking of merger policy, although not any altering of its fundamentals.”[161] As we discuss below, however, while the economic “fundamentals” undergirding merger policy may not change for labor-market mergers, the “rethinking” required to properly assess such mergers does entail fundamental changes that have not yet been adequately studied or addressed. As many have pointed out, there is only a scant history of merger enforcement in input markets in general, and even less in labor markets.[162] It is premature to offer guidelines purporting to synthesize past practice and the state of knowledge when neither is well established.

1.        Theoretical differences between monopoly and monopsony

Before getting to the practical differences of a monopoly case versus a monopsony case, consider the theoretical differences between identifying monopsony power versus monopoly power.[163] Suppose, for now, that a merger generates either efficiency gains or market power, but not both. In a monopoly case, if there are efficiency gains from a merger, the quantity sold in the output market will increase. With sufficient data, the agencies will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result in either greater output at lower unit cost or else product-quality improvements that increase consumer demand. In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. The empirical implication of the merger is seen directly in the market in question.

The monopsony case is more complicated, however. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed. To see this, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices and quantities purchased of inputs like labor and materials. But this same effect (reduced prices and quantities for inputs) could be observed if the merger is efficiency-enhancing, as well. If there are efficiency gains, the merged parties may purchase smaller quantities of one or more inputs. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the merged hospital will buy fewer inputs—hiring fewer technicians or purchasing fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even if the newly merged hospitals are not exercising any market power to suppress wages.[164]

Decisionmakers cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased. The court can differentiate a merger that generates monopsony power from a merger that increases productive efficiencies only by looking to the output market. Once we look at the output market, as in a monopoly case, if the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output.[165]
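The identification problem described above can be illustrated with a stylized numerical model (our own sketch; every functional form and parameter value is hypothetical). Both an efficiency-enhancing merger and a monopsony-creating merger reduce employment and wages, so input-market data alone cannot tell them apart; only the output market does:

```python
# Stylized model: product demand p(q) = 20 - 0.02*q, production q = a*L,
# inverse labor supply w(L) = 2 + 0.02*L. Illustrative assumptions only.

def wage_taker_employment(a):
    # Labor-market equilibrium for a wage-taking firm: marginal revenue
    # product of labor equals the supply wage.
    #   MRP(L) = 20a - 0.04*a^2*L   and   w(L) = 2 + 0.02*L
    return (20 * a - 2) / (0.04 * a**2 + 0.02)

def monopsonist_employment(a):
    # A monopsonist internalizes that hiring more raises the wage, so it
    # equates MRP with *marginal expenditure* on labor, 2 + 0.04*L.
    return (20 * a - 2) / (0.04 * a**2 + 0.04)

def summarize(label, L, a):
    q = a * L                 # output sold downstream
    w = 2 + 0.02 * L          # wage paid
    p = 20 - 0.02 * q         # consumer price
    print(f"{label}: workers={L:5.0f}  wage={w:5.2f}  output={q:5.0f}  price={p:5.2f}")

summarize("pre-merger (wage-taking, a=1.0)      ", wage_taker_employment(1.0), 1.0)
summarize("efficiency merger (wage-taking, a=1.3)", wage_taker_employment(1.3), 1.3)
summarize("monopsony merger (a=1.0)             ", monopsonist_employment(1.0), 1.0)
```

In this parameterization both post-merger scenarios show fewer workers hired at lower wages, yet the efficiency merger raises output and lowers the consumer price while the monopsony merger reduces output and raises it—exactly the distinction a court can see only in the output market.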

In short, the assumption that monopsony analysis is simply the mirror image of monopoly analysis does not hold.[166] In both types of mergers—those that possibly generate monopoly or monopsony—the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. Therefore, it is impossible to discuss monopsony power coherently without considering the output market.

2.        Monopsony and merger efficiencies

In real-world cases, mergers will not necessarily be either purely efficiency-enhancing or purely monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies—particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.

This reality raises some thorny problems for monopsony merger review that have not been well studied to date:

Admitting the existence of efficiencies gives rise to a subsequent set of difficult questions central to which is “what counts as an efficiency?” A good example of why the economics of this is difficult is considering the case in which a horizontal merger leads to increased bargaining power with upstream suppliers. The merger may lead to the merging parties being able to extract necessary inputs at a lower price than they otherwise would be able to. If so, does this merger enhance competition in a possible upstream market? Perhaps not. However, to the extent that the ability to obtain inputs at a lower price leads to an increase in the total output of the industry, then downstream consumers may in fact benefit. Whether the possible increase in the total surplus created by such a scenario should be regarded as off-setting any perceived loss in competition in a more narrowly defined upstream market is a question that warrants more attention than it has attracted to date.[167]

With “monopoly” mergers, plaintiffs must show that a transaction will reduce competition, leading to an output reduction and increased prices to consumers. This finding can be rebutted by demonstrating cost-saving or quality-improving efficiencies that would lead to lower prices or other forms of increased consumer welfare. In evaluating such mergers, agencies and courts must weigh the upward pricing pressure from reduced competition against the downward pricing pressure associated with increased efficiencies and the potential for improved quality.

As we have explained above, this analysis becomes more complicated when a merger raises monopsony concerns. In a simple model of monopsony, the merger would increase market power in the input market (e.g., labor), leading to a lower price paid for the input and a smaller quantity used of the input relative to pre-merger levels. Assuming no change in market power in the final product market, these cost savings would result in lower prices paid by consumers. Should such efficiency effects “count” in evaluating mergers alleged to lessen competition in input markets? It is surely too facile a response to assert that such efficiency effects would be “out of market” and thus irrelevant. Indeed, if antitrust enforcement truly seeks to promote consumer welfare, any evaluation of a “monopsony” merger must weigh these effects against the effects in the input market.

Some would argue these are the types of efficiencies that merger policy is meant to encourage. Others may counter that policy should encourage technological efficiencies while discouraging efficiencies stemming from the exercise of monopsony power.

But this raises another complication: How do agencies and courts distinguish “good” efficiencies from “bad”? Is reducing the number of executives pro- or anticompetitive? Is shutting down a factory or healthcare facility made redundant post-merger pro- or anticompetitive? Trying to answer these questions places agencies and courts in the position of second-guessing not just the effects of business decisions, but the intent behind those decisions (to a first approximation, the observed outcomes are identical). Even worse, it can create a Catch-22 in which an efficiencies defense in the product market becomes an efficiencies offense in the input market—e.g., a hyper-efficient merged entity may outcompete rivals in the product market, possibly leading to monopsony in the input market. In ambiguous cases, the outcome may depend on whether the merger is challenged on the input or the output side of the market; it even implies that successfully identifying efficiencies to overcome one challenge creates the predicate for a second challenge based on effects on the other side of the market.

A further complication arises when dynamic effects are considered, which may convert apparent harms even on only the seller side of an input market into benefits:

[T]he presence of larger buyers can make it more profitable for a supplier to reduce marginal cost (or, likewise, to increase quality). This result stands in stark contrast to an often expressed view whereby the exercise of buyer power would stifle suppliers’ investment incentives. In a model with bilateral negotiations, a supplier can extract more of the profits from an investment if it faces more powerful buyers, though the supplier’s total profits decline. Furthermore, the presence of more powerful buyers creates additional incentives to lower marginal cost as this reduces the value of buyers’ alternative supply options.[168]

None of this is to say the creation of monopsony power should categorically be excluded from the scope of antitrust enforcement, of course. But it is quite apparent that this sort of enforcement raises extremely complicated tradeoffs that are glossed over or underappreciated in the current discourse and under-explored in the law. It would be deeply problematic to attempt to enshrine a particular view of these tradeoffs into guidelines given the current state of knowledge and practice in this area. Perhaps worse, it would almost surely undermine the efficacy and authority of guidelines in general, as courts are unlikely to find such guidelines to be the helpful distillation of economic and legal principles that they are today.

3.        Determining the relevant market for labor

In monopoly cases, agencies and courts face an enormous challenge in accurately identifying a relevant market. These challenges are multiplied in input markets—especially labor markets—in which monopsony is alleged. Many inputs are highly substitutable across a wide range of industries, firms, and geographies. For example, changes in technology, such as the development of PEX tubing and quick-connect fittings, allow laborers and carpenters to perform work previously done exclusively by plumbers. Technological changes have also expanded the relevant market in skilled labor: Remote work during the COVID-19 pandemic, for example, demonstrated that many skilled workers are not bound by geography and compete in national—if not international—labor markets.

When Whole Foods attempted to acquire Wild Oats, the FTC defined the relevant market as “premium natural and organic supermarkets” as a way to exclude larger firms, such as Walmart and Kroger, from the relevant product market.[169] Yet even if one were to accept the FTC’s product-market definition, it is unlikely that anyone would consider employment at a “premium natural and organic supermarket” as a distinct input market. This is because the skill set needed to work at Whole Foods overlaps with the skill set demanded by myriad retailers and other employers—and certainly overlaps with the skill set needed to work at Kroger.

Moreover, policies such as occupational licensing have the effect of arbitrarily defining the work that can be performed or the services provided by a wide range of workers. This raises the question of whether firms should be scrutinized for exercising monopsony power when regulations may be limiting the scope of the relevant market and contributing to the monopsony conditions. A “whole-of-government” approach to competition,[170] in other words, would certainly work to reduce these artificial barriers to market scope before thwarting possibly efficiency-enhancing mergers that appear monopsonistic only because of such government constraints.

Contrary to what some have claimed, applying the SSNIP test to input markets—in the form of a “small and significant but non-transitory reduction in wages” or “SSNRW”—would also raise significant difficulties.[171] For a start, the necessary data points required to conduct a SSNRW test are much harder to obtain than is the case for the SSNIP. The SSNIP test asks whether a hypothetical monopolist could profitably raise prices 5-10% above the competitive baseline, whereas the SSNRW test asks whether a hypothetical monopsonist could profitably decrease wages by 5-10%. The former question is far more tractable than the latter. Indeed, under the SSNIP, profitability hinges on the quantity sold, as well as the difference between prices and costs—both of which are relatively amenable to measurement. This is less true of the SSNRW, which depends on the difference between prices paid for inputs and their “marginal revenue product.” The second of these two factors would prove extremely challenging, perhaps impossible, to measure. This makes the SSNRW significantly harder to apply than the SSNIP.

At the same time, “wages” in many labor contexts consist of a complicated mix of factors, including some (e.g., “work environment”) that defy easy quantification. While there are, of course, issues with measuring quality changes in product markets, the problems are significantly magnified in labor markets, and laborers’ preferences are invariably more heterogeneous across many more dimensions of the elements of labor’s “price.” Furthermore, the marginal revenue product of an input hinges on competitive conditions in the output market. This reinforces the sense that monopsony analysis inherently raises cross-market effects that are less prevalent in the monopoly case.
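One way to see why the SSNRW is hard to operationalize is through a critical-loss calculation, the buyer-side analogue of the standard SSNIP exercise. The sketch below is our own illustration with hypothetical numbers; note that it cannot run without an estimate of the marginal revenue product—precisely the quantity that is so difficult to measure:

```python
# Hypothetical-monopsonist ("SSNRW") profitability check, sketched as the
# mirror image of SSNIP critical-loss analysis. All numbers are illustrative
# assumptions, including the marginal revenue product (mrp), which in
# practice must be estimated and is rarely observable.

def critical_loss(wage_cut, wage, mrp):
    """Max fraction of workers the hypothetical monopsonist can afford to
    lose before a wage cut of `wage_cut` (e.g. 0.05 for 5%) is unprofitable.

    Per retained worker the firm gains wage * wage_cut; per departing
    worker it loses the markdown margin mrp - wage*(1 - wage_cut).
    Profitability requires actual loss < wage_cut / (margin + wage_cut).
    """
    margin = (mrp - wage) / wage        # markdown margin relative to the wage
    return wage_cut / (margin + wage_cut)

def predicted_loss(wage_cut, supply_elasticity):
    """Fraction of workers predicted to leave after the wage cut."""
    return supply_elasticity * wage_cut

wage, mrp = 30.0, 36.0                  # mrp is the hard-to-measure input
cut, elasticity = 0.05, 2.0             # 5% cut; labor-supply elasticity of 2

cl = critical_loss(cut, wage, mrp)      # 0.05 / (0.20 + 0.05) = 0.20
al = predicted_loss(cut, elasticity)    # 2.0 * 0.05 = 0.10
verdict = "wage cut profitable -> candidate labor market" if al < cl else "market too narrow"
print(f"critical loss {cl:.2f} vs predicted loss {al:.2f}: {verdict}")
```

The arithmetic itself is trivial; the entire difficulty lies in supplying credible values for `mrp` and the labor-supply elasticity, and in deciding what counts as the "wage" when compensation bundles non-pecuniary factors.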

4.        Monopsony and the consumer welfare standard

As discussed in the previous sections, using antitrust enforcement to thwart potential monopsony harms is a task full of evidentiary difficulties, as well as complex tradeoffs. Perhaps more problematically, it is also unclear whether (and, if so, how) such an endeavor is consistent with the consumer welfare standard—the lodestar of antitrust enforcement—at least as it is currently understood and implemented by courts.

Marinescu & Hovenkamp assert that:

Properly defined, the consumer welfare standard applies in exactly the same way to monopsony. Its goal is high output, which comes from the elimination of monopoly power in the purchasing market.… [W]hen consumer welfare is properly defined as targeting monopolistic restrictions on output, it is well suited to address anticompetitive consequences on both the selling and the buying side of markets, and those that affect labor as well as the ones that affect products. In cases where output does not decrease, the anticompetitive harm to trading partners can also be invoked.[172]

But this is far from self-evident. There are at least two problems with this reasoning.

For a start, the claim that harm to input providers is actionable even when it does not reduce product output rests on the tenuous assertion that a mere pecuniary transfer is sufficient to establish anticompetitive harm.[173] This is problematic because such harms may actually benefit consumers. In the extreme example, all of the benefits of a better negotiating position are passed on to consumers.[174] The main justification for ignoring these cross-market effects (as with all market-definition exercises) is primarily a pragmatic one (though it is rather weakened in light of modern analytical methods).[175] Particularly in the context of inputs into a specific output market, these cross-market effects are inextricably linked and hardly beyond calculation. And as the enforcement agencies have previously recognized, “[i]nextricably linked out-of-market efficiencies, however, can cause the Agencies, in their discretion, not to challenge mergers that would be challenged absent the efficiencies.”[176]

The assertion that pecuniary transfers are actionable is also inconsistent with the fundamental basis for antitrust enforcement, which seeks to mitigate deadweight loss, but not mere pecuniary transfers that do not result in anticompetitive effects.[177]

Second, it is unclear whether the consumer welfare standard applies to input markets. At its heart, the consumer welfare standard focuses on the effects that an (incipient) monopolist’s behavior may have on consumers. And courts have, arguably, extended this welfare calculation to all direct purchasers affected by anticompetitive behavior. Much less clear is whether courts have extended (or would extend) this notion of anticompetitive harm to input markets. This goes to the very heart of the consumer welfare standard.

As we explain above, lower wages could be consistent with both efficiency and monopsony.[178] Somewhat more problematically, these lower wages may also be accompanied by lower prices passed through to consumers (or at least the monopsonist’s direct purchasers, downstream).

Larger buyers may also be able to reduce their purchasing costs at the expense of suppliers…. The concept of buyer power as an efficiency defence rests squarely on such a presumption. What is more, the argument also posits that the exercise of buyer power will not only have distributional consequences, but also increase welfare and consumer surplus by reducing deadweight loss. As we spell out in detail below, welfare gains may arise both at the upstream level, i.e., in the transactions between the more powerful merged firm and its suppliers, as well as at the downstream level, where the creation of buyer power may translate into increased rivalry and lower prices. The extent to which final consumers ultimately benefit is of particular importance if antitrust authorities rely more on a consumer standard when assessing mergers. If total welfare is the standard, however, distributional issues are not directly relevant and any pass-on to consumers is thus only relevant in as much as it contributes to total welfare.[179]

This raises an obvious question: can the consumer welfare standard (and thus antitrust authorities and courts) reach a finding of anticompetitive harm if consumers (at least in the narrow market under investigation) are ultimately being charged lower prices? As the FTC summarized in closing the investigation of a merger between two pharmacy benefit managers, “[a]s a general matter, transactions that allow firms to reduce the costs of input products have a high likelihood of benefitting consumers, since lower costs create incentives to lower prices.”[180]

Consider Judge Breyer’s Kartell opinion. As Steve Salop explains:[181]

The famous Kartell opinion written by Judge (now Justice) Stephen Breyer provides an analysis of a buyer-side “cartel” (comprised of final consumers and their “agent” insurance provider, Blue Cross) that also is consistent with the true consumer welfare standard.… Buyer-side cartels generally are inefficient and reduce aggregate economic welfare because they reduce output below the competitive level…. However, a buyer-side cartel comprised of final consumers generally would raise true consumer welfare (i.e., consumer surplus) because gains accrued from the lower prices would outweigh the losses from the associated output reduction, even though the conduct inherently reduces total welfare (i.e., total surplus).…

…Judge Breyer treated Blue Cross essentially as an agent for the customers it insured, rather than as an intermediary firm that purchased inputs and sold outputs as a monopolistic reseller. The court apparently assumed (perhaps wrongfully) that Blue Cross would pass on its lower input costs to its customers in the form of lower insurance premiums….

…In permitting Blue Cross to achieve and exercise monopsony power by aggregating the underlying consumer demands for medical care—i.e., permitting Blue Cross to act as the agent for final consumers—the Kartell court implicitly opted for the true consumer welfare standard. Blue Cross’s assumed monopsony conduct on behalf of its subscribers would thus lead to higher welfare for its subscribers despite reduced efficiency and lower aggregate economic welfare. Thus, this result represents a clear (if only implicit) judicial preference for the true consumer welfare standard rather than the aggregate economic welfare standard.

By this logic, it seems, the relevant “consumer” welfare in antitrust analysis—including mergers that increase either monopoly or monopsony power—is that of the literal consumer: the end-user of the final product. But this contrasts quite sharply with the standard mode of analysis in monopsony cases as the mirror image of monopoly, in which the merging parties’ “trading partner” (whether upstream or downstream) is the relevant locus of the welfare analysis.

Indeed, extended to more current potential cases, this mode of analysis raises a distinct problem for the agencies. Consider, for example, a hypothetical case against Kroger surrounding practices that exploit its buyer power.[182] Should such a challenge fail regardless of the effect on input providers because Kroger can be considered “an agent for the customers it [sells to]”? There is, as Salop seems to suggest,[183] some merit in such an approach, but it is certainly not how similar cases have been evaluated in the past.

There is no easy answer to the difficulty of assessing harm in upstream markets when downstream markets benefit. At first blush, excluding deadweight losses that stem from monopsony power (or at least forcing plaintiffs to show that downstream purchasers are also harmed) seems like legalistic reasoning that is largely incompatible with the welfarist ancestry of the consumer welfare standard.[184] Indeed, the consumer welfare standard is largely premised on the assumption that increased output is desirable, and that deadweight losses are harmful to society, regardless of their second-order effects. It seems odd to depart from this reasoning just because a supplier, rather than a consumer, is being harmed. Nor, from a welfare standpoint, is the inefficient switching that generates a deadweight loss any less harmful in the monopsony context than in the monopoly one.

But at least when it comes to law and antitrust practice, things are more complicated than that. Faced with what may potentially be intractable economic questions, antitrust courts have often decided to limit antitrust analysis to what economists generally refer to as partial equilibrium analysis. This likely explains why only direct purchasers can claim antitrust damages,[185] and why the Amex court chose to overlook potential harm to cash purchasers (as they were deemed to lie outside of the relevant market).[186] The upshot is that, with some notable exceptions (such as the case of two-sided markets in Amex), antitrust courts have been reluctant to analyze competitive effects in adjacent markets.

What might seem like an arbitrary decision appears more reasonable when one considers the sheer complexity of the task at hand. Economic behavior will often have second-order effects that run in an opposite direction to its first-order or “partial equilibrium” ones. A charcoal monopoly may cause buyers to opt for cleaner energy sources; a conservation cartel may maximize the long-term value of scarce resources.[187]

The question is whether antitrust law has a comparative advantage in dealing with these more “systemic” issues, or whether other legal frameworks are better adapted. Put differently, antitrust law’s main strength might be that it is mostly a consumer-oriented body of law that focuses on a single tractable problem: the prices consumers and other direct purchasers pay for goods. If that is true, then maybe other bodies of law (such as, e.g., labor and environmental laws) may be better suited to deal with broader harms. Indeed, in the case of each of these fields there exists a massive regulatory apparatus specifically designed to implement government standards. And, under the law as it stands, where antitrust law and a regulatory regime conflict, antitrust must give way.[188]

We do not purport to have a satisfactory answer to this complicated question. In fact, it is probably fair to say one does not exist. Antitrust law can either depart from its welfarist underpinnings—a large loss for its economic consistency—or it can follow those principles towards potentially intractable problems that may ultimately undermine its administrability and thus its usefulness as a policy tool. At this juncture, it is not clear there is a compromise that might enable enforcers to thread the needle to solve this complex conundrum. And if such a solution exists, it has yet to be articulated in a convincing manner that may lead to actionable insights for enforcers or courts.

Given all of this, the FTC and DOJ’s desire to adopt merger guidelines that address monopsony harms, while clearly important, seems premature given the state of the economic literature, and potentially unactionable under the consumer welfare standard. This is not to say the antitrust policy world should suddenly ignore monopsony harms, but rather that more research, discussion, and case law is needed before definitive guidelines can be written. And, ultimately, it may well be that legislative change is needed before any such guidelines will be enforceable before the courts.

VIII.     Market Definition

The difficulties discussed above should serve as a good reminder that market definition is but a means to an end. As William Landes, Richard Posner, and Louis Kaplow have all observed, market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.[189]

Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.

Unfortunately, this is not how the FTC has proceeded in recent cases or the current Draft Guidelines. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude.[190] Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:

The benefits to users of additional competition include some or all of the following: additional innovation…; quality improvements…; and/or consumer choice…. In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.[191]

Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.

In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.

IX.   Rebuttal Evidence Showing That No Substantial Lessening of Competition Is Threatened by the Merger

Starting at page 39, we discussed how vertical mergers often are pro-competitive and introduce efficiencies. Even in the case of horizontal mergers, however, the best recent empirical work finds efficiencies in many mergers.[192] While procompetitive efficiencies could be oversold by the merging parties, they cannot be assumed away, and the Draft Guidelines raise the burden on any efficiency defense beyond what is justified by the law or the economics. For example, the Draft Guidelines require that cognizable efficiencies “could not be achieved without the merger under review.”[193] First, “could not” is too high a burden. Second, what if there were many similar mergers available that offered efficiencies, all of which were pro-competitive? The wording of the draft guideline would not recognize those efficiencies, since they would not be unique to the merger being considered. The wording of the 2010 HMGs is better: “The Agencies credit only those efficiencies likely to be accomplished with the proposed merger and unlikely to be accomplished in the absence of either the proposed merger or another means having comparable anticompetitive effects” (emphasis added).[194]

The most extreme version of “raising the burden of proof” is the statement: “efficiencies are not cognizable if they will accelerate a trend toward concentration (see Guideline 8) or vertical integration (see Guideline 6).”[195] So long as that statement remains, there effectively is no rebuttal, since most efficiencies will accelerate a trend toward concentration or involve vertical integration. It should be removed.

X.     Avoiding Damage to the Credibility of the Merger Guidelines

Conceptually, the role of guidelines is to codify the accepted knowledge in a particular area of antitrust for the sake of legal certainty, and not to drive the law toward a particular unsettled frontier of the discipline. It is highly doubtful, however, whether some of the issues raised in the Draft Guidelines enjoy anywhere near the level of consensus needed to justify being codified into guidelines. The problem with pretending that they do is that it risks turning “guidelines” into an opportunity for agencies to advocate for new antitrust law and set new antitrust policy, rather than offer a useful, albeit comparatively modest, tool for legal interpretation.

Relatedly, it is somewhat puzzling that the agencies feel compelled and empowered to issue new merger guidelines now. Typically, guidelines are issued in the face of new learning or new jurisprudence with the potential to overhaul an area of antitrust law. Adoption of the 1982 guidelines, for instance, was preceded by a series of Supreme Court opinions that indicated a marked embrace of economic analysis in the Court’s antitrust analysis.[196] Nothing of this sort has, to our knowledge, preceded the agencies’ current proposals. If new economic or legal learning is not guiding the new guidelines, then what is? No such learning is cited in the Draft Guidelines. The most plausible explanation is politics. This idea is further reinforced by the limited public debate surrounding the current process for adopting new guidelines, and by the pervasiveness of certain contentious assumptions that indicate a clear political bias and a preordained intent.

Not that there isn’t precedent for this sort of approach. But the last time merger guidelines were (arguably) employed to advance a contentious political objective was more than 40 years ago.[197] By virtually any measure, subsequent updates to the guidelines have been aimed at attempting to incorporate relatively new-but-well-established learning and to synthesize updates to longstanding agency practice aimed at “getting it right,” particularly with respect to basic and ever-evolving procedural issues, like the use of thresholds. There has been, in other words, an overarching humility to the process, which has lent it a crucial authority in both courts and among practitioners and economic actors.

The 2010 [HMGs] are noteworthy because, although the agencies’ views are not binding on the judiciary, courts adjudicating merger challenges routinely cite them as persuasive. The Guidelines derive their persuasive value from laying out a consensus view on the framework that the FTC and DOJ have developed, over decades of experience, to analyze the effects of mergers. Reflecting precedent from courts and the agencies, and based on accepted economic principles, they garnered support at adoption and in case after case, serving as the touchstone for merging parties, enforcers, and judges alike.[198]

Indeed, where previous guidelines have strayed perhaps a bit too far into novelty, their influence on the courts has been minimal. Perhaps the best example of this has been the reception by courts of the 2010 Horizontal Merger Guidelines (“2010 HMGs”), particularly the intended diminishment of the role of technical analysis of market definition and the heightened reliance on relatively novel methods of direct evidence of competitive effects.[199] Although the 2010 HMGs have generally proved to be significantly influential,[200] courts have been decidedly reluctant to replace consideration of market definition with measures like the gross upward pricing pressure index (“GUPPI”) to assess unilateral effects.[201] Indeed, reliance on market shares to determine case outcomes has arguably increased.[202]

By contrast, the FTC’s recent rejection of the 2020 Vertical Merger Guidelines (“2020 VMGs”) was grounded in an obvious distaste for the specific outcomes it might have engendered.[203] Although nominally justified by a claimed lack of scholarly support,[204] that rhetoric was transparently faulty, particularly given the process by which the withdrawal was accomplished.[205] Indeed, as Carl Shapiro and Herbert Hovenkamp put it: “The Federal Trade Commission’s recent withdrawal of its 2020 vertical merger guidelines is flatly incorrect as a matter of microeconomic theory and is contrary to an extensive economic literature about vertical integration.”[206] To be sure, there was (and always will be) disagreement at the margins over best practices in merger analysis and enforcement. But nothing in the 2020 VMGs was unsupported by longstanding scholarship and practice (except, ironically, to the extent they may have gone too far at times toward repudiating the FTC majority’s preferences).[207]

And the same preference for simply stronger—not necessarily better—enforcement seems to be animating the agencies’ “very tendentious” (in the words of Doug Melamed) effort to produce new merger guidelines now.[208] Indeed, in the press release announcing the guidelines-revision process, FTC Chair Khan and AAG Kanter declare at the outset that they have “launched a joint public inquiry aimed at strengthening enforcement against illegal mergers.”[209]

The Draft Guidelines are overwhelmingly concerned with the presumed dangers of underenforcement, but inexplicably pay almost no heed to the possibility, let alone the cost, of overenforcement. Leaving aside the fact that—in merger enforcement, as in antitrust law more generally—a sound error-cost framework takes a holistic view of the likelihood and cost of errors, underpinning the agencies’ slanted view are two popular, albeit unjustified, narratives that dissolve upon closer examination.

Ultimately, both these narratives appear designed to bolster the case for the type of politically motivated overhaul of the merger guidelines that the agencies have pre-committed themselves to, rather than to fulfill what is—and should remain—the primary purpose of merger guidelines: i.e., to codify state-of-the-art knowledge and practice in one area of antitrust law as a means to increase legal certainty.

Before the FTC and DOJ consider what recommendations should be incorporated into a new set of merger guidelines, it would be appropriate to briefly consider what the current review process should aim to achieve. This raises two critical questions: What is the ultimate aim of merger guidelines, and what should the process leading up to them look like?

A.      The Role of Merger Guidelines

Merger guidelines attempt to provide an authoritative and practical guide for enforcement and adjudication by explicating two important inputs into those processes. First, guidelines attempt to consolidate established agency thinking and practice to inform potential merging parties—effectively seeking to improve legal certainty by prefiguring how agencies are likely to respond to given situations. They also describe the “accepted wisdom” of merger analysis (especially that which stems from jurisprudence). “To be as effective and persuasive as possible, the Guidelines should reflect our best thinking about the competitive effects of mergers and appropriate merger enforcement policy.”[210] Updating merger guidelines may thus be necessary when the consensus—the economic and legal “best thinking” or the underlying jurisprudence—surrounding certain practices has evolved. “Indeed, many commentators regard the guidelines’ credibility arising from this collected institutional wisdom as a foundational principle of any further revisions to the Guidelines. This caution doubtlessly preserves consumer welfare by reducing the costs associated with uncertain antitrust enforcement.”[211]

As the Antitrust Modernization Commission (“AMC”) described them:

There is general consensus that the Merger Guidelines have acted as the “blueprint for the architecture” of merger analysis and, overall, provide a guide that “functions well.” The Guidelines have had a significant influence on judicial development of merger law, which is reflected in their widespread acceptance by the courts as the relevant framework for analyzing merger cases.… The Guidelines have also provided useful guidance and transparency to the business community and antitrust bar. Finally, the Guidelines have helped to influence the development of merger policy by jurisdictions outside the United States.[212]

Given these twin goals—providing legal certainty and “codifying” the accepted knowledge concerning certain antitrust situations—guidelines are not the place to set out a novel, activist agenda or push the boundaries of knowledge and practice.

This is no small detail. There is a vast difference between what may fairly be described as new learning (i.e., a new consensus gleaned from extensive scholarship and rigorous debate), on the one hand, and new interrogations (i.e., unresolved questions that pique the interest of some scholars), on the other. As the rest of our comment suggests, many of the questions currently contemplated by the agencies fall squarely within the latter category. Accordingly, while they arguably constitute an interesting research agenda for scholars, there is virtually no sense in which they justify drafting guidelines that seek to settle these unresolved issues and that, in doing so, lead to a significant departure from existing practice.

Our assertion here is further supported by the fact that guidelines do not have binding authority, either on enforcers or courts. Courts are under no obligation to adhere to antitrust guidelines, and they will be far less likely to look to them even for guidance if they espouse politicized, un-rigorous concepts. Accordingly, by importing novel and unresolved enforcement concepts (as well as approaches to merger enforcement) into their guidelines, the agencies may render them of little use both to the public and to the courts. As Tim Muris & Bilal Sayyed put it, “the Merger Guidelines have succeeded in significant part because they do not try to do too much.”[213] In short, there is a risk that the resulting updated guidelines will not describe the “state of the art” of the economic and legal understanding. As a result, they would no longer shed light on either agency practice or likely litigation outcomes. The guidelines would thus be devoid of any tangible purpose.

This would be a real loss for consumers, as non-specialist courts currently do often look to guidelines in order to appropriately resolve complex merger issues. “The Guidelines accrued substantial institutional credibility and capital with courts due to their economic sophistication and consistency in application.”[214] As Christine Varney, assistant attorney general of the DOJ Antitrust Division in the Obama administration and a member of the Federal Trade Commission in the Clinton administration, put it: “many courts indicate that they consider the Guidelines in assessing mergers under the antitrust laws, some finding them more useful than others.”[215] Numerous scholars and practitioners echo this view and applaud the role of the HMGs in bringing focus and consensus to merger enforcement.[216] Given the speculative and politicized nature of the draft guidelines, there is good reason to doubt that many courts will find the resulting guidelines to fall on the “more useful” end of the scale.

B.      How Guidelines Are Adopted

The process the DOJ and FTC are following to produce their updated guidelines is also problematic. Indeed, if guidelines are released without real opportunity for input and without clear indication that that input has been considered in their formulation, they will be of little use.

It is not inherently problematic to revisit and revise the guidelines, of course; the agencies have done so on a somewhat regular basis since the first guidelines were issued in 1968. In all previous instances (and in the case of the agencies’ other guidelines), revisions were preceded by significant public input, debate, and consideration, leading to identification of an overarching consensus. To take one example, the FTC and DOJ ran an extensive series of workshops and consultations when they updated the HMGs in 2009-2010.[217] In a joint press release announcing the workshops, the agencies explained the goal of this process: “The goal of the workshops will be to determine whether the Horizontal Merger Guidelines accurately reflect the current practice of merger review at the Department and the FTC as well as to take into account legal and economic developments that have occurred since the last significant Guidelines revision in 1992.”[218] And as Christine Varney later elaborated on the agencies’ process and what they expected to glean from it:

In addition to inviting comments, [five] workshops have been held over the past two months.… Our nearly 100 panelists have included leading practitioners, economists, consumer advocates, industry executives, and academics. We have been fortunate to have both former and current government enforcers from the United States and around the world share their perspectives with us.… We’ve learned a lot from the workshops and the comments received so far, and this morning I would like to offer some views about what we’ve heard during this process and where I believe areas of consensus are emerging.[219]

This is quite different from the perfunctory process seemingly contemplated, at least thus far, by those same agencies today.

One response may be that the substantial process used to develop the 2010 HMGs was itself unnecessary. Rather, the agencies are approaching the current revisions using the notice-and-comment procedures required by the Administrative Procedure Act (“APA”).[220] The problem with this view is that the APA applies only to agency rulemaking authorized by Congress—and, even then, it sets only a procedural floor. Congress has not authorized the antitrust agencies to develop legally binding merger guidelines. This does not mean that it is impermissible for them to develop such guidelines as informal policy statements. It does mean, however, that such guidelines carry no force of law beyond their ability to persuade courts of their approach. On this account, a bare notice-and-comment approach offers minimal support for the proposed changes when compared to past guidelines—especially when normalized relative to the extent of the proposed changes. Modest changes might be supported by more modest procedure; substantial changes should be supported by more robust procedure.

To make matters worse, it is difficult to escape the sense that, whatever nominal process is employed by the agencies, the current guidelines-reform effort is intended to effect a predetermined, political outcome, irrespective of any actual consensus (or lack thereof) that emerges. We cannot know precisely how this process will unfold, of course, but there is considerable basis for concern. In particular, the FTC majority’s seriousness about engaging in apolitical, rigorous analysis must be called into question based on the inescapable pattern that has emerged from its recent conduct. In brief, the current FTC majority has undertaken a series of actions and adopted a series of governance policies that reveal an agency focused myopically on advancing a radical revision of antitrust law, as far as possible from the strictures of judicial review and without consultation from the antitrust community.[221]

This sense that politics, rather than evidence, is driving the current review process is further reinforced by the contents of the Draft Guidelines. Many of the claims therein demonstrate substantial bias and heavy reliance on contentious and unsupported assumptions. Indeed, the Draft Guidelines operate from the apparent assumptions (among others) that more enforcement is inherently better, that merger efficiencies are inconsistent with Section 7, and that distributional concerns should factor into merger review. The Draft Guidelines are overwhelmingly concerned with how the status quo may lead to false acquittals; the notions that authorities may err in the other direction, and that excessive enforcement may chill beneficial business activity, are conspicuously absent. Further, their treatment of those questions often relies on cases that are woefully outdated and not reflective of a massive amount of subsequent economic learning and case law. Citations to cases throughout the draft guidelines are often one-sided and omit or ignore contrary authority.[222] This is notably the case with the guidelines’ repeated citations to Brown Shoe[223] (15 citations), Philadelphia National Bank[224] (eight citations), and Procter and Gamble[225] (six citations)—three mid-20th century cases that are widely decried as being out of tune with modern economics and social science.[226] In short, in their pursuit of strong merger enforcement, the agencies are seemingly looking to reverse time and return to an old set of learnings from which courts, enforcers, and mainstream antitrust scholars have all steered away.

The net effect of these problems is to undermine confidence in the agency. That effect will carry over to the courts as they are confronted with the resulting guidelines, all the more so if the sanitizing effect of legitimate process is not applied going forward. Such undermining of confidence is a serious problem for effective guidelines, so much so that the FTC’s unremitting willingness to maneuver outside the bounds of established antitrust law and economics perhaps reveals a fundamental disdain for the opinion of the courts.


[1] U.S. Dep’t of Justice & F.T.C., Draft Merger Guidelines for Public Comment (Jul. 18, 2023), [hereinafter “Draft Merger Guidelines” or “Draft Guidelines”].

[2] Draft Merger Guidelines, supra note 1, at 31 (“The Agencies may assess whether a merger may substantially lessen competition or tend to create a monopoly based on a fact-specific analysis under any one or more of the Guidelines discussed above.”)

[3] John Asker et al, Comments on the January 2022 DOJ and FTC RFI on Merger Enforcement, available at at 15-6.

[4] Gregory J. Werden & Luke M. Froeb, Don’t Panic: A Guide to Claims of Increasing Concentration, 33 Antitrust 74, 74 (2018).

[5] Executive Office of the President, Council of Economic Advisers, Economic Report of the President 215 (Feb. 2020).

[6] See, e.g., Germán Gutiérrez and Thomas Philippon, Declining Competition and Investment in the U.S., NBER Working Paper No. 23583 (2017),; Simcha Barkai, Declining Labor and Capital Shares, 75 J. Fin. 2021 (2020).

[7] See Jan De Loecker, Jan Eeckhout & Gabriel Unger, The Rise of Market Power and the Macroeconomic Implications, 135 Q. J. Econ. 561 (2020).

[8] See David Autor, et al., The Fall of the Labor Share and the Rise of Superstar Firms, 135 Q. J. Econ. 635 (2020).

[9] Ryan A. Decker, John Haltiwanger, Ron S. Jarmin & Javier Miranda, Where Has All the Skewness Gone? The Decline in High-Growth (Young) Firms in the U.S, 86 Eur. Econ. Rev. 4, 5 (2016).

[10] Several papers simply do not find that the accepted story—built in significant part around the famous De Loecker and Eeckhout study, see De Loecker, et al., supra note 2 —regarding the vast size of markups and market power is accurate. Among other things, the claimed markups due to increased concentration are likely not nearly as substantial as commonly assumed. See, e.g., James Traina, Is Aggregate Market Power Increasing? Production Trends Using Financial Statements, Stigler Center Working Paper (Feb. 2018),; see also World Economic Outlook, April 2019 Growth Slowdown, Precarious Recovery, International Monetary Fund (Apr. 2019), Another study finds that profits have increased but are still within their historical range. See Loukas Karabarbounis & Brent Neiman, Accounting for Factorless Income, 33 NBER Macroeconomics Annual 167 (2018). And still another shows decreased wages in concentrated markets but also that local concentration has been decreasing over the relevant time period. See Kevin Rinz, Labor Market Concentration, Earnings, and Inequality, 57 J. Human Resources S251 (2022), available at

[11] See Harold Demsetz, Industry Structure, Market Rivalry, and Public Policy, 16 J. L. & Econ. 1 (1973). See also, e.g., Richard Schmalensee, Inter-Industry Studies of Structure and Performance, in 2 Handbook of Industrial Organization 951-1009 (Richard Schmalensee & Robert Willig, eds., 1989); William N. Evans, Luke M. Froeb & Gregory J. Werden, Endogeneity in the Concentration-Price Relationship: Causes, Consequences, and Cures, 41 J. Indus. Econ. 431 (1993); Steven Berry, Market Structure and Competition, Redux, FTC Micro Conference (Nov. 2017), available at; Nathan Miller, et al., On the Misuse of Regressions of Price on the HHI in Merger Review, 10 J. Antitrust Enforcement 248 (2022).

[12] Harold Demsetz, The Intensity and Dimensionality of Competition, in Harold Demsetz, The Economics of the Business Firm: Seven Critical Commentaries 137, 140-41 (1995).

[13] See Nathan Miller, et al., supra note 11.

[14] Steven Berry, Martin Gaynor & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33 J. Econ. Persp. 44, 48 (2019) (emphasis added). See also Jonathan Baker & Timothy F. Bresnahan, Economic Evidence in Antitrust: Defining Markets and Measuring Market Power, John M. Olin Program in L. & Econ., Stanford Law Sch. Working Paper 24 (Sep. 2006) (“The Chicago identification argument has carried the day, and structure-conduct-performance empirical methods have largely been discarded in economics.”).

[15] Gregory J. Werden & Luke M. Froeb, Don’t Panic: A Guide to Claims of Increasing Concentration, 33 Antitrust 74, 74 (2018).

[16] Christopher Garmon, The Accuracy of Hospital Screening Methods, 48 Rand J. Econ. 1068, 1070 (2017) (reviewing post-merger price changes for 28 hospital mergers, initially published as a BE Working Paper).

[17] Sharat Ganapati, Growing Oligopolies, Prices, Output, and Productivity, Working Paper (Oct. 6, 2018) at 13 (forthcoming in Am. Econ. J.: Microeconomics),

[18] Id. at 1.

[19] Sam Peltzman, Productivity and Prices in Manufacturing During an Era of Rising Concentration, Working Paper (May 10, 2018, rev. Feb. 3, 2021),

[20] Regarding geographic market area for hospitals, see, e.g., Joseph Farrell, et al., Economics at the FTC: Hospital Mergers, Authorized Generic Drugs, and Consumer Credit Markets, 39 Rev. Indus. Org. 271 (2011) (initially published as BE Working Paper); Garmon, The Accuracy of Hospital Screening Methods, supra note 16.

[21] W. Kip Viscusi, Joseph E. Harrington, Jr. & David E. M. Sappington, Economics of Regulation and Antitrust (2005) at 214-15.

[22] Mary Amiti & Sebastian Heise, U.S. Market Concentration and Import Competition, Federal Reserve Bank of New York, Working Paper No. 968 (May 2021), available at

[23] Esteban Rossi-Hansberg, Pierre-Daniel Sarte & Nicholas Trachter, Diverging Trends in National and Local Concentration, in NBER Macroeconomics Annual 2020, Vol. 35 (Martin Eichenbaum & Erik Hurst eds., 2020).

[24] Rossi-Hansberg, et al., Presentation: Diverging Trends in National and Local Concentration, slide 3, available at

[25] Rossi-Hansberg, et al., supra note 23, at 9.

[26] Id. at 14 (emphasis added).

[27] Ryan Decker, Discussion of “Diverging Trends in National and Local Concentration,” available at

[28] See Rinz, supra note 10. See also David Berger, Kyle Herkenhoff & Simon Mongey, Labor Market Power, 112 Am. Econ. Rev. 1147 (2022).

[29] C. Lanier Benkard, Ali Yurukoglu & Anthony Lee Zhang, Concentration in Product Markets, NBER, Working Paper No. 28745 (Apr. 2021), available at

[30] Id. at 4.

[31] Autor, et al., supra note 8. See David Autor, Christina Patterson & John Van Reenen, Local and National Concentration Trends in Jobs and Sales: The Role of Structural Transformation, NBER, Working Paper No. 31130 (Apr. 2023), available at

[32] Robert Kulick & Andrew Card, A Tale of Two Samples: Unpacking Recent Trends in Industrial Concentration, AEI Economic Policy Working Paper, available at

[33] Rossi-Hansberg, et al., supra note 23, at 27 (emphasis added).

[34] Chang-Tai Hsieh & Esteban Rossi-Hansberg, The Industrial Revolution in Services, Working Paper (May 12, 2021), available at

[35] Id. at 4 (“[T]he increase in national industry concentration documented by Autor et al. (2017) and others, is driven by the expansion in markets per firm by top firms.”).

[36] Id. at 6.

[37] Id. at 41-42.

[38] Berger, et al., supra note 28.

[39] Id. at 1148.

[40] See Autor, et al., supra note 8.

[41] Robert E. Hall, New Evidence on the Markup of Prices Over Marginal Costs and the Role of Mega-Firms in the US Economy, Working Paper 16 (Apr. 27, 2018) (emphasis added),

[42] Richard Schmalensee, Inter-Industry Studies of Structure and Performance, in 2 Handbook of Industrial Organization 951, 1000 (Richard Schmalensee & Robert Willig eds., 1989). See also Timothy F. Bresnahan, Empirical Studies of Industries with Market Power, in 2 Handbook of Industrial Organization 1011, 1053-54 (Richard Schmalensee & Robert Willig eds., 1989) (“[A]lthough the [most advanced empirical literature] has had a great deal to say about measuring market power, it has had very little, as yet, to say about the causes of market power.”); Frank H. Easterbrook, Workable Antitrust Policy, 84 Mich. L. Rev. 1696, 1698 (1986) (“Today it is hard to find an economist who believes the old structure-conduct-performance paradigm.”).

[43] Baker & Bresnahan, supra note 14, at 26.

[44] Chad Syverson, Macroeconomics and Market Power: Context, Implications, and Open Questions, 33 J. Econ. Persp. 23, 26 (2019).

[45] See Rinz, supra note 10.

[46] Id. at S259.

[47] Berger et al., supra note 28, at 1148.

[48] Elizabeth Weber Handwerker & Matthew Dey, Some Facts About Concentrated Labor Markets in the United States, Industrial Relations: A Journal of Economy and Society (2023), early view at

[49] Draft Guidelines at 12.

[50] See J.A. Schumpeter, Capitalism, Socialism and Democracy 72 (1976).

[51] See Kenneth Arrow, Economic Welfare and the Allocation of Resources for Invention, in The Rate and Direction of Inventive Activity: Economic and Social Factors 620 (Richard R. Nelson ed.,1962).

[52] See, e.g., Philippe Aghion, Nick Bloom, Richard Blundell, Rachel Griffith & Peter Howitt, Competition and Innovation: An Inverted-U Relationship, 120 Q. J. Econ. 701 (2005).

[54] See, e.g., Michael L. Katz & Howard A. Shelanski, Mergers and Innovation, 74 Antitrust L.J. 1, 22 (2007) (“The literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role.”).

[55] Dirk Auer, Structuralist Innovation: A Shaky Legal Presumption in Need of an Overhaul, CPI Antitrust Chronicle (Dec. 1, 2018).

[56] Amended Complaint, Fed. Trade Comm’n v. Facebook, Inc., No. 1:20-cv-03590 (D.D.C. filed Aug. 19, 2021), at 73, available at

[57] This is not to say that no economists believe that more competitive market structures generally lead to more innovation. Rather, these writings have (i) not garnered a wide consensus among the economics profession, and (ii) often rest on narrow assumptions that limit their application to specific settings. See, e.g., Carl Shapiro, Competition and Innovation: Did Arrow Hit the Bull’s Eye?, in The Rate and Direction of Inventive Activity Revisited 400 (Josh Lerner & Scott Stern eds., 2011). See also Ilya Segal & Michael D. Whinston, Antitrust in Innovative Industries, 97 Am. Econ. Rev. 1712 (2007). For instance, both of the above papers conclude that exclusivity, though it may increase an innovator’s ex-post profits, is unlikely to increase incentives to innovate because it prevents entry by more innovative rivals. To reach this conclusion, the authors notably assume that consumers who are bound by exclusivity contracts never find it profitable to purchase the innovation of a second firm (they assume that the innovation costs more to produce than the value to consumers of its incremental improvement). There is no reason to believe that this is, or is not, a good reflection of reality.

[58] Richard J. Gilbert, Innovation Matters: Competition Policy for the High-Technology Economy 116 (2020).

[59] Ronald L. Goettler & Brett R. Gordon, Does AMD Spur Intel to Innovate More?, 119 J. Pol. Econ. 1141, 1141 (2011).

[60] Mitsuru Igami, Estimating the Innovator’s Dilemma: Structural Analysis of Creative Destruction in the Hard Disk Drive Industry, 1981–1998, 125 J. Pol. Econ. 798, 798 (2017).

[61] Elena Patel & Nathan Seegert, Does Market Power Encourage or Discourage Investment? Evidence From the Hospital Market, 63 J.L. & Econ. 667, 667 (2020).

[62] See Aghion, et al., supra note 52, at 701-28. The theoretical aspects of this paper are a refinement of previous seminal research by some of these authors, which found that increased product market competition had a negative effect on innovation. See P. Aghion & P. Howitt, A Model of Growth Through Creative Destruction, 60 Econometrica 323 (1992).

[63] Id. at 707.

[64] See, e.g., Federico Etro, Competition, Innovation, and Antitrust: A Theory of Market Leaders and Its Policy Implications (2007) at 163-64.

[65] See Aghion, et al., supra note 52, at 714.

[66] Id. at 706.

[67] Id. at 702.

[68] Id.

[69] Eric Fruits, Justin (Gus) Hurwitz, Geoffrey A. Manne, Julian Morris & Alec Stapp, Static and Dynamic Effects of Mergers: A Review of the Empirical Evidence in the Wireless Telecommunications Industry, OECD Directorate for Financial and Enterprise Affairs Competition Committee, Global Forum on Competition, DAF/COMP/GF(2019)13 (Sep. 4, 2020), available at

[70] See, generally, Nicolai J. Foss & Peter G. Klein, Organizing Entrepreneurial Judgment (2012).

[71] See, e.g., J. Gregory Sidak & David J. Teece, Dynamic Competition in Antitrust Law, 5 J. Competition L. & Econ. 581 (2009).

[72] Asker, et al., supra note 3, at 34.

[73] Cristina Caffarra, Gregory S. Crawford & Tommaso Valletti, “How Tech Rolls”: Potential Competition and “Reverse” Killer Acquisitions, CPI Antitrust Chronicle (May 26, 2020) (“Large digital platforms in particular have exceptional abilities to pursue organic expansion but also opportunities to ‘roll up’ (willing) startups to ‘get there faster’, ‘buying’ instead of expending effort in rival innovation. Foregoing such effort is never good for consumers and society as a whole: while innovative effort is costly, it will often yield multiple providers and differentiated services, with socially desirable properties.”).

[74] See, e.g., Steven C. Salop, Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits, Working Paper (Apr. 28, 2021), available at; see also C. Scott Hemphill & Tim Wu, Nascent Competitors, 168 U. Pa. L. Rev. 1879 (2019).

[75] See, e.g., Salop, id. See also Giulio Federico, Gregor Langus & Tommaso Valletti, Horizontal Mergers and Product Innovation, 59 Int’l J. Indus. Org. 1 (2018).

[76] CMA, Completed Acquisition by Facebook, Inc. (now Meta Platforms, Inc.) of Giphy, Inc., Final Report (Nov. 30, 2021) at 223 (“We consider this evidence supports the view that GIPHY was an important player in a potentially growing segment of the display advertising market, and as such (taking account of the economic context, in particular the expected closeness of competition between Facebook and GIPHY) an important part of a dynamic competitive process with Facebook and others.”).

[77] See Caffarra, et al., supra note 73 (“What seems to be more frequent are cases where the acquisition may effectively extinguish the standalone effort of the buyer to expand in a particular space because the target immediately provides it with those capabilities. This covers a broader set of possibilities as platforms continue to expand into adjacent fields by buying functionalities, capabilities, even whole businesses (see the recent example of Google/Fitbit).”).

[78] FTC v. Meta Platforms, Inc., 2023 U.S. Dist. LEXIS 29832 (D.D.C. 2023); Complaint, Fed. Trade Comm’n v. Facebook, Inc., No. 1:20-cv-03590 (D.D.C. filed Jan. 13, 2021).

[79] Case No COMP/M.7217—Facebook / WhatsApp (Oct. 3, 2014), at 61.

[80] Jessica L. Rich, Letter Reminding Both Firms That WhatsApp Must Continue To Honor Its Promises To Consumers With Respect to the Limited Nature of the Data It Collects, Maintains, and Shares With Third Parties (Apr. 10, 2014), available at

[81] CMA Case ME/5525/12—Anticipated acquisition by Facebook Inc of Instagram Inc (Aug. 22, 2012).

[82] Amended Complaint, Fed. Trade Comm’n v. Facebook, Inc., No. 1:20-cv-03590 (D.D.C. filed Aug. 19, 2021), at 26-41, available at

[83] Steven C. Salop, A Suggested Revision of the 2020 Vertical Merger Guidelines, Georgetown Law Faculty Publications and Other Works No. 2381 (Dec. 2021),

[84] D. Bruce Hoffman, Acting Dir., Bureau of Competition, Fed. Trade Comm’n, Remarks at the Credit Suisse 2018 Washington Perspectives Conference: Vertical Merger Enforcement at the FTC 4 (Jan. 10, 2018), available at

[85] In some cases, however, such as that of a failing firm, the competing firm might have exited the market even absent the merger.

[86] Hoffman, supra note 84.

[87] Id.

[88] Id.


[90] Christine S. Wilson, Comm’r, Fed. Trade Comm’n, Keynote Address at the GCR Live 8th Annual Antitrust Law Leaders Forum: Vertical Merger Policy: What Do We Know and Where Do We Go? (Feb. 1, 2019) at 4 & 9, available at

[91] Id.

[92] Hoffman, supra note 84.

[93] Salop, supra note 83.

[94] Competition and Consumer Protection in the 21st Century; FTC Hearing #5: Vertical Merger Analysis and the Role of the Consumer Welfare Standard in U.S. Antitrust Law; Before the FTC, Presentation Slides at 15 (Nov. 1, 2018), available at [hereinafter “Salop, Vertical Merger Slides”] (emphasis added). See also Serge Moresi & Steven C. Salop, When Vertical is Horizontal: How Vertical Mergers Lead to Increases in “Effective Concentration,” 59 Rev. Indus. Org. 177 (2021) (“there is an inherent loss of an indirect competitor that supported the non-merging competitors in the pre-merger world, which leads to reduced competition when there is an input foreclosure concern”).

[95] Id. (emphasis added).

[96] Id. (emphasis added).

[97] Id. (emphasis added).

[98] Salop, supra note 83.

[99] USDA, Citrus Fruits 2021 Summary (Sep. 2021), available at

[100] Chad Miles, After Troubling New Forecast, Florida Citrus Advocate Says Industry Is “At A Crossroads,” WFTS (Jan. 24, 2022),

[101] David Reiffen & Michael Vita, Comment: Is There New Thinking on Vertical Mergers? 63 Antitrust L. J. 917, 920 (1995) (“Some horizontal mergers do not create efficiencies; they are profitable only because of the post-merger anticompetitive conduct made possible by the transaction. By contrast, the primary lesson of both the older literature on vertical integration, as well as the newer ‘post-Chicago’ literature, is that this trade-off invariably exists for all vertical transactions that threaten to reduce consumer welfare.”). See also Joseph J. Spengler, Vertical Integration and Antitrust Policy, 58 J. Pol. Econ. 347 (1950); Robert H. Bork, The Antitrust Paradox: A Policy At War With Itself 219 (1978); Richard A. Posner, Antitrust Law 228 (1976).

[102] See, e.g., Michael A. Salinger, Vertical Mergers and Market Foreclosure, 103 Q.J. Econ. 345 (1988).

[103] Reiffen & Vita, supra note 101, at 921.

[104] Id. (“High price-cost margins increase the size of gain to the integrated firm as well as the potential for anticompetitive input price increases.… [And] the post-Chicago literature suggests that vertical mergers that occur in the presence of high premerger concentration are likely to result in lower prices to consumers.”).

[105] Cooper, et al., supra note 108, at 645.

[106] Jonathan B. Baker, Nancy L. Rose, Steven C. Salop, & Fiona Scott Morton, Five Principles for Vertical Merger Enforcement Policy, Georgetown Law Faculty Pub. and Other Works, Working Paper No. 2148 (2019), at 8 (emphasis added).

[107] Reiffen & Vita, supra note 101, at 920.

[108] See, e.g., Cooper, et al., supra note 108, at 642-45 (assessing the vast majority of post-Chicago theories of vertical harm under the heading “softening horizontal competition”).

[109] See, generally, Salop, supra note 83.

[110] Id.

[111] Id.

[112] See Dissenting Statement of Commissioner Joshua D. Wright, In the Matter of Nielsen Holdings N.V. and Arbitron Inc., FTC File No. 131-0058 (Sep. 20, 2013), at note 3 (“Nevertheless, competitive effects in actual potential competition cases still are more difficult, on balance, to assess than typical merger cases because the agency must predict whether a party is likely to enter the relevant market absent the merger. It is because of this uncertainty and the potential for conjecture that the courts and agencies have cabined the actual potential competition doctrine by, for instance, applying a heightened standard of proof for showing a firm likely would enter the market absent the merger.”) (citing B.A.T. Indus., 104 F.T.C. 852, 926-28 (1984) (applying a “clear proof” standard)).

[113] See Mergers That Eliminate Potential Competition, in Research Handbook on the Economics of Antitrust Law 111 (Einer Elhauge ed., 2012) (“All twelve studies [of airline markets] find that potential competition results in lower prices by incumbent carriers, in ten cases by statistically significant amounts. Except as noted below, the amounts range between one quarter of one percent to about two percent, and in all cases are less than the amount of the price decline from one additional actual competitor, specifically, from one eighth to one third as large.”).

[114] Id.

[115] Case No M.9660—Google/Fitbit, C (2020) 9105 final (Dec. 12, 2020), at 398.

[116] Geoffrey A. Manne, Sam Bowman & Dirk Auer, Technology Mergers and the Market for Corporate Control, 86 Mo. L. Rev. 1047 (2021). This is because the availability of mergers as an exit strategy has been shown to increase investments by firms. Regarding the effect of mergers on investment, see, e.g., Gordon M. Phillips & Alexei Zhdanov, Venture Capital Investments and Merger and Acquisition Activity Around the World, NBER, Working Paper No. 24082 (Nov. 2017), (“We examine the relation between venture capital (VC) investments and mergers and acquisitions (M&A) activity around the world. We find evidence of a strong positive association between VC investments and lagged M&A activity, consistent with the hypothesis that an active M&A market provides viable exit opportunities for VC companies and therefore incentivizes them to engage in more deals.”). And increased M&A activity in the pharmaceutical sector has not led to decreases in product approvals; rather, quite the opposite has happened. See, e.g., Barak Richman, et al., Pharmaceutical M&A Activity: Effects on Prices, Innovation, and Competition, 48 Loyola U. Chi. L. J. 787, 799 (2017) (“Our review of data measuring pharmaceutical innovation, however, tells a different story. First, even as merger activity in the United States increased over the past ten years, there has been a steady upward trend of FDA approvals of new molecular entities (“NMEs”) and new biological products (“BLAs”). Hence, the industry has been highly successful in bringing new products to the market.”).

[117] See Salop, supra note 83.

[118] In this section, we focus on Salop’s comments because they represent a common perspective. As Salop himself points out: “I do not think that any of the analysis in the article is new. I expect that all the points have been made elsewhere by others and myself.”

[119] For a simple example, consider a Cournot oligopoly model with an industry inverse demand curve of P(Q) = 1 - Q and constant marginal costs that are normalized to zero. In a market with N symmetric sellers, each seller earns 1/(N+1)^2 in profits. A monopolist thus makes a profit of 1/4, while a duopolist can expect to earn a profit of 1/9. If there are 3 potential entrants plus the incumbent, the monopolist must pay each entrant at least the duopoly profit of 1/9 to stay out, for a total of 3*(1/9) = 1/3, which exceeds the monopoly profit of 1/4. In the Nash/Cournot equilibrium, the incumbent therefore will not acquire any of the competitors, since it is too costly to keep them all out.
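The arithmetic in this footnote can be checked with a short script. This is a minimal sketch of the standard symmetric Cournot model, not part of the original comment; the function name cournot_profit is ours:

```python
from fractions import Fraction

def cournot_profit(n: int) -> Fraction:
    # With inverse demand P(Q) = 1 - Q and zero marginal cost, each of n
    # symmetric Cournot firms produces q = 1/(n+1), the price is 1/(n+1),
    # and per-firm profit is therefore 1/(n+1)^2.
    return Fraction(1, (n + 1) ** 2)

monopoly_profit = cournot_profit(1)   # 1/4
duopoly_profit = cournot_profit(2)    # 1/9

# Cost of buying out all three potential entrants at the duopoly profit each:
buyout_cost = 3 * duopoly_profit      # 3 * (1/9) = 1/3

# The buyout costs more than the monopoly profit, so acquiring every
# rival to preserve the monopoly is unprofitable.
assert buyout_cost > monopoly_profit
```

Exact rational arithmetic (via Fraction) avoids any floating-point ambiguity in the 1/3 versus 1/4 comparison.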

[120] Manne, Bowman, & Auer, supra note 116, at 1080.

[121] For vertical mergers, the welfare-enhancing effects are well-established. See, e.g., Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence, 45 J. ECON. LIT. 629, 677 (2007) (“In spite of the lack of unified theory, overall a fairly clear empirical picture emerges. The data appear to be telling us that efficiency considerations overwhelm anticompetitive motives in most contexts. Furthermore, even when we limit attention to natural monopolies or tight oligopolies, the evidence of anticompetitive harm is not strong.”). See also Global Antitrust Institute, Comment Letter on Federal Trade Commission’s Hearings on Competition and Consumer Protection in the 21st Century, Vertical Merger, Geo. Mason Law & Econ. Research Paper No. 18-27, 8–9 (2018), (“In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. The results continue to suggest that the modern antitrust approach to vertical mergers should reflect the empirical reality that vertical relationships are generally procompetitive.”). Along similar lines, empirical research casts doubt on the notion that antitrust merger enforcement (in marginal cases) raises consumer welfare. The effects of horizontal mergers are, empirically, less well documented. See, e.g., Robert W. Crandall & Clifford Winston, Does Antitrust Policy Improve Consumer Welfare? Assessing the Evidence, 17 J. ECON. PERSP.
3, 20 (2003) (“We can only conclude that efforts by antitrust authorities to block particular mergers or affect a merger’s outcome by allowing it only if certain conditions are met under a consent decree have not been found to increase consumer welfare in any systematic way, and in some instances the intervention may even have reduced consumer welfare.”). While there is some evidence that horizontal mergers can reduce consumer welfare, at least in the short run, see, e.g., Gregory J. Werden, et al., The Effects of Mergers on Price and Output: Two Case Studies from the Airline Industry, 12 MGMT. DECIS. ECON. 341 (1991), the long-run effects appear to be strongly positive. See, e.g., Dario Focarelli & Fabio Panetta, Are Mergers Beneficial to Consumers? Evidence from the Market for Bank Deposits, 93 AM. ECON. REV. 1152, 1152 (2003) (“We find strong evidence that, although consolidation does generate adverse price changes, these are temporary. In the long run, efficiency gains dominate over the market power effect, leading to more favorable prices for consumers.”). See, generally, Michael C. Jensen, Takeovers: Their Causes and Consequences, 2 J. ECON. PERSP. 21 (1988). Some related literature similarly finds that horizontal merger enforcement has harmed consumers. See B. Espen Eckbo & Peggy Wier, Antimerger Policy Under the Hart-Scott-Rodino Act: A Reexamination of the Market Power Hypothesis, 28 J.L. & ECON. 119, 121 (1985) (“In sum, our results do not support the contention that enforcement of Section 7 has served the public interest. While it is possible that the government’s merger policy has deterred some anticompetitive mergers, the results indicate that it has also protected rival producers from facing increased competition due to efficient mergers.”); B. Espen Eckbo, Mergers and the Value of Antitrust Deterrence, 47 J. FINANCE 1005, 1027–28 (1992) (rejecting “the market concentration doctrine on samples of both U.S. and Canadian mergers. 
By implication, the results also reject the effective deterrence hypothesis. The evidence is, however, consistent with the alternative hypothesis that the horizontal mergers in either of the two countries were expected to generate productive efficiencies”).

[122] Asker, et al., supra note 3, at 34.

[123] See Baker, et al., supra note 106, at 13 (“[Treating vertical mergers more permissively than horizontal mergers, even in concentrated markets] would be tantamount to presuming that vertical mergers benefit competition regardless of market structure. However, such a presumption is not warranted for vertical mergers in the oligopoly markets that typically prompt enforcement agency review.”); Competition and Consumer Protection in the 21st Century: FTC Hearing #5: Vertical Merger Analysis and the Role of the Consumer Welfare Standard in U.S. Antitrust Law; FTC Transcript 164 (Nov. 1, 2018) [hereinafter “FTC Hearing #5”] at 14-15 (statement of Steven Salop, Professor, Georgetown University Law Center). See also Cooper, et al., supra note 108, at 643-48 (discussing such “post-Chicago” scholarship).

[124] Salop, Vertical Merger Slides, supra note 94, at 14.

[125] See Lafontaine & Slade, supra note 121. See also Cooper, et al., supra note 108; Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems, in Report: The Pros and Cons of Vertical Restraints 22, 36 (2008) (“[Table 1 in this paper] indicates that voluntarily adopted restraints are associated with lower costs, greater consumption, higher stock returns, and better chances of survival.”).

[126] See, e.g., Salop, Vertical Merger Slides, supra note 94, at 17 (dismissing Lafontaine & Slade and attempting to adduce a few newer studies as contradictory and dispositive).

[127] It is fair to point out that, indeed, many of the studies look at the effects of vertical restraints rather than vertical mergers, per se. But such studies remain instructive, given that the theories of harm attributed to vertical mergers rest on precisely the sorts of conduct at issue in these studies. If perfect alignment of facts were required, no economic theory or evidence would ever be relevant.

[128] Lafontaine & Slade, supra note 121, at 663.

[129] FTC Hearing #5 (statement of Francine Lafontaine, Professor, Michigan-Ross), supra note 123, at 93.

[130] Margaret E. Slade, Vertical Integration and Mergers: Empirical Evidence and Evaluation Methods, OECD (Jun. 7, 2019),

[131] Id. at 10-12.

[132] Baker, et al., supra note 106, at 11.

[133] Global Antitrust Institute, Comment at the Fed. Trade Comm’n Hearings on Competition and Consumer Protection in the 21st Century, The Consumer Welfare Standard in Antitrust Law (Sep. 7, 2018).

[134] Salop, Vertical Merger Slides, supra note 94, at 25. For a more comprehensive assessment of the recent empirical scholarship (finding the same overall results that we do), see id.

[135] Fernando Luco & Guillermo Marshall, Vertical Integration With Multiproduct Firms: When Eliminating Double Marginalization May Hurt Consumers (Jan. 15, 2018),

[136] Id. at 22.

[137] FTC Hearing #5 (statement of Francine Lafontaine, Professor, Michigan-Ross), supra note 123, at 88.

[138] Justine S. Hastings & Richard J. Gilbert, Market Power, Vertical Integration, and the Wholesale Price of Gasoline, 53 J. Indus. Econ. 469 (2005).

[139] Id. at 471.

[140] Christopher T. Taylor, Nicolas M. Kreisle, & Paul R. Zimmerman, Vertical Relationships and Competition in Retail Gasoline Markets: Empirical Evidence from Contract Changes in Southern California: Comment, 100 Am. Econ. Rev. 1269 (2010).

[141] Id. at 1272-76.

[142] Justine Hastings, Vertical Relationships and Competition in Retail Gasoline Markets: Empirical Evidence from Contract Changes in Southern California, 94 Am. Econ. Rev. 317 (2004).

[143] Justine Hastings, Vertical Relationships and Competition in Retail Gasoline Markets: Empirical Evidence from Contract Changes in Southern California: Reply, 100 Am. Econ. Rev. 1227 (2010).

[144] Gregory S. Crawford, Robin S. Lee, Michael D. Whinston, & Ali Yurukoglu, The Welfare Effects of Vertical Integration in Multichannel Television Markets, 86 Econometrica 891 (2018).

[145] Slade, supra note 130, at 6.

[146] Crawford, et al., supra note 144, at 893-94 (emphasis added).

[147] Competition and Consumer Protection in the 21st Century: FTC Hearing #3: Multi-sided Platforms, Labor Markets, and Potential Competition; FTC Transcript 101 (Oct. 17, 2018) (statement of Robin Lee, Professor, Harvard University), available at  (“[O]ur key findings are that, on average, across channels and simulations, there is a net consumer welfare gain from integration. Don’t get me wrong, there are significant foreclosure effects, and rival distributors are harmed, but these negative effects are oftentimes offset by sizeable efficiency gains. Of course, this is an average. It masks considerable heterogeneity. When complete exclusion occurs, which happens both in our simulations and in the data some of the times, consumer welfare is actually harmed.”).

[148] Ayako Suzuki, Market Foreclosure and Vertical Merger: A Case Study of the Vertical Merger Between Turner Broadcasting and Time Warner, 27 Int’l J. of Indus. Org. 532 (2009).

[149] Id. at 542.

[150] Id.

[151] Id.

[152] Brown Shoe, 370 U.S. at 329 (emphasis added).

[153] FTC v. Microsoft Corporation et al., No. 23-cv-02880-JSC (N.D. Cal. Jul. 10, 2023), available at

[154] Syverson, supra note 44, at 27.

[155] Draft Guidelines at 34.

[156] Id. at 26.

[157] United States v. Bertelsmann SE & Co. KGaA, No. CV 21-2886-FYP, 2022 WL 16949715 (D.D.C. Nov. 15, 2022).

[158] Id. (“The defendants do not dispute that if advances are significantly decreased, some authors will not be able to write, resulting in fewer books being published, less variety in the marketplace of ideas, and an inevitable loss of intellectual and creative output.”).

[159] See, e.g., Roger G. Noll, Buyer Power and Economic Policy, 72 Antitrust L.J. 589, 589 (2005) (“[B]uyer power arises from monopsony (one buyer) or oligopsony (a few buyers), and is the mirror image of monopoly or oligopoly.”); Id. at 591 (“Asymmetric treatment of monopoly and monopsony has no basis in economic analysis.”).

[160] Of course, monopoly markets in intermediate products (i.e., products sold not to end users but to manufacturers who use them as inputs for products that are, in turn, sold to end users) may indeed sit in the same place in the supply chain as the typical monopsony market. Some, but not all, of the complications associated with monopsony analysis are relevant to these monopoly situations, as well.

[161] Ioana Marinescu & Herbert J. Hovenkamp, Anticompetitive Mergers in Labor Markets, 94 Indiana L.J. 1031, 1034 (2019) (“While the use of section 7 to pursue mergers among buyers is well established, there is relatively little case law.”).

[162] Id. at 1034.

[163] For purposes of this discussion, “monopoly” refers to any merger that would increase market power by a seller in a product market and “monopsony” refers to any merger that would increase market power by the buyer in an input market.

[164] Some efficiency-enhancing mergers will be identifiable, of course. For example, if the merger raises quantities and prices for all inputs, that must be efficiency enhancing. The problem, as always, is with the hard cases.

[165] See C. Scott Hemphill & Nancy L. Rose, Mergers that Harm Sellers, 127 Yale L.J. 2078 (2018).

[166] In theory, one could force a monopsony model to be identical to monopoly. The key difference lies in the standard form of the models that economists use. The standard monopoly model looks at one output good at a time, while the standard factor-demand model uses two inputs, which introduces a trade-off between, say, capital and labor. See Sonia Jaffe, Robert Minton, Casey B. Mulligan & Kevin M. Murphy, Chicago Price Theory (2019) at Ch. 10. One could generate harm from an efficiency for monopoly (as we show for monopsony) by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.

[167] John Asker & Volker Nocke, Collusion, Mergers, and Related Antitrust Issues, NBER Working Paper 29175 (Aug. 2021), at 42,

[168] Roman Inderst & Christian Wey, Countervailing Power and Dynamic Efficiency, 9 J. Eur. Econ. Ass’n 702, 715 (2011).

[169] FTC v. Whole Foods Mkt., Inc., 548 F.3d 1028, 1063 (D.C. Cir. 2008). See also Geoffrey Manne, Premium, Natural, and Organic Bullsh**t, Truth on the Market (Jun. 6, 2007), (“In other words, there is a serious risk of conflating a ‘market’ for business purposes with an actual antitrust-relevant market.”).

[170] Executive Order 14036 on Promoting Competition in the American Economy, § 2(g) (Jul. 9, 2021) (“This order recognizes that a whole-of-government approach is necessary to address overconcentration, monopolization, and unfair competition in the American economy.”).

[171] Ioana Marinescu & Herbert J. Hovenkamp, Anticompetitive Mergers in Labor Markets, 94 Indiana L.J. 1031, 1050 (2019) (“The analogous question for considering monopsony in the labor market would be to identify the smallest labor market for which a hypothetical monopsonist in that market would find profitable to implement a “small and significant but non-transitory reduction in wages” (SSNRW)”).

[172] Id. at 1062-63.

[173] As Marinescu & Hovenkamp note (attributing the point to Hemphill & Rose), “[i]n this case, there is merely a transfer away from workers and towards the merging firms. Yet. . . such a transfer is a harm for antitrust law as it results from a reduction in competition.” Id. at 1062 (citing Hemphill & Rose, supra note 165, at 2104-05).

[174] See, e.g., Kartell v. Blue Shield of Mass., Inc., 749 F.2d 922 (1st Cir. 1984). See also Steven C. Salop, Question: What Is the Real and Proper Antitrust Welfare Standard? Answer: The True Consumer Welfare Standard, 22 Loy. Consumer L. Rev. 336, 342 (2010) (“However, Judge Breyer treated Blue Cross essentially as an agent for the customers it insured, rather than as an intermediary firm that purchased inputs and sold outputs as a monopolistic reseller. The court apparently assumed (perhaps wrongfully) that Blue Cross would pass on its lower input costs to its customers in the form of lower insurance premiums.”).

[175] See Jan M. Rybnicek & Joshua D. Wright, Outside In or Inside Out?: Counting Merger Efficiencies Inside and Out of the Relevant Market, in William E. Kovacic: An Antitrust Tribute Vol. II (2014) at *10, SSRN version available at (“Despite the incorporation of efficiencies analysis into modern merger evaluation, and the advances in economics that allow efficiencies to be identified and calculated more accurately than at the time of Philadelphia National Bank, antitrust doctrine in the United States still supports a regime that fails to take into account efficiencies arising outside of the relevant market.”).

[176] U.S. Dep’t of Justice & Fed. Trade Comm’n, Commentary on the Horizontal Merger Guidelines (2006), available at See also U.S. Dep’t of Justice & Fed. Trade Comm’n, Horizontal Merger Guidelines (1992, rev. 1997) § 4 at n.36 (“In some cases, merger efficiencies are “not strictly in the relevant market, but so inextricably linked with it that a partial divestiture or other remedy could not feasibly eliminate the anticompetitive effect in the relevant market without sacrificing the efficiencies in the other market(s).”).

[177] See, e.g., Brunswick Corp. v. Pueblo Bowl-O-Mat, Inc., 429 U.S. 477, 487 (1977) (“Every merger of two existing entities into one, whether lawful or unlawful, has the potential for producing economic readjustments that adversely affect some persons. But Congress has not condemned mergers on that account; it has condemned them only when they may produce anticompetitive effects.”). See also Robert H. Bork, The Antitrust Paradox: A Policy at War with Itself (2021) at 110 (“Those who continue to buy after a monopoly is formed pay more for the same output, and that shifts income from them to the monopoly and its owners, who are also consumers. This is not dead-weight loss due to restriction of output but merely a shift in income between two classes of consumers. The consumer welfare model, which views consumers collectively, does not take this income effect into account.”).

[178] Hemphill & Rose distinguish monopsony power from increased buyer leverage, which does not result in a deadweight loss but is simply a redistribution from sellers to buyers. Leverage will be partially passed through to consumers as lower prices. Standard monopsony increases in bargaining power will not generate lower prices, since “[a]n increase in monopsony power increases the firm’s perceived marginal cost and reduces output. Far from lowering output prices, the increased monopsony power raises price in output markets (if the firm faces downward sloping demand for its output) or else leaves it unchanged.” Hemphill & Rose, supra note 165, at 2106.

[179] Roman Inderst & Greg Shaffer, Buyer Power in Merger Control, in ABA Antitrust Section Handbook, Issues in Competition Law and Policy (Wayne Dale Collins, ed. 2008) at 1611, 1612-13 (emphasis added).

[180] Statement of the Federal Trade Commission Concerning the Proposed Acquisition of Medco Health Solutions by Express Scripts, Inc., FTC File No. 111-0210, at 7 (Apr. 2, 2012), available at

[181] Salop, supra note 174, at 342 (“Efficiency benefits count under the true consumer welfare standard, but only if there is evidence that enough of the efficiency benefits pass through to consumers so that consumers (i.e., the buyers) would directly benefit on balance from the conduct.”).

[182] The same analysis can be applied to a hypothetical merger between, say, Kroger and Trader Joe’s in which we assume for the sake of argument there is no increase in seller power, but there is an increase in buyer power.

[183] It is worth noting that, although the analogy between Blue Cross and Kroger here seems quite apt and powerful, there can be little doubt that Salop would not condone this mode of analysis in such a case against Kroger. Whether (if correct) that is a function of one person’s idiosyncratic preferences or an expression of the complication inherent in assessing consumer welfare in monopsony cases is uncertain.

[184] See, e.g., Gregory J. Werden, Monopsony and the Sherman Act: Consumer Welfare in a New Light, 74 Antitrust L.J. 707, 735 (2007) (“Predatory pricing that excludes competitors and results in monopsony is condemned by the Sherman Act, just as the Act condemns predatory pricing that excludes competitors and obtains a monopoly.… Protecting consumer welfare is the principal goal of the Sherman Act, but it is only a goal: The Sherman Act protects the people by protecting the competitive process. The competitive process could not be undermined any more clearly than it is when competing buyers conspire to eliminate the competition among themselves, and it matters not one whit under the Sherman Act whether the conspiracy threatens the welfare of conspirators’ customers or the welfare of end users. It is enough that the conspiracy threatens the welfare of the trading partners exploited by the conspiracy. Harm to them implies harm to people protected by the Sherman Act.”).

[185] See Illinois Brick Co. v. Illinois, 431 U.S. 720 (1977); Hanover Shoe, Inc. v. United Shoe Machinery Corp., 392 U.S. 481 (1968).

[186] Ohio v. Am. Express Co., 138 S. Ct. 2274 (2018).

[187] See Jonathan H. Adler, Conservation Through Collusion: Antitrust as an Obstacle to Marine Resource Conservation, 61 Wash. & Lee L. Rev. 3 (2004) (“The purported aim of antitrust law is to improve consumer welfare by proscribing actions and arrangements that reduce output and increase prices. Conservation aims to improve human welfare by maximizing the long-term productive use of natural resources, an aim that often requires limiting consumption to sustainable levels. While such conservation measures might increase prices in the short-run, when successful they enhance consumer welfare by increasing long-term production and ensuring the availability of valued resources over time.”).

[188] See Credit Suisse Securities (USA) LLC v. Billing, 551 U.S. 264, *19-*20, *1-*2 (2007) (holding that where “(1) an area of conduct [is] squarely within the heartland of… regulations; (2) [there is] clear and adequate… authority to regulate; (3) [there is] active and ongoing agency regulation; and (4) [there is] a serious conflict between the antitrust and regulatory regimes. . . , [such] laws are ‘clearly incompatible’ with the application of the antitrust laws…[,]” thus “implicitly precluding the application of the antitrust laws to the conduct alleged”). See also U.S. v. Philadelphia Nat. Bank, 374 U.S. 321, 370-74 (1963) (Harlan, J., dissenting) (“Sweeping aside the ‘design fashioned in the Bank Merger Act’ as ‘predicated upon uncertainty as to the scope of § 7 of the Clayton Act’ (ante, p. 349), the Court today holds § 7 to be applicable to bank mergers and concludes that it has been violated in this case. I respectfully submit that this holding, which sanctions a remedy regarded by Congress as inimical to the best interests of the banking industry and the public, and which will in large measure serve to frustrate the objectives of the Bank Merger Act, finds no justification in either the terms of the 1950 amendment of the Clayton Act or the history of the statute.”).

[189] See William M. Landes & Richard A. Posner, Market Power in Antitrust Cases, 94 Harv. L. Rev. 937, 938 (1981) (“The standard method of proving market power in antitrust cases involves first defining a relevant market in which to compute the defendant’s market share, next computing that share, and then deciding whether it is large enough to support an inference of the required degree of market power.”); Louis Kaplow, Why (Ever) Define Markets?, 124 Harv. L. Rev. 437, 515 (2010) (“The market definition / market share paradigm plays a prominent role in competition law regimes. Its central justification is that it offers a useful means of making inferences about market power, indeed one that is easier or more reliable than other means of market power determination. Upon analysis, however, it appears that this widely accepted view is always false….”).

[190] Complaint, Fed. Trade Comm’n v. Facebook, Inc., No. 1:20-cv-03590 (D.D.C. filed Jan. 13, 2021), at 19. Consider the following passage from the FTC’s complaint: “Direct network effects are a significant barrier to entry into personal social networking. Specifically, because a core purpose of personal social networking is to connect and engage with personal connections, it is very difficult for a new entrant to displace an established personal social network in which users’ friends and family already participate. A potential entrant in personal social networking services also would have to overcome users’ reluctance to incur high switching costs.” This analysis fails to examine whether users can and do coordinate among themselves to join rival networks. For a detailed discussion of these considerations, see, e.g., Daniel F. Spulber, Consumer Coordination in the Small and in the Large: Implications for Antitrust in Markets With Network Effects, 4 J. Competition L. & Econ. 207 (2008). See also Dirk Auer, What Zoom Can Tell Us About Network Effects and Competition Policy in Digital Markets, Truth on the Market (Apr. 14, 2019),

[191] Complaint, Fed. Trade Comm’n v. Facebook, Inc., id. at 48.

[192] Vivek Bhattacharya, Gaston Illanes & David Stillerman, Merger Effects and Antitrust Enforcement: Evidence from U.S. Retail, NBER Working Paper 31123 (2023), available at; Mert Demirer & Omer Karaduman, Do Mergers and Acquisitions Improve Efficiency: Evidence from Power Plants, available at; Celine Bonnet & Jan Philip Schain, An Empirical Analysis of Mergers: Efficiency Gains and Impact on Consumer Prices, 16 J. Comp. L. & Econ. 1 (2020).

[193] Draft Guidelines, at 33.

[194] 2010 HMGs, at 30.

[195] Draft Guidelines, at 34.

[196] See, e.g., Reiter v. Sonotone Corp., 442 U.S. 330 (1979); Continental T.V., Inc. v. GTE Sylvania Inc., 433 U.S. 36 (1977); United States v. General Dynamics, 415 U.S. 486 (1974).

[197] See, e.g., Matt Stoller, The Secret Plot to Unleash Corporate Power, Big (Apr. 8, 2022),

[198] Noah Joshua Phillips & Christine S. Wilson, Comm’rs, Fed. Trade Comm’n, Statement Regarding the Request for Information on Merger Enforcement (Jan. 18, 2022) at 1-2, available at

[199] See U.S. Dep’t of Justice & F.T.C., Horizontal Merger Guidelines (2010), available at [hereinafter “2010 HMGs”].

[200] Carl Shapiro & Howard Shelanski, Judicial Response to the 2010 Horizontal Merger Guidelines, 58 Rev. Indus. Org. 51 (2021).

[201] Jan M. Rybnicek & Laura C. Onken, A Hedgehog in Fox’s Clothing: The Misapplication of GUPPI Analysis, 23 Geo. Mason L. Rev. 1187, 1190 (2016) (“This paper argues that the GUPPI regularly fails to live up to its promise for two principal reasons: (1) the GUPPI all too often is based on inaccurate or incomplete data and (2) there is insufficient guidance to allow the business community and the antitrust bar to draw reliable conclusions about how the GUPPI will be incorporated into the agencies’ enforcement decisions.”).

[202] Adam Di Vincenzo, Brian Ryoo, & Joshua Wade, Refining, Not Redefining, Market Definition: A Decade Under the 2010 Horizontal Merger Guidelines, Antitrust Source (Aug. 2020) at 11, available at (“Market definition has retained a central and often outcome-determinative role in courts’ merger analysis beyond the presumption of anticompetitive effects; in this respect, market definition is as important today as it was prior to the 2010 Guidelines.”).

[203] Fed. Trade Comm’n, Federal Trade Commission Withdraws Vertical Merger Guidelines and Commentary (Sep. 15, 2021),

[204] Id. (“The guidance documents… include unsound economic theories that are unsupported by the law or market realities.”).

[205] As the dissent from the withdrawal of the 2020 VMGs by Commissioners Phillips and Wilson notes, “the FTC leadership continues the disturbing trend of pulling the rug out from under honest businesses and the lawyers who advise them, with no explanation and no sound basis of which we are aware . . . , with the minimum notice required by law, virtually no public input, and no analysis or guidance.” Noah Joshua Phillips & Christine S. Wilson, Comm’rs, Fed. Trade Comm’n, Dissenting Statement Regarding the Commission’s Rescission of the 2020 FTC/DOJ Vertical Merger Guidelines and the Commentary on Vertical Merger Enforcement (Sep. 15, 2021) at 1. See also id. at 6 (“The majority could have waited to rescind the 2020 Guidelines until they had something with which to replace it. It appears they prefer sowing uncertainty in the market and arrogating unbridled authority to condemn mergers without reference to law, agency practice, economics, or market realities.”).

[206] Carl Shapiro & Herbert Hovenkamp, How Will the FTC Evaluate Vertical Mergers?, ProMarket (Sep. 23, 2021), Other choice words used by Shapiro & Hovenkamp in their extremely short essay to describe the FTC majority’s asserted basis for withdrawing the 2020 Guidelines include: “baffling,” “reli[ant] on specious economic arguments,” “demonstrably false,” “ignor[ing] relevant expertise,” “contrary to a broad consensus among economists going back at least to. . . 1968,” “flatly inconsistent with the Horizontal Merger Guidelines,” and “likely to cause real harm.” Id.

[207] See, generally, Geoffrey A. Manne, Kristian Stout & Eric Fruits, The Fatal Economic Flaws of the Contemporary Campaign Against Vertical Integration, 68 Kansas L. Rev. 923 (2020).

[208] Doug Melamed, in Antitrust Policy and Its Different Perspectives: Where Do the Antitrust Professionals Agree and Disagree? (interview by Alden Abbott with Doug Melamed and Joshua Wright), The Bridge Podcast (Apr. 19, 2022), transcript available at (“I will say I think the request for information that the agencies put out is a little worrisome because I think it’s very tendentious. At the outset, they say, ‘We’re interested in information that will help us strengthen merger enforcement.’ I would have thought the appropriate question would be information that would help us improve merger enforcement. They ask for information about false negatives, they don’t ask for information about false positives.”).

[209] Press Release, Federal Trade Commission and Justice Department Seek to Strengthen Enforcement Against Illegal Mergers (Jan. 18, 2022), (emphasis added).

[210] Christine A. Varney, Assistant Att’y Gen., Antitrust Div., U.S. Dept. of Justice, An Update on the Review of the Horizontal Merger Guidelines (Jan. 26, 2010) at 4, available at

[211] Judd E. Stone & Joshua D. Wright, The Sound of One Hand Clapping: The 2010 Merger Guidelines and the Challenge of Judicial Adoption, 39 Rev. Indus. Org. 145, 152 (2011).

[212] Report and Recommendations of the Antitrust Modernization Commission (Apr. 2007) at 54-55.

[213] Timothy J. Muris & Bilal Sayyed, Three Key Principles for Revising the Horizontal Merger Guidelines, Antitrust Source (Apr. 2010) at 3.

[214] Stone & Wright, supra note 211, at 157.

[215] Christine Varney, Assistant Att’y Gen., Antitrust Div., U.S. Dept. of Justice, Merger Guidelines Workshops (Sep. 22, 2009) at 4-5, available at

[216] See, e.g., Dennis Carlton, Revising the Horizontal Merger Guidelines, 6 J. Comp. L. & Econ. 1, 2 (2010) (“The Guidelines have proven to be a valuable and durable guide to antitrust practitioners and the courts”); William E. Kovacic, The Modern Evolution of Competition Policy Enforcement Norms, 71 Antitrust L.J. 377, 435 (“The Guidelines not only changed the way the U.S. courts and enforcement agencies examine mergers, but they also supplied an influential focal point for foreign competition authorities in the formulation of their own merger control regimes.”); Carl Shapiro, The 2010 Horizontal Merger Guidelines: From Hedgehog to Fox in Forty Years, 77 Antitrust L.J. 701, 703 (2010) (“One cannot help but marvel at how far merger enforcement has moved over the past forty years, with no change in the substantive provisions of the Clayton Act and very little new guidance on horizontal mergers from the Supreme Court”).

[217] Press Release, Department of Justice and Federal Trade Commission to Hold Workshops Concerning Horizontal Merger Guidelines (Sep. 22, 2009),

[218] Id.

[219] Christine A. Varney, Assistant Att’y Gen., Antitrust Div., U.S. Dept. of Justice, An Update on the Review of the Horizontal Merger Guidelines (Jan. 26, 2010) at 3, available at (emphasis added).

[220] 5 U.S.C. 553.

[221] We need not recount the entire series of actions here, but they include, inter alia: withdrawing the 2020 VMGs; rescinding the 2015 UMC Policy Statement; eviscerating the HSR process by, among other things, suspending HSR early terminations and lowering merger-challenge thresholds; reinstating and expanding the use of prior-approval provisions; conducting business using “zombie votes”; and moving forward with competition rulemakings.

[222] There are myriad examples throughout the guidelines. To consider only a couple of examples, see, e.g., Draft Merger Guidelines fn 41 (citing Marine Bancorp for the proposition that “If the merging firm had a reasonable probability of entering the concentrated relevant market, the Agencies will usually presume that the resulting deconcentration and other benefits that would have resulted from its entry would be competitively significant” – Marine Bancorp speaks to the opposite circumstance, rejecting consideration of potential entry where state law prohibits such entry from occurring at a meaningful scale); fn 53 (citing Brown Shoe at 328 for the proposition that, in the context of vertical mergers, “If the foreclosure share is above 50 percent, that factor alone is a sufficient basis to conclude that the effect of the merger may be to substantially lessen competition, subject to any rebuttal evidence.” – Brown Shoe at 329 further clarifies that “in which the foreclosure is neither of monopoly nor de minimis proportions, the percentage of the market foreclosed by the vertical arrangement cannot itself be decisive.”). Additionally, as other commenters note, the guidelines simply ignore decades of circuit and district court caselaw. In instances where they do cite to recent circuit court opinions, they do so improperly. See, e.g., Draft Merger Guidelines at fn 13 (citing United States v. AT&T, 916 F.3d 1029 (D.C. Cir. 2019) for the proposition that “Mergers Should not Substantially Lessen Competition by Creating a Firm that Controls Products or Services That Its Rivals May Use to Compete” – this was the government’s theory of harm in the case, not the court’s holding); fn 48 (citing FTC v. H.J. Heinz Co., 246 F.3d 708 (D.C. Cir. 2001) for the proposition that “the Agencies are unlikely to credit claims of commitments to protect or otherwise avoid harming their rivals that do not align with the firm’s incentives” – in the cited case the court was concerned with “mere speculation and promises” that would protect rivals, not “claims or commitments.”).

[223] Brown Shoe Co. v. United States, 370 U.S. 294 (1962).

[224] United States v. Phila. Nat’l Bank, 374 U.S. 321 (1963).

[225] FTC v. Procter & Gamble Co., 386 U.S. 568 (1967).

[226] See, e.g., Douglas H Ginsburg & Joshua D Wright, Philadelphia National Bank: Bad Economics, Bad Law, Good Riddance, 80 Antitrust L.J. 377 (2015).


Continue reading
Antitrust & Consumer Protection

Are Employee Noncompete Agreements Coercive? Why the FTC’s Wrong Answer Disqualifies It from Rulemaking (For Now)

Scholarship Abstract The Federal Trade Commission recently proposed a rule banning nearly all employee noncompete agreements (“NCAs”) as unfair methods of competition under Section 5 of . . .


The Federal Trade Commission recently proposed a rule banning nearly all employee noncompete agreements (“NCAs”) as unfair methods of competition under Section 5 of the Federal Trade Commission Act. The proposed rule reflects two complementary pillars of an aggressive new enforcement agenda championed by Commission Chair Lina Khan, a leading voice in the NeoBrandeisian antitrust movement. First, such a rule depends on the assumption, rejected by most prior Commissions, that the Act empowers the Commission to issue legislative rules. Proceeding by rulemaking is essential, the Commission has said, to fight a “hyperconcentrated economy” that injures employees and consumers alike. Second, the content of the rule reflects the Commission’s repudiation of consumer welfare and the Sherman Act’s Rule of Reason as guides to implementing Section 5.

Affected parties will no doubt challenge the Commission’s assertion of authority to issue legislative rules. This article assumes for the sake of argument that the Commission possesses the authority to issue such rules enforcing Section 5. Still, prudence can counsel that an agency refrain from issuing rules before it has fully educated itself about the nature of the economic phenomena it hopes to regulate. Such prudence seems particularly appropriate when the Commission has very recently adopted an entirely new substantive standard governing such conduct. Deferring a rulemaking does not mean inaction. The Commission could develop competition policy regarding NCAs the old-fashioned way, investigating and challenging such agreements on a case-by-case basis.

The Commission rejected these prudential concerns and proceeded to ban nearly all NCAs, assuring the public that it had educated itself sufficiently about the origin and impact of NCAs to conduct a global assessment of such agreements. The Notice of Proposed Rulemaking (“NPRM”) offered three rationales for the proposed rule, drawn from a late 2022 Statement of Section 5 Enforcement Policy. First, the Commission opined that NCAs are “restrictive” because they prevent employees from selling their labor to other employers or starting their own business in competition with their employer. Second, NCAs result from procedural coercion, because employers use a “particularly acute bargaining advantage” to impose such agreements. Third, NCAs are substantively coercive, because they burden the employee’s right to quit and pursue a more lucrative opportunity.

The first rationale applied to all NCAs. The second and third applied to all NCAs except those binding senior executives. Such executives, the Commission said, bargain for such agreements with the assistance of counsel and presumably receive higher salary and/or more generous severance in return for entering such NCAs. Because NCAs also have a “negative impact on competitive conditions,” the NPRM also concluded that they are presumptively unfair methods of competition.

The Commission conceded that NCAs can create cognizable benefits. Nonetheless, the Commission concluded that such benefits do not justify NCAs, for two reasons. First, less restrictive means can “reasonably achieve” such benefits. Second, such benefits do not exceed the harms that NCAs produce.

The Commission also rejected the alternative remedy of mandatory precontractual disclosure of NCAs for two interrelated reasons. First, such disclosure would not prevent employers from using overwhelming bargaining power to impose such restraints. Second, disclosure would not alter the number or scope of NCAs and thus would not reduce their aggregate negative economic impact.

The procedural coercion rationale played an outsized role in the Commission’s Section 5 analysis, informing the findings that NCAs are also “restrictive” and substantively coercive. Moreover, the outsized emphasis on procedural coercion dovetailed nicely with the NeoBrandeisian claim that ordinary Americans are routinely helpless before large concentrations of private economic power. Indeed, when the Commission released the NPRM, Chair Khan separately tweeted that NCAs reduced core economic liberties.

Still, the Commission offered no definition of “coercion” or explanation of how to determine whether employers have used coercion to impose NCAs on employees. Instead, the Commission articulated several subsidiary determinations regarding the characteristics of employers and employees that, taken together, established that employers always possess and use an acutely overwhelming bargaining advantage to impose nonexecutive NCAs. Thus, the Commission emphasized that labor market power is widespread, due in part to labor market concentration; most employees are unaware of NCAs before they enter such agreements; NCAs generally appear in standard form contracts; employees rarely bargain over such agreements; most employees live paycheck-to-paycheck and thus have no choice but to accept NCAs; and individuals negotiating over terms of employment discount or ignore the possibility that they will depart from the job they are about to accept and thus downplay the potential impact of an NCA on their future employment autonomy.

This article contends that the Commission’s procedural coercion rationale for condemning nonexecutive NCAs does not withstand analysis. In particular, the Commission’s various subsidiary determinations that support the procedural coercion rationale have no basis in the evidence before the Commission, contradict such evidence and/or disregard modern economic theory regarding contract formation. For instance, a recent study by two Department of Labor economists finds that the average Herfindahl-Hirschman Index in American labor markets is 333, the equivalent of 30 equally-sized firms, each with a 3.33 percent market share, competing for labor in the same market. A previous version of the study was published on the Department of Labor’s website several months before the Commission issued the proposed rule. The NPRM offers no contrary evidence regarding the proportion of labor markets that are concentrated. “Hyperconcentration of labor markets” is apparently a myth.
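The arithmetic behind the 333 figure is straightforward to check: the Herfindahl-Hirschman Index sums the squared market shares of every firm in a market (with shares expressed in percentage points), so thirty equally sized employers each hold a 100/30 share, and 30 × (100/30)² ≈ 333. A minimal sketch of that calculation (the `hhi` helper is illustrative, not drawn from the cited study):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares,
    with shares expressed in percentage points (0-100)."""
    return sum(s ** 2 for s in shares)

# 30 equally sized firms, each with a 100/30 (about 3.33%) share of a labor market
print(round(hhi([100 / 30] * 30)))  # 333
```

For comparison, the 2010 Horizontal Merger Guidelines treated markets with an HHI below 1,500 as “unconcentrated,” which underscores the article’s point that an average of 333 is far from hyperconcentration.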

Moreover, the NPRM ignores record evidence that 61 percent of employees know of NCAs before they accept the offer of employment. The NPRM’s failure to address these data is particularly strange, insofar as the NPRM cites the very same page of the academic article where these data appear three separate times, each time for other propositions. The Commission also erred when it assumed that employers with labor market power will use such power coercively to impose even beneficial NCAs. This assumption would have made perfect sense in 1965. However, since the 1980s, scholars practicing Transaction Cost Economics have explained how firms with market power, including labor market power, will not use that power to impose beneficial nonstandard agreements, including NCAs. The Commission was apparently unaware of this literature.

Nor does the lack of individualized bargaining and reliance on form contracts suggest that employers use power coercively to impose NCAs. Form contracts often arise in competitive markets and reduce transaction costs. Background rules governing contract formation, robust state court review of NCAs and exit by potential employees can constrain employers’ ability to obtain unreasonable provisions and induce employers to pay premium wages to compensate employees for agreeing to NCAs. These considerations may explain why a majority of employees who had advanced knowledge of NCAs considered the agreements reasonable, a finding the NPRM ignores.

Nor does it matter that most employees work paycheck-to-paycheck. The Commission ignored the possibility that such individuals may be employed when seeking a new job, bargain from a position of relative security and can thus “walk away” from onerous NCAs. The Commission also ignored economic literature establishing that the presence of some such individuals in a labor market can ensure that employers offer reasonable terms to all potential employees, including unemployed job seekers.

Refutation of the procedural coercion rationale for banning nonexecutive NCAs requires reconsideration of the other two rationales as well. For instance, nonexecutive NCAs are the result of voluntary integration and thus not procedurally coercive or substantively coercive, either. Moreover, because some nonexecutive NCAs are voluntary, the Commission must abandon its erroneous assumption that the beneficial impacts of NCAs necessarily coexist with coercive harms. Proper assessment of business justifications requires the Commission to ascertain the proportion of NCAs that constitute voluntary integration, revise downward its estimate of coercive harms and reassess NCAs’ relative harms and benefits. This revision could result in a determination that NCAs’ benefits in fact exceed their harms. Finally, recognition that beneficial NCAs are the result of voluntary integration requires the Commission to reconsider the mandatory disclosure remedy, which the Commission rejected based on the erroneous belief that employers use bargaining power to impose even fully-disclosed and beneficial NCAs. Such reconsideration could of course lead to revising the scope of the proposed ban or rejection of any ban.

The Commission may well be entirely capable of assessing the global impact of NCAs on economic variables such as price, output, and wages. However, the Commission rejected such a rule of reason approach in favor of a standard that turns in part on the process of contract formation. Thus, the Commission necessarily took on the task of gathering information regarding the process of forming NCAs and of assessing that data in light of applicable economic theory. The Commission’s demonstrably poor execution of this task reveals that it lacks the capacity to conduct a generalized assessment of NCAs under a governing standard that treats procedural coercion as legally significant.

Because it lacks the capacity to assess the process of forming nonexecutive NCAs, the Commission should withdraw the NPRM and start over. There are two alternative paths the Commission may take to develop well-considered competition policy governing NCAs. First, the Commission could revert to the rule of reason approach it rejected in 2021. The Commission could draw upon its considerable study of the impact of NCAs on wages, prices and employee training and promulgate a rule that bans those agreements the Commission believes produce net harm, after reconsidering regulatory alternatives such as mandatory disclosure.

Second, the Commission could continue to embrace its new Section 5 standard but take an “adjudication only” approach to implementation. The Commission could simultaneously take other steps through various forms of public engagement to educate itself about contract formation in general and the formation of NCAs in particular. The Commission could build on data it has to this point ignored regarding various attributes of employers, employees and labor markets more generally. Adjudication and self-education could be mutually reinforcing. Self-education could inform the Commission’s determination of which NCAs to challenge, while information generated in adjudication could improve the Commission’s knowledge base about NCAs. Ultimately this two-track approach could generate sufficient information to justify a well-considered rule governing NCAs.

Continue reading
Antitrust & Consumer Protection

The Law and Economics of Privacy

Scholarship Abstract Consumer welfare has been a north star of the Federal Trade Commission (FTC), providing an organizing principle for diverse issues under the Commission’s dual . . .


Consumer welfare has been a north star of the Federal Trade Commission (FTC), providing an organizing principle for diverse issues under the Commission’s dual competition and consumer protection missions and, specifically, a uniform ground on which to examine the law and economics of privacy matters and the tradeoffs that privacy policies entail. This paper provides the first contemporary literature synthesis by former FTC staff that brings together the legal and economics literatures on privacy. Our observations are the following: (a) privacy is a complex subject, not a simple attribute of goods and services or a simple state of affairs; (b) privacy policies entail complex tradeoffs for and across individuals; (c) the economic literature finds diverse effects, both intended and unintended, of privacy policies, including on competition and innovation; (d) while there is diverse and growing evidence of the costs of privacy policies, countervailing benefits have been understudied and, as of yet, empirical evidence of such benefits remains slight; and (e) observed costs associated with omnibus policies suggest caution regarding one-size-fits-all regulation.

Continue reading
Data Security & Privacy

AI Regulation Needs a Light Touch

TL;DR Background: Artificial intelligence—or “AI”—is everywhere these days. It powers our smartphones, cars, homes, and entertainment. It helps us diagnose diseases, teach children, and create art. . . .

Background: Artificial intelligence—or “AI”—is everywhere these days. It powers our smartphones, cars, homes, and entertainment. It helps us diagnose diseases, teach children, and create art. It promises to revolutionize every aspect of our lives, for better or worse. 

But … How should public policy respond to this powerful and rapidly evolving force? How should we ensure that AI serves our interests and values, rather than undermining or subverting them?

Some observers and policymakers fear that AI could pose existential threats to humanity, such as unleashing rogue superintelligences, triggering mass job losses, or sparking global wars. They argue that governments should take a prescriptive approach to AI regulation to preempt these speculative threats.

Some argue that we need to impose strict and specific rules on AI development and deployment, before it is too late. In a recent U.S. Senate Judiciary Committee hearing, OpenAI CEO Sam Altman suggested that the United States needs a central regulator for AI. 

However … This approach is likely to be both misguided and counterproductive. Overregulation could stifle innovation and competition, depriving us of the benefits and opportunities that AI offers. It could put some countries at a disadvantage relative to those that pursue AI openly and aggressively. And it could hinder our ability to learn from AI and to develop better AI tools.


A more sensible and effective approach to oversight is to pursue an adaptive framework that relies on existing laws and institutions, rather than creating new regulations, agencies, and enforcement mechanisms.

There are already laws, policies, agencies, and courts in place to address actual harms and risks, rather than hypothetical or speculative ones. This is what we’ve done with earlier transformative technologies like biotech, nanotech, and the internet. Each has been regulated by applying existing laws and principles, such as antitrust, torts, contracts, and consumer protection. 

In addition, an adaptive approach would foster international dialogue and cooperation, which have been essential for establishing norms and standards for emerging technologies.


Pursuing an adaptive approach does not mean that we should be complacent or naive about AI. Where the technology is misused or causes harm, there should be actionable legal consequences. For example, if a real-estate developer intentionally used AI tools to screen out individuals from purchasing homes on the basis of protected characteristics, that should be actionable. If a criminal found a novel way to use ChatGPT to commit fraud, that should be actionable. If generative AI is used to create “deep fakes” that amount to libel, that should be actionable. But in each of these cases, it is not the AI itself that is the relevant unit of legal analysis, but the actions of criminals and the harms they cause.

Ultimately, it would be fruitless to try to build a regulatory framework that would make it impossible for bad actors to misuse AI. Bad actors will always find ways to misuse tools, and heavy-handed regulatory requirements would chill the development of the very AI tools that could combat misuse.


If history is any guide, it is likely that AI tools will allow firms and individuals to do more with less, expanding their productivity and improving their incomes.

As AI frees capital from easily automated tasks, existing firms and new entrepreneurs could better focus on their core business missions. For example, investments in marketing or HR could be redeployed to R&D. At this point, we have little idea how AI will be used by people and firms. And more importantly, neither do politicians, policymakers, or regulators.


Overly burdensome AI regulation would likely hinder the entry and growth of new AI firms. For example, as an established player in the AI market, it should be no surprise that OpenAI’s CEO would favor a strong central regulator that can impose entry barriers on newcomers.  It is well-known in both law and economics that incumbent firms can profit from raising their rivals’ regulatory costs.

This dynamic can create strong strategic incentives for industry incumbents to promote regulation and can lead to a cozy relationship between agencies and incumbent firms in a process known as “regulatory capture.”


The key challenge confronting policymakers lies in navigating the tension between mitigating the actual risks posed by AI and fostering the substantial benefits it offers.

To be sure, AI will bring about disruption and may provide a conduit for bad actors, just as technologies like the printing press and the internet have done in the past. This does not, however, merit taking an overly cautious stance that would suppress many of the potential benefits of AI.

Policymakers must eschew dystopian science-fiction narratives and instead base policy on realistic scenarios. Moreover, they should recognize the laws, policies, and agencies that already have enormous authority and power to find and punish those who misuse AI.

For more on this issue, see the International Center for Law & Economics’ (ICLE) response to the National Telecommunications and Information Administration’s AI Accountability Policy, as well as ICLE’s response to the similar inquiry from the White House Office of Science and Technology Policy.

Continue reading
Innovation & the New Economy

ICLE Response to the AI Accountability Policy Request for Comment

Regulatory Comments I. Introduction: How Do You Solve a Problem Like ‘AI’? On behalf of the International Center for Law & Economics (ICLE), we thank the National . . .

I. Introduction: How Do You Solve a Problem Like ‘AI’?

On behalf of the International Center for Law & Economics (ICLE), we thank the National Telecommunications and Information Administration (NTIA) for the opportunity to respond to this AI Accountability Policy Request for Comment (RFC).

A significant challenge that emerges in discussions concerning accountability and regulation for artificial intelligence is the broad and often ambiguous definition of “AI” itself. This is demonstrated in the RFC’s framing:

This Request for Comment uses the terms AI, algorithmic, and automated decision systems without specifying any particular technical tool or process. It incorporates NIST’s definition of an “AI system,” as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” This Request’s scope and use of the term “AI” also encompasses the broader set of technologies covered by the Blueprint: “automated systems” with “the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”[1]

As stated, the RFC’s scope could be read to cover virtually all software.[2] But it is essential to acknowledge that, for the purposes of considering potential regulation, we lack a definition of AI that is both sufficiently broad to cover all or even most areas of concern and sufficiently focused to serve as a useful lens for analysis. That is to say, what we think of as AI encompasses a significant diversity of discrete technologies that will be put to a huge number of potential uses.

One useful recent comparison is with the approach the Obama administration took in its deliberations over nanotechnology regulation in 2011.[3] Following years of consultation and debate, the administration opted for a parsimonious, context-specific approach precisely because “nanotechnology” is not really a single technology. In that proceeding, the administration ultimately recognized that it was not the general category of “nanotechnology” that was relevant, nor the fact that nanotechnologies are those that operate at very small scales, but rather the means by and degree to which certain tools grouped under the broad heading of “nanotechnology” could “alter the risks and benefits of a specific application.”[4] This calls to mind Judge Frank Easterbrook’s famous admonition that a “law of cyberspace” would be no more useful than a dedicated “law of the horse.”[5] Indeed, we believe Easterbrook’s observation applies equally to the creation of a circumscribed “law of AI.”

While there is nothing inherently wrong with creating a broad regulatory framework to address a collection of loosely related subjects, there is a danger that the very breadth of such a framework might over time serve to foreclose more fruitful and well-fitted forms of regulation.

A second concern in the matter immediately at hand is, as mentioned above, the potential for AI regulation to be formulated so broadly as to encompass essentially all software. Whether by design or accident, this runs a number of risks. First, since the scope of the regulation will potentially cover a much broader subject, the narrow discussion of “AI” will miss many important aspects of broader software regulation and will, as a consequence, create an ill-fitted legal regime. Second, sweeping a far wider range of tools into such a regulation than the drafters publicly acknowledge undermines the democratic legitimacy of the process.

A.      The Danger of Regulatory Overaggregation

The current hype surrounding AI has been driven by popular excitement, as well as incentives for media to capitalize on that excitement. While this is understandable, it arguably has led to oversimplification in public discussions about the underlying technologies. In reality, AI is an umbrella term that encompasses a diverse range of technologies, each with its own unique characteristics and applications.

For instance, relatively lower-level technologies like large language models (LLMs)[6] differ significantly from diffusion techniques.[7] At the level of applications, recommender systems can employ a wide variety of different machine-learning (or even more basic statistical) techniques.[8] All of these techniques collectively called “AI” also differ from the wide variety of algorithms employed by search engines, social media, consumer software, video games, streaming services, and so forth, although each also contains software “smarts,” so to speak, that could theoretically be grouped under the large umbrella of “AI.”

And none of the foregoing bear much resemblance at all to what the popular imagination conjures when we speak of AI—that is, artificial general intelligence (AGI), which some experts argue may not even be achievable.[9]

Attempting to create a single AI regulatory scheme commits what we refer to as “regulatory overaggregation”—sweeping together a disparate set of more-or-less related potential regulatory subjects under a single category in a manner that overfocuses on the abstract term and obscures differences among the subjects. The domains of “privacy rights” and “privacy regulation” are illustrative of the dangers inherent in this approach. There are, indeed, many potential harms (both online and offline) that implicate the concept of “privacy,” but the differences among these recommend examining closely the various contexts that attend each.

Individuals often invoke their expectation of “privacy,” for example, in contexts where they want to avoid the public revelation of personal or financial information. This sometimes manifests as the assertion of a right to control data as a form of quasi-property, or as a form of a right to anti-publicity (that is, a right not to be embarrassed publicly). Indeed, writing in 1890 with his law partner Samuel D. Warren, future Supreme Court Justice Louis Brandeis posited a “right to privacy” as akin to a property right.[10] Warren & Brandeis argued that privacy is not merely a matter of seclusion, but extends to the individual’s control over their personal information.[11] This “right to be let alone” delineates a boundary against unwarranted intrusion, which can be seen as a form of intangible property right.[12]

This framing can be useful as an abstract description of a broad class of interests and concerns, but it fails to offer sufficient specificity to describe actionable areas of law. Warren & Brandeis were concerned primarily with publicity;[13] that is, with a property right to control one’s public identity as a public figure. This, in turn, implicates a wide range of concerns, from an individual’s interest in commercializing their public image to their options for mitigating defamation, as well as technologies that range from photography to website logging to GPS positioning.

But there are clearly other significant public concerns that fall broadly under the heading of “privacy” that cannot be adequately captured by the notion of controlling a property right “to be let alone.” Consider, for example, the emerging issue of “revenge porn.” It is certainly a privacy harm in the Brandeisian sense that it implicates the property right not to have one’s private images distributed without consent. But that framing fails to capture the full extent of potential harms, such as emotional distress and reputational damage.[14] Similarly, cases in which an individual’s cellphone location data are sold to bounty hunters are not primarily about whether a property right has been violated, as they raise broader issues concerning potential abuses of power, stalking, and even physical safety.[15]

These examples highlight some of the ways that, in failing to take account of the distinct facts and contexts that can attend privacy harms, an overaggregated “law of privacy” may tend to produce regulations insufficiently tailored to address those diverse harms.

By contrast, the domain of intellectual property (IP) may serve as an instructive counterpoint to the overaggregated nature of privacy regulation. IP encompasses a vast array of distinct legal constructs, including copyright, patents, trade secrets, trademarks, and moral rights, among others. But in the United States—and indeed, in most jurisdictions around the world—there is no overarching “law of intellectual property” that gathers all of these distinct concerns under a singular regulatory umbrella. Instead, legislation is specific to each area, resulting in copyright-specific acts, patent-specific acts, and so forth. This approach acknowledges that, within IP law, each IP construct invokes unique rights, harms, and remedies that warrant a tailored legislative focus.

The similarity of some of these areas does lend itself to conceptual borrowing, which has tended to enrich the legislative landscape. For example, U.S. copyright law has imported doctrines from patent law.[16] Despite such cross-pollination, copyright law and patent law remain distinct. In this way, intellectual property demonstrates the advantages of focusing on specific harms and remedies. This could serve as a valuable model for AI, where the harms and remedies are equally diverse and context dependent.

If AI regulations are too broad, they may inadvertently encompass any algorithm used in commercially available software, effectively stifling innovation and hindering technological advancements. This is no less true of good-faith efforts to craft laws in any number of domains that nonetheless suffer from a host of unintended consequences.[17]

At the same time, for a regulatory regime covering such a broad array of varying technologies to be intelligible, it is likely inevitable that tradeoffs made to achieve administrative efficiency will cause at least some real harms to be missed. Indeed, NTIA acknowledges this in the RFC:

Commentators have raised concerns about the validity of certain accountability measures. Some audits and assessments, for example, may be scoped too narrowly, creating a “false sense” of assurance. Given this risk, it is imperative that those performing AI accountability tasks are sufficiently qualified to provide credible evidence that systems are trustworthy.[18]

To avoid these unintended consequences, it is crucial to develop a more precise understanding of AI and its various subdomains, and to focus any regulatory efforts toward addressing specific harms that would not otherwise be captured by existing laws. The RFC declares that its aim is “to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”[19] As we discuss below, rather than promulgate a set of recommendations about the use of AI, NTIA should focus on cataloguing AI technologies and creating useful taxonomies that regulators and courts can use when they identify tangible harms.

II. AI Accountability and Cost-Benefit Analysis

The RFC states that:

The most useful audits and assessments of these systems, therefore, should extend beyond the technical to broader questions about governance and purpose. These might include whether the people affected by AI systems are meaningfully consulted in their design and whether the choice to use the technology in the first place was well-considered.[20]

It is unlikely that consulting all of the people potentially affected by a set of technological tools could fruitfully contribute to the design of any regulatory system other than one that simply bans those tools.[21] Any intelligible accountability framework must be dedicated to evaluating the technology’s real-world impacts, rather than positing thought experiments about speculative harms. Where tangible harms can be identified, such evaluations should encompass existing laws that focus on those harms and how various AI technologies might alter how existing law would apply. Only in cases where the impact of particular AI technologies represents a new kind of harm, or raises concerns that fall outside existing legal regimes, should new regulatory controls be contemplated.

AI technologies will have diverse applications and consequences, with the potential for both beneficial and harmful outcomes. Rather than focus on how to constrain either AI developers or the technology itself, the focus should be on how best to mitigate or eliminate any potential negative consequences to individuals or society.

NTIA asks:

AI accountability measures have been proposed in connection with many different goals, including those listed below. To what extent are there tradeoffs among these goals?[22]

This question acknowledges that, fundamentally, AI accountability comes down to cost-benefit analysis. In conducting such analysis, we urge that the NTIA and any other agencies be sure to account not only for potential harms, but to take very seriously the massive benefits these technologies might provide.

A.      The Law Should Identify and Address Tangible Harms, Incorporating Incremental Changes

To illustrate the challenges inherent to tailoring regulation of a new technology like AI to address the ways that it might generally create harm, it could be useful to analogize to a different existing technology: photography. If camera technology were brand new, we might imagine a vast array of harms that could arise from its use. But it should be obvious that creating an overarching accountability framework for all camera technology is absurd. Instead, laws of general applicability should address harmful uses of cameras, such as the invasion of privacy rights posed by surreptitious filming. Even where a camera is used in the commission of a crime—e.g., surveilling a location in preparation to commit a burglary—it is not typically the technology itself that is the subject of legal concern; rather, it is the acts of surveillance and burglary.

Even where we can identify a tangible harm that a new technology facilitates, the analysis is not complete. Instead, we need to balance the likelihood of harmful uses of that technology with the likelihood of nonharmful (or beneficial) uses of that technology. Copyright law provides an apt example.

Sony Corp. of America v. Universal City Studios,[23] often referred to as the “Betamax case,” was a landmark 1984 U.S. Supreme Court decision that centered on Sony’s Betamax VCR—the first consumer device that could record television shows for later viewing, a concept now referred to as time-shifting.[24] Plaintiffs alleged that, by manufacturing and selling the Betamax VCRs, Sony was secondarily liable for copyright infringement carried out by its customers when they recorded television shows.[25] In a 5-4 decision, the Supreme Court ruled in favor of Sony, holding that the use of the Betamax VCR to record television shows for later personal viewing constituted “fair use” under U.S. copyright law.[26]

Critical for our purposes here was that the Court found that Sony could not be held liable for contributory infringement because the Betamax VCR was capable of “substantial noninfringing uses.”[27] This is to say that, faced with a new technology (recording relatively high-quality copies of television shows and movies at home), the Court recognized that, while the Betamax might facilitate some infringement, it would be inappropriate to apply a presumption against its use.

Sony and related holdings did not declare that using VCRs to infringe copyright was acceptable. Indeed, copyright enforcement for illegal reproduction has continued apace, even when using new technologies capable of noninfringing uses.[28] At the same time, the government did not create a new regulatory and licensing regime to govern the technology, despite the fact that it was a known vector for some illicit activity.

Note that the Sony case is also important for its fair-use analysis, and it is widely cited for the proposition that so-called “time shifting” is permissible. That is not central to our point here, particularly as there is no analogue to fair use proposed in the AI context. But even so, it demonstrates how the law adapts to develop doctrines that excuse conduct that would otherwise be a violation. In the case of copyright, unauthorized reproduction is infringement, period.[29] Fair use is raised as an affirmative defense[30] to excuse some unauthorized reproduction because courts have long recognized that, when viewed case by case, the application of legal rules needs to be tailored to make room for unexpected fact patterns in which acts that would otherwise be considered violations yield some larger social benefit.

We are not suggesting the development of a fair-use doctrine for AI, but are instead insisting that AI accountability and regulation must be consistent with the case-by-case approach that has characterized the common law for centuries. Toward that end, it would be best for law relevant to AI to emerge through that same bottom-up, case-by-case process. To the extent that any new legislation is passed, it should be incremental and principles-based, thereby permitting the emergence of law that best fits particular circumstances and does not conflict with other principles of common law.

By contrast, there are instances where the law has recognized that certain technologies are more likely to be used for criminal purposes and should be strictly regulated. For example, many jurisdictions have made possession of certain kinds of weapons—e.g., nunchaku, shuriken “throwing stars,” and switchblade knives—per se illegal, despite possible legal uses (such as martial-arts training).[31] Similarly, although there is a strong Second Amendment protection for firearms in the United States, it is illegal for a felon to possess a firearm.[32] The reason these prohibitions developed is because it was deemed that possession of these devices in most contexts had no other possible use than the violation of the law. But these sorts of technologies are the exception, not the rule. Many chemicals that can be easily used as poisons are nonetheless available as, e.g., cleaning agents or fertilizers.

1.        The EU AI Act: An overly broad attempt to regulate AI

Nonetheless, some advocate regulating AI by placing new technologies into various broad categories of risk, each with their own attendant rules. For example, as proposed by the European Commission, the EU’s AI Act would regulate the use of AI systems that ostensibly pose risks to health, safety, and fundamental rights.[33] The proposal defines AI systems broadly to include essentially any software, and sorts them into three risk levels: unacceptable, high, and limited risk.[34] Unacceptable-risk systems are prohibited outright, while high-risk systems are subject to strict requirements, including mandatory conformity assessments.[35] Limited-risk systems face certain requirements related to adequate documentation and transparency.[36]

The AI Act defines AI so broadly that it would apply even to ordinary general-purpose software, as well as software that uses machine learning but does not pose significant risks.[37] The plain terms of the AI Act could be read to encompass common office applications, spam filters, and recommendation engines, thus potentially imposing considerable compliance burdens on businesses for their use of software that provides benefits dramatically greater than any expected costs.[38] A recently proposed amendment would “ban the use of facial recognition in public spaces, predictive policing tools, and to impose transparency measures on generative AI applications like OpenAI’s ChatGPT.”[39]

This approach constitutes a hodge-podge of top-down tech policing and one-off regulations. The AI Act starts with the presumption that regulators can design an abstract, high-level set of categories that capture the risk from “AI” and then proceeds to force arbitrary definitions of particular “AI” implementations into those categories. This approach may get some things right and some things wrong, but whatever it gets right will not be the product of principled consistency. For example, it might be the case that “predictive policing” is a problem that merits per se prohibition, but is it really an AI problem? What happens if the police get exceptionally good at using publicly available data and spreadsheets to approximate 80% of what they are able to do with AI? Or even just 50%? Is it the use of AI that is the harm, or is it the practice itself?

Similarly, a requirement that firms expose the sources on which they train their algorithms might be good in some contexts, but useless or harmful in others.[40] Certainly, it can make sense when thinking about current publicly available generative tools that create images and video, and have no ability to point to a license or permission for their training data. Such cases have a high likelihood of copyright infringement. But should every firm be expected to do this? Surely there will be many cases where firms use their own internal data, or data not subject to property-rights protection at all, but where exposing those sources reveals sensitive internal information, like know-how or other trade secrets. In those cases, a transparency obligation could have a chilling effect.

By contrast, it seems hard to believe that every use of public facial recognition should be banned. For instance, what if local authorities had limited access to facial recognition to find lost children or victims of trafficking?

More broadly, a strict transparency requirement could essentially make advanced machine-learning techniques illegal. By their nature, machine-learning systems and applications that employ LLMs make inferences and predictions that are, very often, not replicable.[41] That is, by their very nature they are not reviewable in a way that would be easily explained to a human in a transparency review. This means that strong transparency obligations could make it legally untenable to employ those techniques.

The broad risk-based approach taken by the AI Act faces difficult enforcement hurdles as well, as demonstrated by the EU’s proposal to essentially ban the open-source community from providing access to generative models.[42] In other words, not only do the proposed amendments seek to prohibit large companies such as OpenAI, Google, Anthropic, Amazon, Microsoft, and IBM from offering API access to generative AI models, but they would also prohibit open-source developers and distributors such as GitHub from doing the same.[43] Moreover, the prohibitions have extraterritorial effects; for example, the EU might seek to impose large fines on U.S. companies for permitting access to their models in the United States, on grounds that those models could be imported into the EU by third parties.[44] These provisions reflect not only an attempt to control the distribution of AI technology, but also the wider implication that such attempts would essentially require steering worldwide innovation down a narrow, heavily regulated path.

2.        Focus on the harm and the wrongdoers, not the innovators

None of the foregoing is to suggest that it is impossible for AI to be misused. Where it is misused, there should be actionable legal consequences. For example, if a real-estate developer intentionally used AI tools to screen out individuals from purchasing homes on the basis of protected characteristics, that should be actionable. If a criminal found a novel way to use ChatGPT to commit fraud, that should be actionable. If generative AI is used to create “deep fakes” that further some criminal plot, that should be actionable. But in all those cases, it is not the AI itself that is the relevant unit of legal analysis, but the action of the criminal and the harm he causes.

Trying to build a regulatory framework that makes it impossible for bad actors to misuse AI will ultimately be fruitless. Bad actors will always find ways to misuse tools, and heavy-handed regulatory requirements (or even strong suggestions of them) might chill the development of useful tools that could generate an enormous amount of social welfare.

B.      Do Not Neglect the Benefits

A major complication in parsing the wisdom of potential AI regulation is that the technology remains largely in development. Indeed, this is the impetus for many of the calls to “do something” before it is “too late.”[45] The fear that some express is that, unless a wise regulator intervenes in the development process, the technology will inevitably develop in ways that yield more harm than good.[46]

But trying to regulate AI in accordance with the precautionary principle would almost certainly stifle development and dampen the tremendous, but unknowable, good that would emerge as these technologies mature and we find unique uses for them. Moreover, precautionary regulation, even in high-risk industries like nuclear power, can lead to net harms to social welfare.[47]

It is important here to distinguish two broad categories of concern about AI. First, there is the generalized concern about AGI, expressed as the fear that we are inadvertently creating a superintelligence with the power to snuff out human life at its whim. We reject this fear as a legitimate basis for new regulatory frameworks, although we concede it is theoretically possible that this presumption may need to be revisited as AI technologies progress. None of the technologies currently under consideration are anywhere close to AGI. They are essentially just advanced prediction engines, whether the predictions concern text or pixels.[48] It seems highly unlikely that we will accidentally stumble onto AGI by plugging a few thousand prediction engines into one another.

There are more realistic concerns that these very impressive technologies will be misused to further discrimination and crime, or will have such a disruptive impact on areas like employment that they will quickly generate tremendous harms. When contemplating the harms that could occur, however, it is also necessary to recognize the many significant benefits that could be generated. Moreover, as with earlier technologies, economic disruptions will present both challenges and opportunities. It is easy to see, for instance, the immediate threat that ChatGPT poses to the jobs of content writers, but less easy to measure the benefits that will be realized by firms that can deploy this technology to “in-source” tasks.

Firms often face what is called the “make-or-buy” decision. A firm that decides to purchase the services of an outside designer or copywriter has determined that doing so is more efficient than developing that talent in-house. But the fact that many firms employ a particular mix of outsourced and in-house talent to fulfill their business needs does not suggest a universally optimal solution to the make-or-buy problem. All we can do is describe how, under current conditions, firms solve this problem.

AI will surely change how firms approach the make-or-buy decision. Pre-AI, it might have made sense to outsource a good deal of work that was not core to a firm’s mission. Post-AI, a firm might instead be able to hire additional workers who use AI tools to manage that previously outsourced work more quickly and affordably. Thus, the ability of AI tools to shift the make-or-buy decision says nothing, in itself, about the net welfare effects to society. Arguments could very well be made for either side. If history is any guide, however, it appears likely that AI tools will allow firms to do more with less, while also enabling more individuals to start new businesses with less upfront expense.

Moreover, by freeing capital from easily automated tasks, existing firms and new entrepreneurs could better focus on their core business missions. Excess investments previously made in supporting, for example, the creation of marketing content could be repurposed into R&D-intensive work. Simplistic static analyses of the substitution power of AI tools will almost surely mislead us, and make us neglect the larger social welfare that could be gained from organizations improving their efficiency with AI tools.

Economists have consistently found that dynamic competition—characterized by firms vying to deliver novel and enhanced products and services to consumers—contributes significantly more to economic growth than static competition, where technology is held constant, and firms essentially compete solely on price. As Joseph Schumpeter noted:

[I]t is not [price] competition which counts but the competition from the new commodity, the new technology, the new source of supply, the new type of organization…. This kind of competition is as much more effective than the other as a bombardment is in comparison with forcing a door, and so much more important that it becomes a matter of comparative indifference whether competition in the ordinary sense functions more or less promptly; the powerful lever that in the long run expands output and brings down prices is in any case made of other stuff.[49]

Technological advancements yield substantial welfare benefits for consumers, and there is a comprehensive body of scholarly work substantiating the contributions of technological innovation to economic growth and societal welfare.[50] There is also compelling evidence that technological progress engenders extensive spillovers not fully appropriated by the innovators.[51] Business-model innovations—such as advancements in organization, production, marketing, or distribution—can similarly result in extensive welfare gains.[52]

AI tools obviously are delivering a new kind of technological capability for firms and individuals. The disruptions they will bring will similarly spur business-model innovation as firms scramble to find innovative ways to capitalize on the technology. The potential economic dislocations can, in many cases, amount to reconstitution: a person who was a freelance content writer can be shifted to a different position that manages the output of generative AI and provides human edits to ensure that content makes sense and is based in fact. In many other cases, the dislocations will likely lead to increased opportunities for workers of all sorts.

With this in mind, policymakers need to consider how to identify those laws and regulations that are most likely to foster this innovation, while also enabling courts and regulators to adequately deal with potential harms. Although it is difficult to prescribe particular policies to boost innovation, there is strong evidence about what sorts of policies should be avoided. Most importantly, regulation of AI should avoid inadvertently destroying those technologies.[53] As Adam Thierer has argued, “if public policy is guided at every turn by the fear of hypothetical worst-case scenarios and the precautionary mindset, then innovation becomes less likely.”[54]

Thus, policymakers must be cautious to avoid unduly restricting the range of AI tools that compete for consumer acceptance. Key to fostering investment and innovation is not merely the endorsement of technological advancement, but advocacy for policies that empower innovators to execute and commercialize their technology.

By contrast, consider again the way that some EU lawmakers want to treat “high risk” algorithms under the AI Act. According to recently proposed amendments, if a “high risk” algorithm learns something beyond what its developers expect it to learn, the algorithm would need to undergo a conformity assessment.[55]

One of the prime strengths of AI tools is their capacity for unexpected discoveries, offering potential insights and solutions that might not have been anticipated by human developers. As the Royal Society has observed:

Machine learning is a branch of AI that enables computer systems to perform specific tasks intelligently. Traditional approaches to programming rely on hardcoded rules, which set out how to solve a problem, step-by-step. In contrast, machine learning systems are set a task, and given a large amount of data to use as examples (and non-examples) of how this task can be achieved, or from which to detect patterns. The system then learns how best to achieve the desired output.[56]

By labeling unexpected behavior as inherently risky and necessitating regulatory review, we risk stifling this serendipitous aspect of AI technologies, potentially curtailing their capacity for innovation. It could contribute to a climate of regulatory caution that hampers swift progress in discovering the full potential and utility of AI tools.

C.     AI Regulation Should Follow the Model of Common Law

In a recent hearing of the U.S. Senate Judiciary Committee, OpenAI CEO Sam Altman suggested that the United States needs a central “AI regulator.”[57] As a general matter, we expect this would be unnecessarily duplicative. As we have repeatedly emphasized, the right approach to regulating AI is not the establishment of an overarching regulatory framework, but a careful examination of how AI technologies will variously interact with different parts of the existing legal system. We are not alone in this; former Special Assistant to the President for Technology and Competition Policy Tim Wu recently opined that federal agencies would be well-advised to rely on existing law and enhance that law where necessary in order to catch unexpected situations that may arise from the use of AI tools.[58]

As Judge Easterbrook famously wrote in the context of what was then called “cyberspace,” we do not need a special law for AI any more than we need a “law of the horse.”[59]

1.        An AI regulator’s potential effects on competition

More broadly, there are risks to competition that attend creating a centralized regulator for a new technology like AI. As an established player in the AI market, OpenAI might favor a strong central regulator because of the potential that such an agency could act in ways that hinder the viability of new entrants.[60] In short, an incumbent often can gain by raising its rivals’ regulatory costs, or by manipulating the relationship between its industry’s average and marginal costs. This dynamic can create strong strategic incentives for industry incumbents to promote regulation.

Economists and courts have long studied actions that generate or amplify market dominance by placing competitors at a disadvantage, especially by raising rivals’ costs.[61] There exist numerous strategies to put competitors at a disadvantage or push them out of the market without competing on price. While antitrust action focuses on private actors and their ability to raise rivals’ costs, it is well-accepted that “lobbying legislatures or regulatory agencies to create regulations that disadvantage rivals” has similar effects.[62]

Suppose a new regulation costs $1 million in annual compliance costs. Only companies that are sufficiently large and profitable will be able to cover those costs, which keeps out newcomers and smaller competitors. This effect of keeping out smaller competitors by raising their costs may more than offset the regulatory burden on the incumbent. New entrants typically produce on a smaller scale, and therefore find it more difficult to spread increased costs over a large number of units. This makes it harder for them to compete with established firms like OpenAI, which can absorb these costs more easily due to their larger scale of production.
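
The scale asymmetry described above is simple arithmetic; a minimal sketch, with all firm sizes and costs hypothetical:

```python
# Hypothetical figures: a fixed $1M annual compliance cost weighs very
# differently on firms of different scale.
COMPLIANCE_COST = 1_000_000  # same nominal burden for every firm

def per_unit_burden(units_sold: int) -> float:
    """Fixed compliance cost spread across each unit of output."""
    return COMPLIANCE_COST / units_sold

incumbent = per_unit_burden(50_000_000)  # large-scale incumbent
entrant = per_unit_burden(200_000)       # small-scale new entrant

# The identical regulation costs the entrant $5.00 per unit but the
# incumbent only $0.02 per unit -- a 250-fold difference in burden.
print(f"incumbent: ${incumbent:.2f}/unit, entrant: ${entrant:.2f}/unit")
```

On these assumed numbers, the entrant must recover 250 times more compliance cost per unit than the incumbent, which is the sense in which a facially uniform rule advantages scale.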

This type of cost increase can often look benign. In United Mine Workers v. Pennington,[63] a coal corporation was alleged to have conspired with the union representing its workforce to establish higher wage rates. How could higher wages be anticompetitive? This seemingly contradictory conclusion came from University of California, Berkeley economist Oliver Williamson, who interpreted the action as an effort to maximize profits by raising entry barriers.[64] Using a model with a dominant incumbent and a fringe of other competitors, he demonstrated that wage-rate increases could lead to profit maximization if they escalated the fringe’s costs more than they did the dominant firm’s costs. Intuitively, even though one firm is dominant, the market price is determined by the marginal producers; the dominant company’s price is thus constrained by its competitors’ costs. If a regulation raises the competitors’ per-unit costs by $2, the dominant company will be able to raise its price by as much as $2 per unit. Even if the regulation hurts the dominant firm, so long as its price increase exceeds its additional cost, the dominant firm can profit from the regulation.
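
Williamson’s intuition reduces to per-unit arithmetic. A stylized sketch, using hypothetical numbers rather than anything drawn from the case:

```python
# Stylized raising-rivals'-costs arithmetic (all numbers hypothetical).
# The fringe's marginal cost pins down the market price, so a regulation
# that raises the fringe's per-unit cost by more than the dominant
# firm's lets the dominant firm profit from the regulation.

def dominant_profit_change(fringe_cost_increase: float,
                           dominant_cost_increase: float,
                           dominant_units: float) -> float:
    # Price rises one-for-one with the fringe's per-unit cost increase;
    # the dominant firm pockets the difference on every unit it sells.
    price_increase = fringe_cost_increase
    return (price_increase - dominant_cost_increase) * dominant_units

# Regulation adds $2.00/unit to fringe costs but only $0.50/unit to the
# dominant firm (e.g., fixed costs spread over a much larger scale).
delta = dominant_profit_change(2.00, 0.50, dominant_units=1_000_000)
print(delta)  # 1500000.0 -- the "burden" nets the dominant firm $1.5M
```

The sign of the effect is what matters: whenever the fringe’s per-unit cost increase exceeds the dominant firm’s, the regulation transfers profit to the incumbent.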

As a result, while regulations might increase costs for OpenAI, they also serve to protect it from potential competition by raising the barriers to entry. In this sense, regulation can be seen as a strategic tool for incumbent firms to maintain or strengthen their market position. None of this analysis rests on OpenAI explicitly wanting to raise its rivals’ costs. That is just the competitive implication of such regulations. Thus, while there may be many benign reasons for a firm like OpenAI to call for regulation in good faith, the ultimate lesson presented by the economics of regulation should counsel caution when imposing strong centralized regulations on a nascent industry.

2.        A central licensing regulator for AI would be a mistake

NTIA asks:

Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers?[65]

We are not alone in the belief that imposing a licensing regime would present just such a barrier to innovation.[66] In the recent Senate hearings, the idea of a central regulator was endorsed as a means to create and administer a licensing regime.[67] Perhaps there are narrow contexts in which licensing particular applications of AI technology is appropriate (e.g., in providing military weapons), but broadly speaking, we believe it is inadvisable. Owing to the highly diverse nature of AI technologies, trying to license AI development is a fraught exercise, as NTIA itself acknowledges:

A developer training an AI tool on a customer’s data may not be able to tell how that data was collected or organized, making it difficult for the developer to assure the AI system. Alternatively, the customer may use the tool in ways the developer did not foresee or intend, creating risks for the developer wanting to manage downstream use of the tool. When responsibility along this chain of AI system development and deployment is fractured, auditors must decide whose data and which relevant models to analyze, whose decisions to examine, how nested actions fit together, and what is within the audit’s frame.[68]

Rather than design a single regulation to cover AI, ostensibly administered through a single licensing regime, NTIA should acknowledge the broad set of industries currently seeking to employ a diverse range of AI products that differ in fundamental ways. The implications of AI deployment in health care, for instance, vastly differ from those in transportation. A centralized AI regulator might struggle to comprehend the nuances and intricacies of each distinct industry, thus potentially leading to ineffective or inappropriate licensing requirements.

Analogies have been drawn between AI and sectors like railroads and nuclear power, which have dedicated regulators.[69] These sectors, however, are more homogenous and discrete than the AI industry (if such an industry even exists, apart from the software industry more generally). AI is much closer to a general-purpose tool, like chemicals or combustion engines. We do not create central regulators to license every aspect of the development and use of chemicals, but instead allow different agencies to regulate their use as appropriate for the context. For example, the Occupational Safety and Health Administration (OSHA) regulates employee exposure to dangerous substances encountered in the workplace, while various consumer-protection boards regulate the adulteration of goods.

The notion of licensing implies that companies would need to obtain permission prior to commercializing a particular piece of code. This could introduce undesirable latency into the process of bringing AI technologies to market (or, indeed, even of correcting errors in already-deployed products). Given the expansive potential to integrate AI technologies into diverse products and services, this delay could significantly impede technological progress and innovation. Given the strong global interest in the subject, such delays threaten to leave the United States behind its more energetic competitors in the race for AI innovation.

As in other consumer-protection regimes, a better approach would be to eschew licensing and instead create product-centric and harm-centric frameworks that other sectoral regulators or competition authorities could incorporate into their tailored rules for goods and services.

For instance, safety standards for medical devices should be upheld, irrespective of whether AI is involved. This product-centric regulatory approach would ensure that the desired outcomes of safety, quality, and effectiveness are achieved without stymieing innovation. With their deep industry knowledge and experience, sectoral regulators will generally be better positioned to address the unique challenges and considerations posed by AI technology deployed within their spheres of influence.

NTIA alludes to one of the risks of an overaggregated regulator when it notes that:

For some trustworthy AI goals, it will be difficult to harmonize standards across jurisdictions or within a standard-setting body, particularly if the goal involves contested moral and ethical judgements. In some contexts, not deploying AI systems at all will be the means to achieve the stated goals.[70]

Indeed, the institutional incentives that drive bureaucratic decision making often converge on this solution of preventing unexpected behavior by regulated entities.[71] But at what cost? If a regulator is unable to imagine how to negotiate the complicated tradeoffs among interested parties across all AI-infused technologies, it will act to slow or prevent the technology from coming to market. This will make us all worse off, and will only strengthen the position of our competitors on the world stage.

D.      The Impossibility of Explaining Complexity

NTIA notes that:

According to NIST, ‘‘trustworthy AI’’ systems are, among other things, ‘‘valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with their harmful bias managed.’’[72]

And in the section titled “Accountability Inputs and Transparency,” NTIA asks a series of questions designed to probe what can be considered a realistic transparency obligation for developers and deployers of AI systems. We urge NTIA to resist the idea that AI systems must be “explainable,” for the reasons set forth herein.

One of the significant challenges in AI accountability is making AI systems explainable to users. It is crucial to acknowledge that providing a clear explanation of how an AI model—such as an LLM or a diffusion model—arrives at a specific output is an inherently complex task, and may not be possible at all. As the UK Royal Society has noted in its paper on AI explainability:

Much of the recent excitement about advances in AI has come as a result of advances in statistical techniques. These approaches – including machine learning – often leverage vast amounts of data and complex algorithms to identify patterns and make predictions. This complexity, coupled with the statistical nature of the relationships between inputs that the system constructs, renders them difficult to understand, even for expert users, including the system developers.[73]

These models are designed with intricate architectures and often rely on vast troves of data to arrive at outputs, which can make it nearly impossible to reverse-engineer the process. Due to these complexities, it may be unfeasible to make AI fully explainable to users. Moreover, users themselves often do not value explainability, and may be largely content with a “black box” system when it consistently provides accurate results.[74]

Instead, to the extent that regulators demand visibility into AIs, the focus should be on the transparency of the AI-development process, system inputs, and the general guidelines for AI that developers use in preparing their models. Ultimately, we suspect that, even here, such measures will do little to resolve the inherent complexity in understanding how AI tools produce their outputs.

In a more limited sense, we should consider the utility of transparency in AI-infused technology for most products and consumers. NTIA asks:

Given the likely integration of generative AI tools such as large language models (e.g., ChatGPT) or other general-purpose AI or foundational models into downstream products, how can AI accountability mechanisms inform people about how such tools are operating and/or whether the tools comply with standards for trustworthy AI?[75]

As we note above, the proper level of analysis for AI technologies is the product into which they are incorporated. But even there, we need to ask whether it matters to an end user whether a product they are using relies on ChatGPT or a different algorithm for predictively generating text. If the product malfunctions, what matters is the malfunction and the accountability for the product. Most users do not really care whether a developer writes a program using C++ or Java, and neither should they explicitly care whether the developer incorporates a generative AI algorithm to predict text or uses some other method of statistical analysis. The presence of an AI component becomes analytically necessary when diagnosing how something went wrong, but ex ante, it is likely irrelevant from a consumer’s perspective.

Thus, it may be the case that a more fruitful avenue for NTIA to pursue would be to examine how a strict-liability or product-liability legal regime might be developed for AI. These sorts of legal frameworks put the onus on AI developers to ensure that their products behave appropriately. Such legal frameworks also provide consumers with reassurance that they have recourse if and when they are harmed by a product that contains AI technology. Indeed, it could very well be the case that overemphasizing “trust” in AI systems could end up misleading users in important contexts.[76] This would strengthen the case for a predictable liability regime.

1.        The deepfakes problem demonstrates that we do not need a new body of law

The phenomenon of generating false depictions of individuals using advanced AI techniques—commonly called “deepfakes”—is undeniably concerning, particularly when it can be used to create detrimental false public statements,[77] facilitate fraud,[78] or create nonconsensual pornography.[79] But while deepfakes use modern technological tools, they are merely the most recent iteration of the age-old problem of forgery. Importantly, existing law already equips us with the tools needed to address the challenges posed by deepfakes, rendering many recent legislative proposals at the state level both unnecessary and potentially counterproductive. Consider one of the leading proposals offered by New York State.[80]

Existing laws in New York and at the federal level provide remedies for individuals aggrieved by deepfakes, and they do so within a legal system that has already worked to incorporate the context of these harms, as well as the restrictions of the First Amendment and related defenses. For example, defamation laws can be applied where a deepfake falsely suggests an individual has posed for an explicit photograph or video.[81] New York law also acknowledges the tort of intentional infliction of emotional distress, which likely could be applied to the unauthorized use of a person’s likeness in explicit content.[82] In addition, the tort of unjust enrichment can be brought to bear where appropriate, as can the Lanham Act §43(a), which prohibits false advertising and implied false endorsements.[83] Furthermore, victims may hold copyright in the photograph or video used in a deepfake, presenting grounds for an infringement action.[84]

Thus, while advanced deepfakes are new, the harms they can cause and the law’s ability to address those harms are not novel. Legislation that attempts to carve out new categories of harms in these situations is, at best, reinventing the wheel and, at worst, risks creating confusing tensions in the existing legal system.

III.      The Role of NTIA in AI Accountability

NTIA asks if “the lack of a federal law focused on AI systems [is] a barrier to effective AI accountability?”[85] In short, no, this is not a barrier, so long as the legal system is allowed to evolve to incorporate the novel challenges raised by AI technologies.

As noted in the previous section, there is a need to develop standards, both legal and technical. As we are in the early days of AI technology, the exact contours of the various legal changes that might be needed to incorporate AI tools into existing law remain unclear. At this point, we would urge NTIA—to the extent that it wants to pursue regulatory, licensing, transparency, and other similar obligations—to develop a series of workshops through which leading technology and legal experts could confer on developing a vision for how such legal changes would work in practice.

By gathering stakeholders and fostering an ongoing dialogue, NTIA can help to create a collaborative environment in which organizations can share knowledge, experiences, and innovations to address AI accountability and its associated challenges. By promoting industry collaboration, NTIA could also help build a foundation of trust and cooperation among organizations involved in AI development and deployment. This, in turn, will facilitate the establishment of standards and best practices that address specific concerns, while mitigating the risk of overregulation that could stifle innovation and progress. In this capacity, NTIA should focus on encouraging the development of context-specific best practices that prioritize the containment of identifiable harms. By fostering a collaborative atmosphere, the agency can support a dynamic and adaptive AI ecosystem that is capable of addressing evolving challenges while safeguarding the societal benefits of AI advancements.

In addressing AI accountability, it is essential for NTIA to adopt a harm-focused framework that targets the negative impacts of AI systems rather than the technology itself. This approach would recognize that AI technology can have diverse applications, with consequences that will depend on the context in which they are used. By prioritizing the mitigation of specific harms, NTIA can ensure that regulations are tailored to address real-world outcomes and provide a more targeted and effective regulatory response.

A harm-focused framework also acknowledges that different AI technologies pose differing levels of risk and potential for misuse. NTIA can play a proactive role in guiding the creation of policies that reflect these nuances, striking a balance between encouraging innovation and ensuring the responsible development and use of AI. By centering the discussion on actual harms and their causes, NTIA can foster meaningful dialogue among stakeholders and facilitate the development of industry best practices designed to minimize negative consequences.

Moreover, this approach ensures that AI accountability policies are consistent with existing laws and regulations, as it emphasizes the need to assess AI-related harms within the context of the broader legal landscape. By aligning AI accountability measures with other established regulatory frameworks, the NTIA can provide clear guidance to AI developers and users, while avoiding redundancy and conflicting regulations. Ultimately, a harm-focused framework allows the NTIA to better address the unique challenges posed by AI technology and foster an assurance ecosystem that prioritizes safety, ethics, and legal compliance without stifling innovation.

IV.    Conclusion

Another risk of the current AI hysteria is that fatigue will set in, and the public will become numbed to potential harms. Overall, this may shrink the public’s appetite for the kinds of legal changes that will be needed to address those actual harms that do emerge. News headlines that push doomsday rhetoric and a community of experts all too eager to respond to the market incentives for apocalyptic projections only exacerbate the risk of that outcome. A recent one-line letter, signed by AI scientists and other notable figures, highlights the problem:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.[86]

Novel harms absolutely will emerge from products that employ AI, as has been the case for every new technology. The introduction of the automobile, for example, created new risks of death and injury from high-speed crashes. But rhetoric about AI being an existential risk on the level of a pandemic or nuclear war is irresponsible.

Perhaps one of the most important positions NTIA can assume, therefore, is that of a calm, collected expert agency that helps restrain the worst impulses to regulate AI out of existence due to blind fear.

In essence, the key challenge confronting policymakers lies in navigating the dichotomy of mitigating actual risks presented by AI, while simultaneously safeguarding the substantial benefits it offers. It is undeniable that the evolution of AI will bring about disruption and may provide a conduit for malevolent actors, just as technologies like the printing press and the internet have done in the past. This does not, however, merit taking an overly cautious stance that would suppress the potential benefits of AI.

As we formulate policy, it is crucial to eschew dystopian science-fiction narratives and instead ground our approach in realistic scenarios. The proposition that computer systems, even those as advanced as AI tools, could spell the end of humanity lacks substantial grounding.

The current state of affairs represents a geo-economic competition to harness the benefits of AI in myriad domains. Contrary to fears that AI poses an existential risk, the real danger may well lie in attempts to overly regulate and stifle the technology’s potential. The indiscriminate imposition of regulations could inadvertently thwart AI advancements, resulting in a loss of potential benefits that could be far more detrimental to social welfare.

[1] AI Accountability Policy Request for Comment, Docket No. 230407-0093, 88 FR 22433, National Telecommunications and Information Administration (Apr. 14, 2023) (“RFC”).

[2] Indeed, this approach appears to be the default position of many policymakers around the world. See, e.g., Mikolaj Barczentewicz, EU’s Compromise AI Legislation Remains Fundamentally Flawed, Truth on the Market (Feb. 8, 2022). The fundamental flaw of this approach is that, while AI techniques use statistics, “statistics also includes areas of study which are not concerned with creating algorithms that can learn from data to make predictions or decisions. While many core concepts in machine learning have their roots in data science and statistics, some of its advanced analytical capabilities do not naturally overlap with these disciplines.” See Explainable AI: The Basics, The Royal Society (2019) at 7 (“Royal Society Briefing”).

[3] John P. Holdren, Cass R. Sunstein, & Islam A. Siddiqui, Memorandum for the Heads of Executive Departments and Agencies, Executive Office of the White House (Jun. 9, 2011), available at

[4] Id.

[5] Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 207.

[6] LLMs are a type of artificial-intelligence model designed to parse and generate human language at a highly sophisticated level. The deployment of LLMs has driven progress in fields such as conversational AI, automated content creation, and improved language understanding across a multitude of applications, leading some to suggest that these models might represent an initial step toward the achievement of artificial general intelligence (AGI). See Alejandro Peña et al., Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs, arXiv (Jun. 5, 2023),

[7] Diffusion models are a type of generative AI built from a hierarchy of denoising autoencoders, which can achieve state-of-the-art results in such tasks as class-conditional image synthesis, super-resolution, inpainting, colorization, and stroke-based synthesis. Unlike other generative models, these likelihood-based models do not exhibit mode collapse and training instabilities. By leveraging parameter sharing, they can model extraordinarily complex distributions of natural images without necessitating billions of parameters, as in autoregressive models. See Robin Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, arXiv (Dec. 20, 2021),

[8] Recommender systems are advanced tools currently used across a wide array of applications, including web services, books, e-learning, tourism, movies, music, e-commerce, news, and television programs, where they provide personalized recommendations to users. Despite recent advancements, there is a pressing need for further improvements and research in order to offer more efficient recommendations that can be applied across a broader range of applications. See Deepjyoti Roy & Mala Dutta, A Systematic Review and Research Perspective on Recommender Systems, 9 J. Big Data 59 (2022), available at

[9] AGI refers to hypothetical future AI systems that possess the ability to understand or learn any intellectual task that a human being can do. While the realization of AGI remains uncertain, it is distinct from the more specialized AI systems currently in use. For a skeptical take on the possibility of AGI, see Roger Penrose, The Emperor’s New Mind (Oxford Univ. Press 1989).

[10] Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890).

[11] Id. at 200.

[12] Id. at 193.

[13] Id. at 196-97.

[14] Notably, courts do try to place a value on emotional distress and related harms. But because these sorts of violations are deeply personal, attempts to quantify such harms in monetary terms are rarely satisfactory to the parties involved.

[15] Martin Giles, Bounty Hunters Tracked People Secretly Using US Phone Giants’ Location Data, MIT Tech. Rev. (Feb. 7, 2019),

[16] See, e.g., Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 439 (1984) (The Supreme Court imported the doctrine of “substantial noninfringing uses” into copyright law from patent law).

[17] A notable example is how the Patriot Act, written to combat terrorism, was ultimately used to take down a sitting governor in a prostitution scandal. See Noam Biale, Eliot Spitzer: From Steamroller to Steamrolled, ACLU, Oct. 29, 2007,

[18] RFC at 22437.

[19] Id. at 22433.

[20] Id. at 22436.

[21] Indeed, the RFC acknowledges that, even as some groups are developing techniques to evaluate AI systems for bias or disparate impact, “[i]t should be recognized that for some features of trustworthy AI, consensus standards may be difficult or impossible to create.” RFC at 22437. Arguably, this problem is inherent to constructing an overaggregated regulator, particularly one that will be asked to consult a broad public on standards and rulemaking.

[22] Id. at 22439.

[23] Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417.

[24] Id.

[25] Id.

[26] Id. at 456.

[27] Id.

[28] See, e.g., Defendant Indicted for Camcording Films in Movie Theaters and for Distributing the Films on Computer Networks; First Prosecution Under Newly-Enacted Family Entertainment Copyright Act, U.S. Dep’t of Justice (Aug. 4, 2005), available at

[29] 17 U.S.C. § 106.

[30] See 17 U.S.C. § 107; Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 590 (1994) (“Since fair use is an affirmative defense, its proponent would have difficulty carrying the burden of demonstrating fair use without favorable evidence about relevant markets.”).

[31] See, e.g., N.Y. Penal Law § 265.01; Wash. Rev. Code Ann. § 9.41.250; Mass. Gen. Laws Ann. ch. 269, § 10(b).

[32] See, e.g., 18 U.S.C.A. § 922(g).

[33] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final. The latest proposed text of the AI Act is available at

[34] Id. at amendment 36 recital 14.

[35] Id.

[36] Id.

[37] See e.g., Mikolaj Barczentewicz, supra note 2.

[38] Id.

[39] Foo Yun Chee, Martin Coulter & Supantha Mukherjee, EU Lawmakers’ Committees Agree Tougher Draft AI Rules, Reuters (May 11, 2023),

[40] See infra at notes 71-77 and accompanying text.

[41] Explainable AI: The Basics, supra note 2 at 8.

[42] See e.g., Delos Prime, EU AI Act to Target US Open Source Software, (May 13, 2023),

[43] Id.

[44] To be clear, it is not certain how such an extraterritorial effect will be obtained, and this is just a proposed amendment to the law. Likely, there will need to be some form of jurisdictional hook, i.e., that this applies only to firms with an EU presence.

[45]  Eliezer Yudkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, Time (Mar. 29, 2023),

[46] See, e.g., Kiran Stacey, UK Should Play Leading Role on Global AI Guidelines, Sunak to Tell Biden, The Guardian (May 31, 2023),

[47] See, e.g., Matthew J. Neidell, Shinsuke Uchida & Marcella Veronesi, The Unintended Effects from Halting Nuclear Power Production: Evidence from Fukushima Daiichi Accident, NBER Working Paper 26395 (2022), (Japan abandoning nuclear energy in the wake of the Fukushima disaster led to decreased energy consumption, which in turn led to increased mortality).

[48] See, e.g., Will Knight, Some Glimpse AGI in ChatGPT. Others Call It a Mirage, Wired (Apr. 10, 2023), (“GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input.”)

[49] Joseph A. Schumpeter, Capitalism, Socialism And Democracy 74 (1976).

[50] See, e.g., Jerry Hausman, Valuation of New Goods Under Perfect and Imperfect Competition, in The Economics Of New Goods 209–67 (Bresnahan & Gordon eds., 1997).

[51] William D. Nordhaus, Schumpeterian Profits in the American Economy: Theory and Measurement, NBER Working Paper No. 10433 (Apr. 2004) at 1, (“We conclude that only a miniscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.”).

[52] See generally Oliver E. Williamson, Markets And Hierarchies, Analysis And Antitrust Implications: A Study In The Economics Of Internal Organization (1975).

[53] See, e.g., Nassim Nicholas Taleb, Antifragile: Things That Gain From Disorder (2012) (“In action, [via negativa] is a recipe for what to avoid, what not to do.”).

[54] Adam Thierer, Permissionless Innovation: The Continuing Case For Comprehensive Technological Freedom (2016).

[55] See, e.g., Artificial Intelligence Act, supra note 33, at amendment 112 recital 66.

[56] Explainable AI: The Basics, supra note 2 at 6.

[57] Cecilia Kang, OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing, NY Times (May 16, 2023),; see also Mike Solana & Nick Russo, Regulate Me, Daddy, Pirate Wires (May 23, 2023),

[58] Cristiano Lima, Biden’s Former Tech Adviser on What Washington is Missing about AI, The Washington Post (May 30, 2023),

[59] Frank H. Easterbrook, supra note 5.

[60]  See Lima, supra note 58 (“I’m not in favor of an approach that would create heavy compliance costs for market entry and that would sort of regulate more abstract harms.”)

[61] Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983),

[62] Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987),

[63] United Mine Workers of Am. v. Pennington, 381 U.S. 657, 661 (1965).

[64] Oliver E. Williamson, Wage Rates as a Barrier to Entry: The Pennington Case in Perspective, 82:1 Q. J. Econ. 85 (1968),

[65] RFC at 22439.

[66] See, e.g., Lima, supra note 58 (“Licensing regimes are the death of competition in most places they operate”).

[67] Kang, supra note 57; Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Technology, and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023) (statement of Sam Altman, at 11), available at

[68] RFC at 22437.

[69] See, e.g., Transcript: Senate Judiciary Subcommittee Hearing on Oversight of AI, Tech Policy Press (May 16, 2023), (“So what I’m trying to do is make sure that you just can’t go build a nuclear power plant. Hey Bob, what would you like to do today? Let’s go build a nuclear power plant. You have a nuclear regulatory commission that governs how you build a plant and is licensed.”)

[70] RFC at 22438.

[71] See, e.g., Raymond J. March, The FDA and the COVID-19: A Political Economy Perspective, 87(4) S. Econ. J. 1210, 1213-16 (2021), (discussing the political economy that drives incentives of bureaucratic agencies in the context of the FDA’s drug-approval process).

[72] RFC at 22434.

[73] Explainable AI: The Basics, supra, note 2 at 12.

[74] Id. at 20.

[75] RFC at 22439.

[76] Explainable AI: The Basics, supra note 2 at 22. (“Not only is the link between explanations and trust complex, but trust in a system may not always be a desirable outcome. There is a risk that, if a system produces convincing but misleading explanations, users might develop a false sense of confidence or understanding, mistakenly believing it is trustworthy as a result.”)

[77] Kate Conger, Hackers’ Fake Claims of Ukrainian Surrender Aren’t Fooling Anyone. So What’s Their Goal?, NY Times (Apr. 5, 2022),

[78] Pranshu Verma, They Thought Loved Ones Were Calling for Help. It Was an AI Scam, The Washington Post (Mar. 5, 2023),

[79] Video: Deepfake Porn Booms in the Age of A.I., NBC News (Apr. 28, 2023),

[80] S5857B, NY State Senate (2018),

[81] See, e.g., Rejent v. Liberation Publications, Inc., 197 A.D.2d 240, 244–45 (1994); see also, Leser v. Penido, 62 A.D.3d 510, 510–11 (2009).

[82] See, e.g., Howell v. New York Post Co., 612 N.E.2d 699 (1993).

[83] See, e.g., Mandarin Trading Ltd. v. Wildenstein, 944 N.E.2d 1104 (2011); 15 U.S.C. §1125(a).

[84] 17 U.S.C. § 106.

[85] RFC at 22440.

[86] Statement on AI Risk, Center for AI Safety, (last visited Jun. 7, 2023).


Comments from the International Center of Law and Economics on The Future of Competition Policy in Canada

Written Testimonies & Filings

Executive Summary

In what the Discussion Paper refers to as a “moment of reckoning” for competition law, it is crucial that the Government not overreact with experimental legislative reform that will later be exceedingly difficult to unwind. Five main conclusions can be drawn from this submission, and they warrant a much more restrained approach.

First, the Government should follow several important guiding principles when it decides what competition policy is appropriate for Canada. Any potential reform should be based on careful examination of the facts and evidence, as well as the specifics of Canada’s economy, and it should be scrupulous in applying the error-costs framework. In addition, despite frequent rhetoric to the contrary, it is entirely unclear that “digital” markets present the sort of unique challenges that would necessitate an overhaul of the Competition Act. Accordingly, evidence does not recommend that Canada follow the sort of competition regulation or reform contemplated elsewhere, nor should Canada be compelled to act just because other countries are “doing something.”

Second, there is no rhyme or reason to presumptions against self-preferencing behavior. Self-preferencing is normal business conduct that can, and often does, yield procompetitive benefits, including efficiencies, enhanced economies of scope, and improved products for consumers. In addition, a ban on self-preferencing would harm the startup ecosystem by discouraging acquisitions by large firms, which would ultimately diminish the incentives for startups. This is presumably not what the Government wants to achieve.

Third, altering the purpose of the Competition Act would be a grave mistake. Competition law does not serve to protect competitors, but competition; nor can harm to competitors be equated with harm to competition. The quintessential task of competition laws—the Competition Act included—is distinguishing between the two, precisely because the distinction is so subtle, yet at the same time so significant. Similarly, “fairness” is a poor lodestar for competition-law enforcement because of its inherent ambiguity. Instead of these or other standards, the Competition Act should remain rooted in the principle of combating “a substantial lessening or prevention of competition.”

Fourth, the Government should exercise extreme caution in its exploration of labour-market monopsony, as altering the merger-control rules to encompass harms to labour risks harming both consumer welfare and the consistency and predictability of competition law.

Fifth, in its impetus to bolster competition-law enforcement by making it “easier” on the Canadian Competition Bureau, the Government should not sacrifice rights of defense and the rule of law for expediency. In this, at least, it can learn from the example of the EU’s Digital Markets Act.


We thank the Government of Canada for the opportunity to comment on its Consultation on the future of competition policy in Canada. The International Center for Law and Economics (ICLE) is a non-profit, nonpartisan research center whose work promotes the use of law & economics methodologies to inform public-policy debates. We believe that intellectually rigorous, data-driven analysis will lead to efficient policy solutions that promote consumer welfare and global economic growth. ICLE’s scholars have written extensively on competition and consumer-protection policy. Some of our writings are included as references in the comment below. Additional materials may be found at our website:

On 17 November 2022, the Canadian Government (“Government”) published a Consultation for the Future of Competition Policy in Canada (“Consultation”) with the purpose of informing the Government’s next steps for improving competition in emerging and digital markets, including potential legislative changes (Government of Canada, 2022). The Consultation builds on a Discussion Paper issued by the Canadian Competition Bureau (“CCB”) entitled “The Future of Competition Policy in Canada” (“Discussion Paper”) which broaches several issues that have been hotly debated, both in Canada and abroad, such as so-called “killer acquisitions,” self-preferencing practices by dominant online platforms, the effects of monopsony power on labour, private damages claims, the necessity of bolstering antitrust enforcement, and deceptive marketing practices (Discussion Paper: 5). While all these questions undoubtedly deserve extensive commentary, we have decided to focus on five issues where we think our expertise in law and economics, as well as our experience in the regulation of digital markets, bring the most added value.

These comments are organized as follows. In Section I, we outline several general principles that guide any effective competition policy, especially in the realm of digital markets. We argue that sound competition policy needs to account for the economic specificities of the jurisdiction that passes it, the significant heterogeneity of digital platforms, and the important error costs associated with regulating digital markets. In Section II we argue that Canada should not follow the EU in imposing outright bans and ex ante obligations for conduct that is ubiquitous in the digital world, such as self-preferencing. We argue, instead, that there are legitimate reasons—ranging from economic efficiency to safety, privacy, and security—to prefer a more restrained, case-by-case approach. We also connect the skepticism toward self-preferencing with a broader, misguided belief that vertical integration is typically anticompetitive, which is not supported by the available evidence.

In Section III, we argue against a range of proposals that would, in one way or another, alter the purpose clause of the Competition Act. We emphasize that competition law serves to protect competition, not competitors; caution against the reliance on amorphous concepts, such as “fairness,” to guide competition-law enforcement; and hold that merger control should remain tethered to a standard of “substantial lessening or prevention of competition.” In Section IV, we explain that, while it may appear politically expedient and attractive, there are serious limits on the extent to which labour effects can be integrated into competition analysis.

Finally, Section V warns against sacrificing effective procedural safeguards and rights of defense for the sake of facilitating enforcement. More generally, we warn against the increasingly prevalent intuition that making enforcement easier is always good, effective, or costless; or that “more enforcement” is synonymous with the public good. Section VI concludes.

I.        Some General Principles for Effective Competition Policy

When done well, competition policy can provide the governing framework for free enterprise—a set of rules that prevent the formation of inefficient monopolies, while allowing markets to deliver benefits to consumers unfettered by heavy-handed government intervention. To achieve this goal, it is essential for competition policy to be grounded in several principles that ensure it achieves a balance between over- and under-deterrence of harmful conduct. These principles include having a competition policy that fits the specific needs and market realities of the jurisdiction enforcing it; ensuring that competition policy is mindful of error-cost considerations; and avoiding a one-size-fits-all approach that treats all markets, notably digital ones, as identical.

A.      Canada Should Implement the Right Competition Rules for Canada

The Consultation appears to assume that Canada’s adversarial system of competition-law enforcement is too archaic to deal with competition issues arising in the modern, digital economy (Ibid: 51), and that Canada is falling behind the regulatory trends set by “international partners,” such as the United States, Australia, and the European Union.

“[The Government] is committed to a renewed role for the Competition Bureau in protecting the public in our modern marketplace, in line with steps taken by many of Canada’s key international partners” (Ibid: 4).

While these trends exist—despite significant variation in terms of scope and legislative progress across jurisdictions—there is currently a dearth of evidence to suggest that they are a positive development. It is even less clear whether emulating them would be the right move, given Canada’s specific market realities.

The EU’s Digital Markets Act (“DMA”), the most comprehensive legislative attempt to “rein in” digital companies, entered into force only last October, and it will not start imposing obligations on gatekeepers until February or March 2024 at the earliest (Grafunder et al., 2022). Nevertheless, its sponsors have predictably touted it as a resounding success and a landmark piece of legislation that will upend the ways in which digital platforms do business. The press has also wasted no time in lionizing the EU’s regulatory pièce de résistance as a “victory” over tech companies, as if the relationship between business and government were a zero-sum game (Abend, 2015; Harris, 2022).

But it is important to carefully consider the facts and evidence. Indeed, while the DMA likely will transform how the targeted companies do business (albeit possibly not in the way the regulation’s supporters assume), the jury is still very much out on the question of whether the DMA is, or will be, a success. The DMA’s origins are enlightening in this regard. Prior to its adoption, many leading European politicians touted the text as a protectionist industrial-policy tool that would hinder U.S. firms to the benefit of European rivals—a far cry from the purely consumer-centric tool it is sometimes made out to be. French Minister of the Economy Bruno Le Maire acknowledged as much, saying (Pollet, 2021): “Digital giants are not just nice companies with whom we need to cooperate, they are rivals, rivals of the states that do not respect our economic rules, which must therefore be regulated… There is no political sovereignty without technological sovereignty. You cannot claim sovereignty if your 5G networks are Chinese, if your satellites are American, if your launchers are Russian and if all the products are imported from outside.”

Andreas Schwab, one of the DMA’s most important backers in the European Parliament, likewise argued that the DMA should focus on non-European firms (Broadbent, 2021): “Let’s focus first on the biggest problems, on the biggest bottlenecks. Let’s go down the line—one, two, three, four, five—and maybe six with Alibaba. But let’s not start with number seven to include a European gatekeeper just to please [U.S. president Joe] Biden.”

Even on its own terms, whether the DMA will achieve its dual goals of “fairness” and contestability is uncertain. Less certain still is whether it will produce negative unintended consequences for consumer prices, product quality, security, innovation, or the rule of law—as some commentators have warned (Auer & Radic, 2023; Barczentewicz, 2022; Colangelo, 2023; Radic, 2022; Ibáñez Colomo, 2021; Cennamo & Santaló, 2023; Bentata, 2021). In a similar vein, no evidence suggests that the competition-law cases against tech companies based on such theories of harm as self-preferencing will withstand the courts’ scrutiny or that they will result in net benefits to consumers or competition.

The still nascent “trends” in other jurisdictions offer even less in terms of evidence to counsel adoption of far-reaching DMA-style solutions like banning self-preferencing, forcing interoperability, or prohibiting the use of data generated by business users. The U.S. antitrust bills targeting a handful of companies seem unlikely to be adopted soon (Kelly, 2022); the UK’s Digital Markets Unit proposal has still not been put to Parliament; and Japan and South Korea have imposed codes of conduct only in narrow areas. The mere prevalence of trends—especially at a tentative stage—is not, on its own, indicative, much less dispositive, of the appropriateness of a regulatory response. It should therefore be treated neutrally by the Government, not with deference.

Second, the Discussion Paper fails to adequately grapple with the possibility that the EU’s regulatory response might not be well-suited to the Canadian context. For one, Canada’s economy is one-eighth as large as the EU’s (Koop, 2022), meaning that it is much less likely to be seen as an essential market by those companies affected by any potential antitrust/regulatory reform. Thus, while the EU can perhaps afford to impose costly and burdensome regulation on digital companies because it has considerable leverage to ensure—with some, though by no means absolute, certainty—that those companies will not desert the European market, Canada’s position is comparatively more precarious. In addition, the EU has an idiosyncratic digital strategy that has produced no notable digital platforms, with the arguable exceptions of Spotify and, and has instead shifted its attention almost entirely to redistributing rents across the supply chain from those digital platforms that have emerged (Manne & Radic, 2022; Manne & Auer, 2019). Even staunch supporters of the DMA have admitted that the DMA will do nothing to help the EU produce its own platforms to challenge the dominant U.S. firms (Caffarra, 2022). The DMA and the European Commission’s recent flurry of cases against U.S. tech companies are arguably an integral part of that overarching strategy.

B.      Regulation Should Be Scrupulously Mindful of Error Costs

With rare exceptions, the Discussion Paper does not sufficiently acknowledge that regulation is neither free of risk nor costless to implement. Legal decision making and enforcement under uncertainty are, however, always difficult, and always potentially costly. The risk of error is always present, given the limits of knowledge, but it is magnified by the precedential nature of judicial decisions: an erroneous outcome affects not only the parties to a particular case, but also all subsequent economic actors operating in “the shadow of the law” (Manne, 2020a). The uncertainty inherent in judicial decision making is further exacerbated in the competition context, where liability turns on the difficult-to-discern economic effects of challenged conduct. This difficulty is magnified further still when competition decisions are made in innovative, fast-moving, poorly understood, or novel market settings—attributes that aptly describe today’s digital economy (Ibid.).

More specifically, Type I errors—i.e., enforcement of the rules against benign or beneficial conduct—might mean reducing firms’ incentives to make investments in areas where free-riding is seen by competitors as a viable strategy (Auer, 2021), thereby reshaping the products that consumers enjoy (such as Apple’s walled-garden iOS model; Canales, 2023; Sohn, 2023; Auer, Manne & Radic, 2022); diminishing quality; or driving up prices (on this last point, see Section II). Where the possibility and likelihood of these costs are not brought into the equation, regulations will exceed the social optimum, to the harm of consumers, taxpayers, and, ultimately, society. To be sure, this is not to say that no regulation or legal reform should ever be undertaken; it is only to say that they should be undertaken within the error-cost framework.

When it comes to considering competition reform, the Government must be careful not to conflate correlation with causation. On several occasions, the Discussion Paper connects certain exogenous phenomena with anemic competition enforcement or a lack of significant competition reform since the 1980s (Discussion Paper: 6-7, 15). While the connection is made rhetorically explicit, however, the Discussion Paper provides no arguments or sources to support it. For instance, it is unclear that heightened competition enforcement would have mitigated the impact of the COVID-19 pandemic or that it attenuates economic inequality, as the Discussion Paper implies. Economic evidence and respect for the rule of law, rather than political expediency, should be the forces driving reform. Lastly, and more generally, if the objectives of the Competition Act are going to be stretched beyond their current understanding to encompass considerations extrinsic to competition—such as protecting the “social landscape and democracy” (Ibid: 7)—a much broader legislative reform is needed. That, in turn, would necessitate substantially more empirical research than the anecdotal evidence currently available on, say, the relationship between economic concentration and un-democratic outcomes (as well as tighter definitions of democracy) (Manne & Stapp, 2019; Stapp, 2019; Manne & Radic, 2022). In this connection, we have often cautioned against a “Swiss Army knife” approach to competition, in favor of tethering it to one quantifiable standard that it is best-placed to deliver (and which is expressly recognised in the Competition Act): providing consumers with competitive prices and product choices (Manne, 2022a; Manne & Hurwitz, 2018). After all, if, as the Discussion Paper suggests, the current iteration of the Competition Act, which focuses specifically on lower prices and product quality for consumers, has not contributed enough to drive down the costs of living for Canadians, why give it even more ambitious goals?

The danger here is threefold. The Competition Act may fail in achieving these ulterior goals; it may, by diluting the importance of prices and product quality for consumers, perform even more poorly at lowering the costs of living; and, lastly, the legal uncertainty resulting from the imposition of a quagmire of conflicting goals may chill efficient conduct (see Section III).

C.      ‘Digital Markets’ Are Not Inherently Prone to Market Failure

While any market or industry may be distinctive in certain regards, it is not at all established that digital markets are so distinctive as to warrant special treatment under the competition rules—much less to justify new legislation. The Discussion Paper assumes, as has become increasingly popular, that digital markets are special because of their data-driven network effects or extreme returns to scale (Discussion Paper: 8-9; Cremer, de Montjoye, & Schweitzer, 2019; Zingales & Lancieri, 2019). The Government, however, should at least contemplate the counterarguments to this assertion.

At the outset, it is worth noting that there is arguably no such thing as a “digital” market. Put differently, every market today—from higher education to supermarkets—employs some level of digital technology, which renders the label “digital” largely superfluous. The flipside of this is that some markets typically seen as the epitome of “digital” rely heavily on physical infrastructure. Online sales platforms like Amazon, for instance, sell physical products, stored in warehouses, through a distribution network made up of a fleet of trucks and planes. Both observations undercut the claim that digital markets embody a distinct kind of competition, one that can be neatly separated from markets across the rest of the Canadian economy.

More fundamentally, digital markets are arguably less prone to “tipping”—i.e., the emergence of runaway leaders whose competitive advantage can no longer be eroded because of their large userbases—than is generally assumed. The value of data in creating network effects is significantly overestimated. It is important to note that network effects, on the one hand, and economies of scope and scale, on the other, are distinct economic phenomena. Whereas economies of scope and scale reflect cost-side savings, network effects “operate through user benefits enhancement as production increases. Network effects are therefore a reflection of consumers’ perception of value” (Tucker, 2019). While there is a common assumption that acquiring sufficient data and expertise is essential to compete in data-heavy industries, the “learning by doing” advantage of data rapidly reaches a point of diminishing returns, as do advantages of scale and scope in data assets (Manne & Auer, 2021). Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around. Indeed, Facebook managed to build a highly successful platform despite being a latecomer relative to established rivals like MySpace (Jacobs, 2015).

Third, and relatedly, network effects in digital markets are rarely insurmountable. Several scholars in recent years have called for more muscular antitrust intervention in networked industries on grounds that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in and raise entry barriers for potential rivals (Discussion Paper: 23). But network effects can also be highly local. “For example, when I consider whether to use Dropbox or another file sharing service, I do not care about the total number of users of Dropbox; instead, I care about how many of my handful of collaborators also use it” (Tucker, 2019). Thus, network effects tend to destabilize market power: “[w]hile network effects facilitate the rapid growth of platforms, they also accelerate their demise” (Ibid.).
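The economic distinction between global and local network effects can be captured in a simple stylized sketch (this is our own illustration, not drawn from Tucker or the Discussion Paper; the notation is purely hypothetical):

```latex
% Stylized user utility in a network of n total users.
% Global network effects: every user values the entire network,
%     u_i = v(n), \quad v'(n) > 0
% Local network effects: user i values only her own subnetwork of
% n_i relevant contacts (e.g., a handful of collaborators),
%     u_i = v(n_i), \quad n_i \ll n
% Under local network effects, an entrant need only attract each
% user's small subnetwork, not the incumbent's entire installed
% base, so the incumbent's aggregate size confers far less protection.
```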

There are countless examples of firms that have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents. Recently, Zoom outcompeted long-established firms with vast client bases and far deeper pockets, such as Microsoft, Cisco, and Google, despite the video-communications market exhibiting several traits typically associated with the existence of network effects (Auer, 2019).[1] Other notable examples include the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In each of these cases, outcomes did not match the predictions of theoretical models (Manne & Stapp, 2019).

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and powerful algorithm are the most likely explanations for its success (Nicolaou, 2019). While these developments certainly do not disprove network-effects theory, they eviscerate the belief, common in antitrust circles, that superior rivals cannot overthrow incumbents in digital markets.

Of course, this will not always be the case. The question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions (Auer, 2022).

Lastly, the widespread assumption that critical, large-scale data are exclusive to a few companies, which then misuse them to distort competition and exclude rivals, is largely unfounded. Data are widely used by a range of industries—not just “digital” services—and they are, or can be, the source of important procompetitive benefits. This is not sufficiently recognized in the Discussion Paper, which instead views data almost exclusively as a “currency” and a barrier to entry that serves to entrench market power. In fact, data can serve to drive innovation, optimize costs, and respond to rapidly changing consumer tastes—among other things (Manne & Auer, 2020: 1355). For instance, data in online search enable customers to find more (and more relevant) products and to compare product quality and price, especially using online reviews. Similarly, e-commerce enables consumers in more remote and thinly populated areas to obtain goods and services that were previously hard to access. Assuming that data are principally a barrier to entry erected to exclude rivals, that access to data should therefore be restricted for certain companies, or that the data at their disposal should be diluted, is not only fundamentally wrong, but also likely to harm consumers.

II.      Canada Should Not Introduce DMA-Style Per Se Prohibitions, nor a Presumption of Illegality for Self-Preferencing

In its section on abuse of dominance, the Discussion Paper toys with the idea of imposing per se prohibitions or presumptions of anticompetitive harm on certain unilateral conduct, notably self-preferencing (Discussion Paper: 31-32). This wariness of self-preferencing is echoed by several scholars, not least Vass Bednar and her co-authors (2022: 28), who argue that:

“In a fair, competitive market, products may come to dominate markets by virtue of being superior to those of competitors in quality, price, or some other characteristic. However, through self-preferencing market operators may gain dominance in specific markets due to the fact that they operate and control how information is presented in the marketplace in which they sell their product. In this way, self-preferencing can undermine the competitive dynamic of these markets, leading to poorer market outcomes. Self-preferencing constitutes an advantage that is not based on the merits of competition, but instead the degree of dominance that the self-preferencing firm has in another market.”

Admittedly, some jurisdictions, including the EU, have prohibited dominant platforms outright from giving preferential treatment to their own products (see, e.g., Article 6(5) of the DMA). But as argued in the previous section, this says nothing on its own about whether Canada should follow suit. Accordingly, Canadian authorities should consider the actual costs and benefits of self-preferencing before they adopt sweeping prohibitions of this sort of conduct.

A.      Self-Preferencing Is Not Presumptively Harmful

Courts and regulators in other countries have recognized that self-preferencing can have important pro-competitive justifications. As the Fifth Interim Report of the Digital Platform Services Inquiry of the Australian Competition and Consumer Commission states:

The ACCC recognises that there may be legitimate justifications for some types of self-preferencing conduct, such as promoting efficiency, or addressing security or privacy concerns, which would need to be carefully considered in developing new obligations. Any new obligations to prevent self-preferencing should be tailored to address specific conduct likely to harm competition, rather than amounting to a broad prohibition on any and all self-preferencing by Designated Digital Platforms (2020: 131).

Indeed, many companies’ business models, from supermarkets to consultancy firms (Moss, 2022), are based on various forms of vertical integration, which includes self-preferencing (Sokol, 2023). In the specific context of online platforms, self-preferencing allows companies to improve the value of their core products and to earn returns so that they have reason to continue investing in their development (Hagiu, Teh, & Wright, 2022; Manne & Bowman, 2020). The EU’s ban on self-preferencing does not contradict this: it merely indicates that, under the DMA, procompetitive justifications and efficiencies are deemed irrelevant—a blunt approach that the Government might reasonably want to avoid.

One important reason why self-preferencing is often procompetitive is that platforms have an incentive to maximize the value of their entire product ecosystem, which includes both the core platform and the services attached to it. Platforms that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product. Those that preference inferior products end up hurting their attractiveness to users of their “core” product, exposing themselves to competition from rivals (Manne, 2020b).

Along similar lines, the notion that it is harmful (notably to innovation) when platforms enter competition with edge providers is unfounded. Indeed, a range of studies show that the opposite is likely true. Platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm (Manne, 2020c).

To cite just a few supportive examples from the empirical literature: Li and Agarwal found that Facebook’s integration of Instagram led to a significant increase in user demand, both for Instagram itself and for the entire category of photography apps. Instagram’s integration with Facebook increased consumer awareness of photography apps, which benefited independent developers, as well as Facebook (Li & Agarwal, 2016). Foerderer et al. found that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally (Foerderer et al., 2018). Cennamo et al. found that video games offered by console firms often become blockbusters and expand the consoles’ installed base. As a result, these games expand the opportunities for independent game developers, even in the face of competition from first-party games (Cennamo, Ozalp, & Kretschmer, 2018). That is, self-preferencing can confer benefits—even net benefits—on competing services, including third-party merchants. Finally, while some have suggested that Zhu and Liu (2018) demonstrate harm from Amazon’s competition with third-party sellers on its platform, the study’s findings are far from clear-cut. As co-author Feng Zhu noted in the Journal of Economics & Management Strategy: “[I]f Amazon’s entries attract more consumers, the expanded customer base could incentivize more third-party sellers to join the platform. As a result, the long-term effects for consumers of Amazon’s entry are not clear” (Zhu, 2018).

The ambivalent effects of self-preferencing are no less true when platforms use data from their services to compete against edge providers. Indeed, critics have argued that it is unfair to third parties using digital platforms to allow the platform’s owner to use the data gathered from its service to design new products, when third parties do not have equal access to that data. That seemingly intuitive complaint was, e.g., the basis for the European Commission’s landmark case against Google (see T-604/18, Google v. Comm’n, 2022 ECLI:EU:T:2022:541). But we cannot assume that conduct harms competition simply because it harms certain competitors (see also Section IIIB). Unambiguously procompetitive conduct, such as price-cutting and product improvements, similarly puts competitors at a disadvantage. Improvements to a digital platform’s service may be superior (or preferred) to alternatives provided by the platform’s third-party sellers, and therefore procompetitive and beneficial to consumers. The alleged harm in such cases is the burden of having to compete with goods and service offerings that offer lower prices, higher quality, or both.

Finally, prohibiting companies from self-preferencing, or significantly constraining their ability to do so, could damage the entire venture-capital-backed ecosystem. If vertical integration is discouraged, large companies will have diminished incentives to acquire startups; and those startups, in turn, will have less incentive to form in the first place (Manne, 2022b). As pointed out recently by Daniel Sokol: “Without the ability to ‘self preference,’ companies will be less willing to acquire new businesses and technologies. The combination of weaker incentives for acquisition along with the inability to use contractual self preferencing will reduce scope economies and integration efficiencies” (Sokol, 2023).

The point applies equally to a firm’s internal investments: that is, a firm might invest in developing a successful platform and ecosystem because it expects to recoup some of that investment through, among other means, preferred treatment for some of its own products. And exercising a measure of control over downstream or adjacent products might drive the platform’s development in the first place. In sum, a hardline approach to self-preferencing would harm consumers, stifle innovation, and disrupt the startup ecosystem. There is also insufficient evidence to justify a presumption of harm or to shift the burden of proof to defendants.

B.      Vertical Integration and the Self-Fulfilling Prophecy of Self-Preferencing

At the most basic level, the misplaced condemnation of self-preferencing stems from another, earlier myth that recently has had a resurgence: the notion that vertical integration is commonly anticompetitive. Indeed, vertical conduct by digital firms—whether through mergers or through contract and unilateral action—frequently arouses the ire of critics of the current antitrust regime. Many critics point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. But the findings of those studies are easily—and often—overstated. There is considerably more empirical evidence that vertical integration tends to be competitively benign. This includes widely acclaimed work by economists Margaret Slade and Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama), whose meta-analysis of vertical transactions led them to conclude:

[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked (Lafontaine & Slade, 2007: 629).

Similarly, a study of vertical restraints by Cooper et al. (2005)—former FTC economists, including a former director of the FTC’s Bureau of Economics and three FTC deputy directors (two former and one current)—finds that “[e]mpirically, vertical restraints appear to reduce price and/or increase output. Thus, absent a good natural experiment to evaluate a particular restraint’s effect, an optimal policy places a heavy burden on plaintiffs to show that a restraint is anticompetitive.” As O’Brien (2008) observed, the literature suggests that diverse vertical practices “have been used to mitigate double marginalization and induce demand increasing activities by retailers. With few exceptions, the literature does not support the view that these practices are used for anticompetitive reasons.”

Subsequent research has tended to reinforce these findings. Reviewing the literature from 2009-18, Lipsky et al. (2018) conclude that more recent studies “continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets” (Lipsky et al., 2018: 8).

Ultimately, the notions that self-preferencing and vertical integration are anticompetitive reinforce each other. Self-preferencing purportedly exemplifies why vertical integration is (or can be) harmful, as only companies that are vertically integrated engage in self-preferencing. At the same time, calls to ban or limit self-preferencing are built on the unsubstantiated intuition that vertical integration itself is generally harmful, which is likely why the negative effects of self-preferencing are summarily presumed, despite a lack of clear and convincing evidence to that effect. The circular logic is evident and fallacious.

None of this is to suggest that proposed vertical mergers should not be subject to scrutiny, or that vertical restraints ought to be per se lawful. It is, in fact, possible for vertical mergers or other vertical conduct to harm competition, and vertical conduct—both unilateral and concerted—should remain subject to fact-specific, rule-of-reason inquiry into its effects on competition and consumers. The evidence does not, however, suggest that a general skepticism of vertical integration is merited, nor does it support a fundamental change in the competition standards or presumptions that apply to vertical integration (Fruits, Manne, & Stout, 2020: 950). As discussed in the previous sub-section, it also does not substantiate a presumption of illegality or a per se prohibition on self-preferencing.

III.    Repurposing the Purpose Clause: Antitrust Should Remain Grounded in Robust Effects Analysis and Efficiencies Should Remain a Viable Defense

There is a clear impetus in the Discussion Paper to degrade, if not shun entirely, evidence of procompetitive effects and efficiency considerations in the context of antitrust enforcement. For example, it is suggested that the Competition Act’s Purpose Clause should be reframed as protecting “fair competition,” with “less focus on competitive effects,” and that this reframing would be in the interest of achieving a “level playing field” (Discussion Paper: 38). The Discussion Paper also proposes broadening the definition of “anti-competitive act” for the purpose of abuse of dominance to ensure that it includes harm toward a competitor, not just to competition (Ibid: 17). In a similar vein, efficiencies are consistently framed as an obstacle to the Government’s ability to block “potentially harmful” deals, rather than as instances where government intervention should rightly be avoided (Ibid: 5).

The Discussion Paper also appears to suggest, albeit less explicitly, the possibility of lowering the evidentiary standard of proof for merger review from “substantial lessening or prevention of competition” to a more enforcer-friendly “appreciable risk” of lessening competition (Ibid: 23). While the combined effect of these proposals would surely be to make enforcement easier for the Bureau, a point we discuss in Section IV, there are also concrete, substantive harms associated with abandoning longstanding competition standards.

A.      Competition Law Serves to Protect Competition, not Competitors

Antitrust law does not serve to protect competitors—only to protect competition. As courts have long recognized, the natural process of competition is such that it results in some companies inevitably abandoning the market. But this is not a flaw to be corrected through antitrust enforcement; it is the central feature of competition. Indeed, as the European Court of Justice has repeatedly held in a well-established line of case-law:

Not every exclusionary effect is necessarily detrimental to competition (see, by analogy, TeliaSonera Sverige, paragraph 43). Competition on the merits may, by definition, lead to the departure from the market or the marginalisation of competitors that are less efficient and so less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation (Case C-209/10 Post Danmark, EU:C:2012:172, para 22).

Repurposing competition law to protect all competitors, rather than competition itself, vitiates the essence of antitrust law, rendering both it and competition pointless. Indeed, at the most essential level, the purpose of the competition rules is to distinguish between conduct that anticompetitively serves to exclude competitors, on the one hand, and competition on the merits that may lead firms to exit the market, on the other. While even first-year law students intuitively understand this critical distinction, it can prove challenging to distinguish between the two in real-world cases. The reason is simple: anticompetitive foreclosure and competition on the merits both ultimately result in the same observable outcome—that rivals exit the market. To draw the line, antitrust enforcers and policymakers have developed a wealth of tools to infer both the root causes and the effects of firms’ market exit, such as, e.g., the “as efficient competitor” test in the EU (Auer & Radic, 2023).

Blurring this subtle but crucial conceptual boundary by reorienting the Competition Act toward the protection of competitors would also have serious economic ramifications. By artificially delaying or preventing firm exit, the Competition Act would have the perverse effect of encouraging free-riding, discouraging efficient firm behavior and, ultimately, harming consumers and the economy as a whole.

B.      “Fairness” Is Not a Useful Goal for Antitrust Law—or Regulation, for that Matter

Fairness is not a concept foreign to antitrust law, nor are fairness considerations new to it (Colangelo, 2023). The concept’s perennial allure lies in its evocation of principles of equality and justice with which few would disagree. (Who, after all, is in favor of “unfairness?”)

The problem lies in the inherent ambiguity of the concept, which makes it much more valuable as a rhetorical device—albeit a politically attractive one—than a working, quantifiable threshold of anticompetitive conduct. Under traditional liberal notions of fairness, understood as equality before the law, the case for redistributing rents away from dominant digital companies—especially where such dominance has resulted from a superior business model, management, and/or product-design decisions—is comparatively weak. On the other hand, if fairness is understood as equality of outcome, then ensuring that rents generated by digital platforms are distributed equally across the supply chain and horizontally to competitors suddenly becomes more defensible.

This conceptual fuzziness is exacerbated by the existence of multiple sets of stakeholders, which diminishes the possibility of identifying “fair” outcomes for any given group. Thus, what may seem like “fair” compensation for access to a platform and customer base from the perspective of, e.g., app developers, may not seem “fair” to the platforms that have invested time, research, and money into developing such a platform, or to low-usage consumers who may be asked to pay more for their devices to compensate developers whose apps they don’t use.

The use of fairness as either a goal of competition policy or a standard to adjudicate antitrust disputes inevitably raises complicated value judgments: Which group should competition authorities favor; what definition of “fairness” should enforcers mobilize; and, more fundamentally, should competition authorities be empowered to make such value-laden judgments in the first place? Contemporary competition policy has traditionally steered clear of these largely intractable questions (Ibid: 12). As the Discussion Paper rightly indicates, the Competition Act “does not proactively dictate how to conduct business, allocate resources among stakeholders, or designate participants, winners or losers in the free market” (Discussion Paper: 13).

And yet, under the inherent uncertainty of a DMA-style fairness standard, the Bureau would inevitably be forced to do just that—whether it wanted to or not. This would subvert the entire edifice of Canadian competition law, ensconcing a new standard as the system’s lodestar with entirely unpredictable material consequences. It would also, and perhaps even more importantly, signal a shift away from the rule of law and toward government discretion, transforming the Bureau from an executive enforcer of the law to a social engineer. Ironically, for all the talk about market concentration and democracy, the inverse relationship between unfettered government discretion and democracy is much better understood, and historically accounted for, than the supposed link between market concentration and undemocratic outcomes (Hayek, 2007, 2011; Mises, 2014; Friedman, 2002).

C.      Merger Control Should Remain Tethered to a “Substantial Lessening or Prevention of Competition” Principle

The Discussion Paper notes that “[o]ne of the antitrust reform bills before the U.S. Senate would modify the legal test for merger intervention from substantial lessening of competition to ‘an appreciable risk of materially lessening competition’” (Discussion Paper: 23). Specifically, the Discussion Paper identifies the U.S. bill’s proposal that the burden of proof for certain mergers be reversed, based on, e.g., increases in concentration, the size of the merger (valuations exceeding US$5 billion), or the identity (and presumed dominance) of the acquiring firm (Ibid). In the alternative, it is suggested that there be a more stringent competition test or reporting threshold for certain sensitive sectors. While the question of the best competition policy for Canada remains paramount, it is worth noting that the U.S. bill was not enacted by the U.S. Congress, and for good reason.

1.     Industry concentration, firm size and mergers

As a background matter, the Government should consider that some of the concerns motivating the failed U.S. legislation stemmed from potentially misleading characterizations of concentration across U.S. industries. Of signal influence was a 2016 brief issued by then-President Barack Obama’s Council of Economic Advisors (“CEA”) (White House, 2016). As observed by Carl Shapiro—a former Obama CEA member and a former chief economist at the U.S. Justice Department’s Antitrust Division—certain statements in the exhibits and the text were potentially (and, for many, actually) misleading:

[S]imply as a matter of measurement, the Economic Census data that are being used to measure trends in concentration do not allow one to measure concentration in relevant antitrust markets, i.e., for the products and locations over which competition actually occurs. As a result, it is far from clear that the reported changes in concentration over time are informative regarding changes in competition over time (Shapiro, 2018: 727-28).

Shapiro did not deny that changes in concentration in specific markets could be concerning. Rather, he pointed out that key indicators in the CEA issue brief were not relevant to competition analysis. For example, cited concentration ratios were far higher than any that should flag competition concerns, and identified industry groupings were far too broad to assess market power in any specific markets (Ibid: 721-722). At bottom: “Industrial organization economists have understood for at least 50 years that it is extremely difficult to measure market concentration across the entire economy in a systematic manner that is both consistent and meaningful” (Ibid: 722).

One approach to assessing the relationship between concentration, profits, and competition is embodied in the Structure-Conduct-Performance (“SCP”) paradigm, which tended to measure concentration by the Herfindahl-Hirschman Index (HHI), and which used specific HHI thresholds for competitive screening or evaluation. But while HHIs may still be used for rough and preliminary screening purposes, merger analysis has—by and large, and for decades—left the SCP framework behind, as both theoretical and empirical work has undermined the approach (Schmalensee, 1989; Evans, Froeb, & Werden, 1993; Berry, 2017; Salinger, 1990; Miller et al., 2022). Industry-specific research has only reinforced the wisdom of rejecting the SCP framework, demonstrating that, e.g., various new screening tools are more accurate than concentration measures in flagging health-care-provider mergers that are potentially anticompetitive (Garmon, 2017).
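For reference, the HHI mentioned above is computed by summing the squared market shares of every firm in the defined market; the concentration bands below are those stated in the 2010 U.S. Horizontal Merger Guidelines:

```latex
% Herfindahl-Hirschman Index, with shares s_i in percentage points:
HHI = \sum_{i=1}^{N} s_i^{2}
% Worked example: four firms with shares 30, 30, 20, and 20 give
% HHI = 900 + 900 + 400 + 400 = 2600 (a "highly concentrated" market).
% 2010 Horizontal Merger Guidelines bands:
%   unconcentrated:          HHI < 1500
%   moderately concentrated: 1500 <= HHI <= 2500
%   highly concentrated:     HHI > 2500
```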

The “substantial lessening of competition” standard focuses on the question of whether harm to competition has occurred, or is likely to occur, with a focus on actual or likely consequences: harm to consumers, often in terms of increased prices, but also in terms of reduced output and nonprice dimensions of competition, such as lower product quality and diminished convenience or availability. Alternatives tend to be less clear, harmful to consumer welfare, or both.

The suggestion that merger policy should alter its methods or standards according to the size of the firm (or firms) involved recalls the “big is bad” approach to antitrust enforcement prevalent in the first half of the twentieth century. That approach, and its attendant assumption of market power (and harm to competition), had no real economic basis:

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. . . .

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates (Berry et al., 2019: 48).

Scale is not an accurate proxy for either market power or anticompetitive conduct. To reimplement the big-is-bad approach risks arbitrary impediments to broad categories of procompetitive mergers, and reduced innovation in business models that would benefit consumers. It would protect inefficient (high-cost) producers from precisely the kinds of competitive pressures that competition law is supposed to foster (Manne & Hurwitz, 2018: 1,6).

To be sure, large tech firms’ impressive scale might appear to imply market power; and such firms, among others, may possess a degree of market power in one or another market. Large firms, like small ones, also may engage in anticompetitive conduct. Nonetheless, and especially in the contemporary tech industry, it is “not unusual for efficient, competitive markets to comprise only a few big, innovative firms. Unlike the textbook models of monopoly markets, these markets tend to exhibit extremely high levels of research and development, continual product evolution, frequent entry, almost as frequent exit—and economies of scope and scale (i.e., ‘bigness’). Size simply does not correlate with anything recognizable as ‘consumer harm’” (Ibid).

A presumption against large firms (and large transactions) would necessarily benefit smaller firms, independent of the question of whether they provide consumers with superior or less-costly goods and services. Indeed, some courts have expressly recognized that deciding competition matters for the purpose of favoring small firms entailed that “occasional higher costs and prices might result from the maintenance of fragmented industries and markets” (Brown Shoe Co. v. United States, 370 U.S. 294, 344 (1962)). Such maintenance has always raised the question of which decision standard should be employed, and what its economic basis should be, as well as the rationale for trading consumer welfare for benefits to certain smaller firms. Not incidentally, thresholds recently proposed for presumptively suspect firms or transactions are such that many very large firms escape heightened scrutiny. That includes firms that may have significant market power in one or more markets. And, of course, small firms might well enjoy significant market power in niche markets.

There remain legitimate debates about the optimal methods and standards for competition policy, but the drive toward a consumer welfare standard, begun in the 1960s and 1970s, ultimately identified a coherent and predictable outcome against which to evaluate both specific competition matters and competition policy: greater consumer welfare is achieved through the condemnation of conduct that suppresses innovation, increases prices, or diminishes desirable nonprice dimensions of goods and services, such as quality and convenience. Application of the consumer welfare standard is not always trivial, but it is generally tractable, and increasingly so, as developments in data sources and industrial-organization economics continue.

A recent policy statement by the U.S. Federal Trade Commission (FTC) illustrates the disadvantages of popular reform proposals that adopt something akin to an “appreciable risk” standard. The FTC had withdrawn its prior Unfair Methods of Competition policy statement and, in doing so, disavowed the consumer welfare standard as “open ended” and capable of delivering “inconsistent and unpredictable results” (Federal Trade Commission, 2021). In its place, the FTC announced a new standard: a prohibition of “unfair” conduct that “tend[s] to negatively affect competitive conditions.”

What that means is not clear. We are told that unfair conduct is “coercive, exploitative, collusive, abusive, deceptive, predatory”—terms that may be evocative in ordinary usage and some of which occur, in dicta, in certain historical U.S. antitrust cases. But those terms have no clear established meaning in Canadian, U.S., or European competition jurisprudence. The statement also declares as unfair any conduct that “involve[s] the use of economic power of a similar nature,” or that “may” be “otherwise restrictive or exclusionary.” That all seems relatively open-ended.

Further, as Gilman and Hurwitz (2022) explain, the phrase "tends to negatively affect competitive conditions" is noteworthy mostly for what it is not. It does not specify either harm to competition or harm to consumers, but rather a tendency (not necessarily a likelihood) to "negatively affect" (perhaps to harm) "competitive conditions." Thus, we have a sort of any-party-in-the-marketplace standard: one concerned with effects on "consumers, workers, or other market participants" and with whether conduct "tends to" affect (negatively) any such party, and one that does not turn on whether the conduct directly caused actual harm in the specific instance at issue. Effects need not be "current" or "measurable" or even "actual." And they need not be likely.

The new FTC standard is certainly no model of clarity. Establishing "harm to consumers, workers, or other market participants" may be more tractable than establishing harm to consumers alone, but only because nearly any potential harm to anyone would seem to suffice, no matter the cost to consumers. Indeed, by disclaiming the need to show either actual or likely harm, as well as the relevance of efficiencies and of relative costs and benefits, the FTC sets the enforcement bar lower still. Whatever degree of unpredictability might attach to the consumer welfare standard, it is impossible to see the FTC's 2022 proposal as an improvement.

The FTC's new policy also appears to buy lower administrative costs at the expense of both predictability and, necessarily, consumer welfare. Fundamentally, the FTC completely ignores the problem of error costs. To the extent that competition policy is concerned with consumer welfare, loose (and seemingly arbitrary) standards will lower administrative costs but increase Type 1 errors (false positives) by sometimes condemning procompetitive or benign conduct as anticompetitive. But amorphous standards may also increase Type 2 errors (false negatives), as enforcement untethered from consumer welfare and economic foundations may well increase the total number of cases and determinations of liability, while missing difficult cases in which real harms might have been found through traditional methods.

Thomas Lambert (2021) employs a decision-theoretic framework to compare competing institutional approaches to competition law and, specifically, to address the market power of large digital platforms, both actual and presumed:

(1) the traditional U.S. antitrust approach; (2) imposition of ex ante conduct rules such as those in the EU’s Digital Markets Act and several bills recently advanced by the Judiciary Committee of the U.S. House of Representatives; and (3) ongoing agency oversight, exemplified by the UK’s newly established “Digital Markets Unit.” After identifying the advantages and disadvantages of each approach, this paper examines how they might play out in the context of digital platforms. . . . [and] shows how three features of the agency oversight model—its broad focus, political susceptibility, and perpetual control—render it particularly vulnerable to rent-seeking efforts and agency capture. The paper concludes that antitrust’s downsides (relative indeterminacy and slowness) are likely to be less significant than those of ex ante conduct rules (large error costs resulting from high informational requirements) and ongoing agency oversight (rent-seeking and agency capture) (Lambert, 2021).

2.     Nascent Competition

Finally, some argue that an "appreciable risk" of competitive harm standard would be more appropriate in the context of acquisitions of nascent or potential competitors. The argument is that, by their nature, the risks associated with acquisitions of nascent competitors are more speculative: given such firms' current size and scope, we cannot know for sure how they will develop, and so we need a standard that can incorporate those risks. The argument is laid out most completely by Steven Salop in his paper Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits. In it, he argues that:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide (Salop, 2021:6).

Taken to its logical conclusion, this approach would support a presumption against any acquisition, because there is always a risk, no matter how remote, that any company could compete with the incumbent in the future. It is unclear how far the qualifier "appreciable" goes toward countering this overly stringent presumption. On this note, it is important to recognize that eliminating a potential competitor is not the same thing as eliminating potential competition. The market power of firms, even monopolists, is disciplined by how close the closest potential competitor is to the incumbent. In the jargon of economics: the marginal competitor matters. How quickly could the marginal competitor enter? How closely could the marginal competitor compete on price?

When there are just two firms in a market, we are confident that the second-largest firm is the marginal competitor for the largest. Once we open consideration to all possible or potential competitors, our ability to know in advance which may provide a disciplinary force greatly decreases. As such, any competition standard needs to recognize such limitations and keep potential-competition challenges to clearly articulated cases.

The FTC's recent challenge to Meta's acquisition of Within serves as a natural experiment showcasing the limits of opening potential-competition challenges to more speculative cases. The FTC's case rested on arguing that Facebook was a potential competitor to Within's virtual-reality fitness app Supernatural. While the judge did not reject the possibility of potential-competition harms in theory, he ultimately rejected the evidence of such harms in this case (Dearborn et al., 2023).

IV.    There Are Serious Limits to Considering the Effects of Mergers on Labour

The Discussion Paper notes "at least two points in the Canadian System where a closer examination of labour effects could occur" (Discussion Paper: 28). Those are, first, "in the evaluation of competitive effects, namely as to whether mergers may result in distortions to the labour market, even if there are no harmful competitive effects downstream"; and second, "in the evaluation of efficiencies, in which reduction of labour may be viewed as efficient or pro-competitive" (Ibid.). We recommend the Commission exercise extreme caution in these areas, as both risk harm to consumer welfare and to the consistency and predictability of competition law.

The Discussion Paper notes "various challenges and pitfalls of applying competition law to labour markets, including, inter alia, the difficulty of integrating the role (and benefits) of technological change and 'creative destruction,'" complexities in assessing compensation holistically, and the question of market definition (Discussion Paper: 28). These measurement difficulties exceed those typically observed in product markets and raise questions regarding whether—and if so, how—to account for trade-offs between, e.g., labour interests and pro-consumer efficiencies and innovation in products, production, or distribution, or between labour interests and consumer welfare.

The concerns cited by the Boyer report are important. For one thing, one cannot distinguish between efficiency gains and the exercise of monopsony power if one looks only to price and quantity in an input market, such as labour. Consider a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices (wages) and quantity purchased (hired) of inputs, such as labour. But the same effect (reduced prices/wages and quantities for inputs) could be observed if the merger is efficiency-enhancing. Downstream output, by contrast, can distinguish the two: efficiency-enhancing mergers will be associated with greater output, as efficiencies achieved through innovation in product offerings, production, management, or distribution lead to increased output. If, on the other hand, the merger increases monopsony power, the post-merger firm will perceive its marginal cost as higher than it was pre-merger, and it will reduce downstream output accordingly (Hemphill & Rose, 2018).

To parse labour markets from downstream product and service markets, and to consider the impact on the latter of “out-of-market” effects, would confound the distinction of efficiency-enhancing mergers from monopsony-creating ones, while simultaneously isolating competition analysis of labour markets from observations of pro-consumer efficiencies. It is unclear whether (and, if so, how) using competition law to discipline alleged harm to labour markets is consistent with the consumer welfare standard, the lodestar of antitrust enforcement, at least as it is currently understood.

Marinescu & Hovenkamp assert that, "[p]roperly defined, the consumer welfare standard applies in exactly the same way to monopsony. Its goal is high output, which comes from the elimination of monopoly power in the purchasing market…. [W]hen consumer welfare is properly defined as targeting monopolistic restrictions on output, it is well suited to address anticompetitive consequences on both the selling and the buying side of markets, and those that affect labor as well as the ones that affect products" (Marinescu & Hovenkamp, 2019).

But there are at least two problems with this reasoning.

First, the claim that harm to input providers alone should be actionable rests on the tenuous assertion that a mere pecuniary transfer is sufficient to establish anticompetitive harm. As Marinescu and Hovenkamp note, "there is merely a transfer away from workers and towards the merging firms. Yet . . . such a transfer is a harm for antitrust law" (Ibid: 1062). But such harms to labour (and other input suppliers) may benefit consumers. In the typical case, at least some of the benefits of employer leverage (relative advantage in negotiation) are passed along to consumers; in the limit, all such benefits are passed on to consumers (Salop, 2010: 342). The main justification for ignoring such cross-market effects is primarily a pragmatic one, but it is considerably diminished by modern analytical methods (Rybnicek & Wright, 2014: 10). Particularly in the context of inputs to a specific output market, these cross-market effects are inextricably linked and hardly beyond calculation.

The assertion that pure pecuniary transfers are actionable is also inconsistent with the fundamental basis for competition law, which seeks to mitigate deadweight loss, not mere pecuniary transfers that do not result in anticompetitive effects (Bork, 2021: 110).

Finally, market definition, too, is a confounding problem for the prospect of labour-market competition analysis. In monopoly cases, enforcers and courts can face enormous challenges in identifying a relevant market. These challenges are multiplied in input markets—especially labour markets—in which monopsony is alleged. Many inputs are highly substitutable across a wide range of industries, firms, and geographies. For example, technological changes such as the development of PEX tubing and quick-connect fittings allow labourers and carpenters to perform work previously done exclusively by plumbers. Technological changes have also expanded the relevant market in skilled labour: remote work during the COVID-19 pandemic, for example, demonstrated that many skilled workers are not bound by geography and compete in national—if not international—labour markets.

At the same time, many labour markets—especially (but not only) lower-wage labour markets—remain local. They have the potential to crosscut both product markets and their associated geographic markets. And both mergers and unilateral conduct can raise questions concerning how to trade harm to labour—e.g., reduced wages, benefits, or jobs—in one locale against benefits in another.

In short, there is a serious knowledge gap to fill before competition authorities can satisfactorily analyze the impact of mergers on labour markets. Until that gap is filled, competition law would do well to limit its focus to output markets.

V.      Bolstering the Bureau's Powers and the 'Effectiveness' of Enforcement Should Not Come at the Expense of Parties' Rights of Defense, the Rule of Law, and Procompetitive Outcomes

One of the key themes of the Discussion Paper is "the often-narrow circumstances where the Competition Bureau can intervene" (Discussion Paper: 4). For example, the Discussion Paper laments that bringing abuse-of-dominance cases is currently too burdensome for the CCB and suggests implementing EU-style presumptions (Ibid: 34-35) or replacing the need to show intent and (likely) effects with a mere capability of anticompetitive effects (Ibid: 37). But the fact that some cases are not easy to bring is not, on its own, a justification for reform (see Section I). Procedural safeguards and burdens of proof exist for a reason: to cabin enforcers' discretion, to ensure that rights of defense and the rule of law are respected, and to minimize errors. Furthermore, "more enforcement" is neither good nor bad in itself. What makes it one or the other is the likelihood and extent of the error costs of intervention versus non-intervention (see Section IB).

In this regard, the EU's experience warns of the risk of granting public authorities extensive powers to enforce novel regulations while treating the rights of defense as an afterthought (Lamadrid, 2022; Auer and Radic, 2023). Like the ethos that undergirds the Discussion Paper, the DMA is propelled by the (dubious) logic that the competition laws in their current form cannot be deployed easily or quickly enough to address the supposedly unique, endemic challenges of "digital" markets (for the opposite view, see Colangelo, 2022).

But this eagerness to intervene at any cost itself comes at a cost. In the EU, for instance, the draft implementing regulation of the DMA (DIR) indulges in serious procedural overreach, which is likely to have significant ramifications for targeted companies, third parties, and the Commission itself. From the outset, the DIR makes clear that the Commission prioritizes procedural effectiveness over procedural fairness (Lamadrid, 2022). It grants parties only a "succinct" (short) right to respond to the Commission's preliminary findings, abridging their rights of defense while imposing no comparable constraint on the Commission in preparing those findings.

Procedural rules exist to protect parties from abuses by the administration, as well as to protect the administration from costly and unnecessary litigation. This has been recognized, in one way or another, by the European courts. Just this past year, two marquee decisions were quashed by the EU courts, at least partially because of procedural irregularities: Qualcomm and Intel. The lesson for the CCB is that, even if the Competition Act is reformed, Canadian law still recognizes robust rights of defense and procedural safeguards that, if breached because of an administrative over-eagerness to "do more," will be promptly checked by the courts.

VI.    Conclusion

In this "moment of reckoning" (Discussion Paper: 6), it is crucial that the Government not overreact with experimental legislative reform that will be exceedingly difficult to unwind. Five main conclusions can be drawn from this submission, and they warrant a much more restrained approach. First, the Government should critically reassess the assumptions that underpin the Discussion Paper. The evidence does not support Canada following the sort of competition regulation or reform contemplated elsewhere, nor should Canada feel compelled to act just because other countries are "doing something." Any potential reform should be based on careful examination of the facts and evidence and should be scrupulous in applying the error-costs framework. In addition, despite frequent rhetoric to the contrary, it is entirely unclear that "digital" markets present the sort of unique challenges that would necessitate an overhaul of the Competition Act.

Second, there is no rhyme or reason to presumptions against self-preferencing behavior. Self-preferencing is normal business conduct that can—and often does—yield procompetitive benefits, including improved economies of scope, greater efficiencies, and improved products for consumers. In addition, a ban on self-preferencing could harm the startup ecosystem by discouraging acquisitions by large firms, which would ultimately diminish the incentives to found and invest in startups. This is presumably not what the Government wants to achieve.

Third, altering the purpose of the Competition Act would be a grave mistake. Competition law does not serve to protect competitors, but competition; nor can harm to competitors be equated with harm to competition. To do so would harm competition and, necessarily, Canadian consumers. The quintessential task of competition laws—the Competition Act included—is to distinguish between the two, precisely because the distinction is so subtle, yet at the same time so significant. Similarly, “fairness” is a poor lodestar for competition-law enforcement because of its inherent ambiguity. Instead of these, or other standards, the Competition Act should remain rooted in the standard of “substantial lessening or prevention of competition.”

Fourth, the Government should exercise extreme caution in addressing labour-market monopsony, as altering the merger-control rules to encompass harms to labour risks both harming consumer welfare and the consistency and predictability of competition law.

Fifth, in its impetus to bolster competition-law enforcement by making it “easier” on the CCB, the Government should not sacrifice rights of defense and the rule of law for expediency. In this, at least, it can learn from the DMA’s example.


References

Abend, L. (2015, May 20). Why This Woman Is Google's Worst Nightmare. Time.

Auer, D. (2021, November 1). The Epic Flaws of Epic's Antitrust Gambit. Truth on the Market.

Auer, D. (2022, January 18). 10 Things the American Innovation and Choice Online Act Gets Wrong. Truth on the Market.

Auer, D. and Manne, G. (2019). Is European Competition Law Protectionist? A Quantitative Analysis of the Commission's Decisions. ICLE Antitrust & Consumer Protection Program Issue Brief.

Auer, D. and Radic, L. (2022). The Growing Legacy of Intel. Journal of European Competition Law and Practice 14:1.

Auer, D. and Radic, L. (2023, January 6). Implementing the DMA: Great Power Requires Great Procedural Safeguards. International Center for Law and Economics.

Auer, D., Manne, G., and Radic, L. (2022, April 1). Assessing Less Restrictive Alternatives and Interbrand Competition in Epic v Apple. Truth on the Market.

Australian Competition Commission. (2022). Digital Platform Services Inquiry, Interim Report No. 5: Regulatory Reform.

Barczentewicz, M. (2022, June 22). DMA Update: It's Still a Privacy Danger. Truth on the Market.

Bednar, V., Qarri, A., and Shaban, R. (2022). Study of Competition Issues in Data-Driven Markets in Canada. The Ministry of Innovation, Science and Economic Development.

Bentata, P. (2022). Regulating "Gatekeepers": Predictable "Unintended Consequences" of the DMA for Users' Welfare. Competition Forum 0031.

Berry, S. (2017). Market Structure and Competition, Redux. FTC Micro Conference.

Berry, S., Gaynor, M., and Scott Morton, F. (2019). Do Increasing Markups Matter? Lessons from Empirical Industrial Organization. Journal of Economic Perspectives 33:3.

Bork, R.H. (2021). The Antitrust Paradox: A Policy at War with Itself. Bork Publishing.

Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.

Broadbent, M. (2021, September 15). Implications of the Digital Markets Act for Transatlantic Cooperation. CSIS.

Caffarra, C. (2022, August 4). Cristina Caffarra on the State of Europe's Antitrust Regulation and Enforcement. Second Request podcast, Capitol Forum. 13:40-14:20.

Canales, K. (2022, September 2). More Americans Are Using iPhones than Android Phones for the First Time Ever, New Report Says: A Major Milestone for Apple. Business Insider.

Cennamo, C., Ozalp, H., and Kretschmer, T. (2018). Platform Architecture and Quality Trade-offs of Multihoming Complements. Information Systems Research 29(2), 461-478.

Cennamo, C. and Santaló, J. (2023). Potential Risks and Unintended Effects of the New EU Digital Markets Act. Open Internet Governance Institute Paper Series 4.

Colangelo, G. (2023, March 13). Fairness and Ambiguity in EU Competition Policy. International Center for Law & Economics.

Colangelo, G. (2023). In Fairness (We Should Not) Trust. The Duplicity of the EU Competition Policy Mantra in Digital Markets. ICLE White Paper.

Cooper, J.C. et al. (2005). Vertical Antitrust Policy as a Matter of Inference. International Journal of Industrial Organization 23.

Crémer, J., de Montjoye, Y., and Schweitzer, H. (2019). Competition Policy for the Digital Era. European Commission.

Dearborn, M.R., et al. (2023). FTC Loses Challenge to Meta-Within Deal, but Court Accepts Viability of Potential Competition Theories. Paul Weiss.

Evans, W.N., Froeb, L.M., and Werden, G.J. (1993). Endogeneity in the Concentration-Price Relationship: Causes, Consequences, and Cures. The Journal of Industrial Economics 41:4.

Federal Trade Commission. (2021, July 1). Statement of Chair Lina M. Khan, Joined by Commissioner Rohit Chopra and Commissioner Rebecca Kelly Slaughter, on the Withdrawal of the Statement of Enforcement Principles Regarding "Unfair Methods of Competition" Under Section 5 of the FTC Act.

Federal Trade Commission. (2022, November 10). Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act.

Foerderer, J., Mithas, S., Heinzl, A., and Kude, T. (2018). Does Platform Owner's Entry Crowd Out Innovation? Evidence from Google Photos. Information Systems Research.

Friedman, M. (2002). Capitalism and Freedom: Fortieth Anniversary Edition. University of Chicago Press.

Fruits, E., Manne, G., and Stout, K. (2020). The Fatal Economic Flaws of the Contemporary Crusade Against Vertical Integration. Kansas Law Review 68(5).

Garmon, C. (2017). The Accuracy of Hospital Screening Methods. The RAND Journal of Economics 48:2.

Gilman, D. and Hurwitz, G. (2022). The FTC's UMC Policy Statement: Untethered from Consumer Welfare and the Rule of Reason. International Center for Law & Economics Issue Brief.

Government of Canada. (2022). Consultation on the Future of Competition Policy in Canada. Innovation, Science and Economic Development Canada.

Government of Canada. (2022). Making Competition Work for Canadians. Innovation, Science and Economic Development.

Grafunder, R., Stefanowicz-Baranska, A., Tamke, M., and Marek, M. (2022).

Harris, A. (2022, September 8). How Margrethe Vestager Got the Upper Hand over Big Tech. Fast Company.

Hayek, F.A. (2007). The Road to Serfdom. University of Chicago Press.

Hayek, F.A. (2011). The Constitution of Liberty: The Definitive Edition. University of Chicago Press.

Hemphill, C.S. and Rose, N.L. (2018). Mergers that Harm Sellers. Yale Law Journal 127.

HM Treasury. (2019). Unlocking Digital Competition: Report of the Digital Competition Expert Panel.

Ibáñez Colomo, P. (2021). The Draft Digital Markets Act: A Legal and Institutional Analysis. Journal of European Competition Law & Practice 12.

Jacobs, H. (2015). Former MySpace CEO Explains Why Facebook Was Able to Dominate Social Media Despite Coming Second. Business Insider.

Kelly, M. (2022, December 20). Congress Blew Its Last Chance to Curb Big Tech's Power. The Verge.

Koop, A. (2022). Top Heavy: Countries by Share of the Global Economy. Visual Capitalist.

Lafontaine, F., and Slade, M. (2007). Vertical Integration and Firm Boundaries: The Evidence. Journal of Economic Literature 45.

Lafontaine, F., and Slade, M. (2008). Exclusive Contracts and Vertical Restraints: Empirical Evidence and Public Policy. In Buccirossi, P. (Ed.), Handbook of Antitrust Economics, 391, 408-09.

Lafontaine, F., and Slade, M. (2010). Transaction Cost Economics and Vertical Market Restrictions: The Evidence. The Antitrust Bulletin 55, 587.

Lamadrid, A. (2022, September 5). The DMA: Procedural Afterthoughts. Chillin' Competition.

Lambert, T.A. (2021). Tech Platforms and Market Power: What's the Optimal Policy Response? Mercatus Working Paper.

Li, Z., and Agarwal, A. (2016). Platform Integration and Demand Spillovers in Complementary Markets: Evidence from Facebook's Integration of Instagram. Management Science.

Lipsky et al. (2018). The Federal Trade Commission's Hearings on Competition and Consumer Protection in the 21st Century, Vertical Mergers, Comment of the Global Antitrust Institute, Antonin Scalia Law School, George Mason University. George Mason Law & Economics Research Paper No. 18-27, 8-9.

Manne, G. (2020a). Error Costs in Digital Markets. The Global Antitrust Institute Report on Digital Economy 3.

Manne, G. (2020b). Invited Statement of Geoffrey A. Manne on House Judiciary Investigation into Competition in Digital Markets: Correcting Common Misconceptions About the State of Antitrust Law and Enforcement. ICLE.

Manne, G. (2020c). Against the Vertical Discrimination Presumption. Foreword, Concurrences N° 2-2020.

Manne, G. (2022a). Testimony of Geoffrey A. Manne, 'Reviving Competition, Part 5: Addressing the Effects of Economic Concentration on America's Food Supply'. ICLE.

Manne, G. (2022b). How Startups Could Be a Casualty of the War on Self-Preferencing. Truth on the Market.

Manne, G., and Auer, D. (2020). Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and their Origins. George Mason Law Review 28(4).

Manne, G., and Hurwitz, G. (2018). Big Tech's Big Time, Big-Scale Problem. Cato Institute Policy Report.

Manne, G., and Radic, L. (2022, January 11). Amazon Italy's Efficiency Offense. Truth on the Market.

Manne, G., and Stapp, A. (2019, December 30). Does Political Power Follow Economic Power? Truth on the Market.

Manne, G., and Stapp, A. (2021, November 1). This Too Shall Pass: Unassailable Monopolies that Were, in Hindsight, Eminently Assailable. Truth on the Market.

Marinescu, I. and Hovenkamp, H.J. (2019). Anticompetitive Mergers in Labor Markets. Indiana Law Journal 94.

Miller, N. et al. (2022). On the Misuse of Regressions of Price on the HHI in Merger Review. Journal of Antitrust Enforcement 10:2.

Mises, L. (2014). Planned Chaos. Ludwig von Mises Institute.

Moss, T. (2022, December 14). Supermarkets Offer More Store Brands to Lure Cost-Conscious Shoppers. The Wall Street Journal.

Nicolaou, A. (2019). How to Become TikTok Famous. Financial Times.

O'Brien, D. (2008). The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems. In Konkurrensverket (report), The Pros and Cons of Vertical Restraints (40-96).

Pollet, M. (2021, December 14). France to Prioritise Digital Regulation, Tech Sovereignty During EU Council Presidency. Euractiv.

Radic, L. (2022, March 25). Final DMA: Now We Know Where We're Going, but We Still Don't Know Why. Truth on the Market.

Radic, L. (2022, November 24). Antitrust & Democracy. ICLE.

Rybnicek, J.M. and Wright, J.D. (2014). Outside In or Inside Out?: Counting Merger Efficiencies Inside and Out of the Relevant Market. An Antitrust Tribute Vol. II.

Salinger, M. (1990). The Concentration-Margins Relationship Reconsidered. Brookings Papers on Economic Activity.

Salop, S.C. (2010). Question: What Is the Real and Proper Antitrust Welfare Standard? Answer: The True Consumer Welfare Standard. Loyola Consumer Law Review 22.

Salop, S.C. (2021). Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits. Georgetown University Law Center.

Schmalensee, R. (1989). Inter-Industry Studies of Structure and Performance. In Schmalensee, R. and Willig, R. (Eds.), Handbook of Industrial Organization (951-1009).

Shapiro, C. (2018). Antitrust in a Time of Populism. International Journal of Industrial Organization 61.

Sohn, J. (2023, February 27). Apple's iPhones Winning Over Gen Z and the World's Premium Market. The Wall Street Journal.

Sokol, D. (2023, March 12). Don't Destroy Entrepreneurship with Poorly Designed Antitrust Legislation. The Hill.

Stapp, A. (2019, December 20). Tim Wu's Bad History: Big Business and the Rise of Fascism. Niskanen Center.

Tucker, C. (2019). Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility. Review of Industrial Organization 54(4).

White House. (2016). Benefits of Competition and Indicators of Market Power. Council of Economic Advisers, CEA Issue Brief.

Zhu, F., and Liu, Q. (2018). Competing with Complementors: An Empirical Look at Amazon.com. Strategic Management Journal 39(10).

Zingales, L. and Lancieri, F. (2019). Stigler Committee on Digital Platforms: Final Report. Stigler Center for the Study of the Economy and the State.


Fairness and Ambiguity in EU Competition Policy

ICLE White Paper


The concept of fairness is not foreign to competition law, nor are considerations of fairness new to it. Persistent uncertainty regarding what constitutes fairness has, however, traditionally counseled against its application as a standalone legal standard. Indeed, antitrust enforcers often have been reluctant to define even what constitutes unfair terms and conditions. Nonetheless, amid a swell of accusations of undue corporate power and market concentration in the digital economy, debates about fairness have recently taken center stage in the policy debate—particularly in Europe, where several recent regulatory interventions have been touted as promoting fairness in digital markets. This paper argues that policymakers are attracted to “fairness” remedies precisely because the term’s meaning is so ambiguous, thus granting them more discretion and room for intervention.


In public debates over the emerging ubiquity of digital markets and platform-business models, the concept of “fairness” has been elevated into a guiding principle of competition-law enforcement. Dissatisfied with the ways that profits are allocated in digital-services markets and decrying what they see as undue corporate power and market concentration, interlocutors in such debates have invoked fairness as the cure for bigness.

This is particularly apparent in the European Union (EU), where several recent legislative initiatives have been adopted with the stated goal of promoting fairness in the digital economy. A central focus of such initiatives is the “gatekeeping” position enjoyed by a few large online platforms, which purportedly allows them to exert intermediation power over whether and under what terms the platform’s business users can reach their end users. As such, critics of so-called “Big Tech” assert, these platforms represent unavoidable trading partners who can exploit their superior bargaining power by imposing unfair contract terms and conditions. Moreover, since they often occupy a dual role—acting simultaneously as intermediaries and as competitors on their own platforms—they may have incentive to discriminate in favor of their own services or subsidiaries (so-called self-preferencing).[1]

In response to the perceived risks generated by these conflicts of interest and imbalances of bargaining power, policymakers in various jurisdictions around the world have proposed or enacted provisions intended to ensure a level playing field and to neutralize the competitive advantages of large intermediary platforms. According to this line of reasoning, Big Tech firms must be compelled to treat fairly both their rivals and the business users hosted on their platforms.

Fairness has therefore become part of the larger debate on the role of competition law in the digital economy, with some militating for more aggressive intervention to ensure fairness and questioning whether the consumer welfare standard should remain the lodestar of antitrust law. Because it eschews many other potential goals of competition law, the argument goes, the consumer welfare standard systematically biases antitrust toward underenforcement,[2] with some even labeling it a “distraction” or a “catch phrase.”[3] Rather than the efficiency-oriented approach favored by the Chicago School, the ostensibly holistic approach that has earned support among progressives would combine competition law with other fields of law in order to take into account such broad social interests and ethical goals as labor protection, wealth inequality, and environmental sustainability.[4]

Considerations of fairness are not, however, new to competition law.[5] The history of antitrust law in the United States, for example, demonstrates that U.S. lawmakers and jurists have long had a profound concern for economic liberty as a notion embedded in the nation’s conception of freedom.[6] After all, “[i]f efficiency is so important in antitrust, then why doesn’t that word, ‘efficiency,’ appear anywhere in the antitrust statutes?”[7] Indeed, antitrust has been described as a body of law designed to promote economic justice, fairness, and opportunity.[8] Therefore, the purpose of antitrust law is to protect the competitive process in service of both prosperity and freedom. Rather than a myopic focus on promoting efficiency, antitrust economics should be concerned with ensuring that competition may flourish among a significant number of rivals in free and open markets.[9] And at the heart of the competitive process is the guarantee that “everyone participating in the open market—consumers, farmers, workers, or anyone else” has the opportunity to choose freely among alternative offers.[10]

This is also evident in the EU, where competition law has always reflected various social, political, and ethical objectives, even as the so-called “more economic approach” was adopted in the late 1990s.[11] Moreover, the goal of ensuring equal opportunity in the marketplace by guaranteeing a level playing field among firms has been incorporated in EU antitrust law, reflecting the influence of the philosophy of Ordoliberalism and the Freiburg School of economic thought.[12] From this perspective, fairness would include the protection of economic freedom, rivalry, the competitive process, and small- and medium-size firms.[13]

Nonetheless, it should not be overlooked that the rise of the Chicago School approach, which affirms the need to anchor antitrust enforcement in objective criteria, was itself a response to the limitations and drawbacks of prioritizing various noneconomic goals in competition law. Precisely because “fairness” is so difficult to both define and delineate, it has traditionally proven unsuitable as a standalone legal standard.[14] The same doubts are raised today by some U.S. scholars regarding the possibility of replacing the consumer welfare standard with what has been called the “competitive process test.”[15]

Like considerations of distribution or justice, debates about fairness are inevitably bedeviled by the existence of many differing and sometimes contradictory definitions, rendering the term’s content undefined and incomplete.[16] Despite its many appealing features in the abstract, fairness is a subjective and vague moral concept and, hence, essentially useless as a decision-making tool. Behavioral economics has provided evidence that fairness motives do affect many people’s behavior and can restrict the actions of profit-seeking firms, while simultaneously confirming that notions of fairness can vary widely among individuals.[17] As a result, it is inherently unclear what benchmark should be applied to measure fairness. This poses a serious challenge for legal certainty, as actors cannot predict ex ante whether a practice will be sanctioned for having trespassed the unfairness threshold. Accordingly, policymakers have been invited to give no weight to fairness in choosing legal rules, but rather to assess policies entirely on the basis of their effects on individuals’ well-being.[18]

As notions of fairness have taken a central place in recent EU regulatory interventions, it is worth investigating whether a clear and enforceable definition has been provided (and, in this case, whether the content of fairness has been specified as a rule or as a standard) or whether the vagueness and ambiguity associated with the term’s meaning can be exploited to grant policymakers convenient procedural shortcuts. Indeed, an unmeasurable goal will tend to be irresistibly attractive to enforcement agencies, as it can mean anything they want it to. This paper aims to demonstrate that the revival of fairness considerations in competition law functions primarily to offer policymakers greater latitude to intervene, relieving them of the burden of economic analysis and allowing them to pursue political ends. Chief among the latter is restoring what the U.S. neo-Brandeisian movement considers the original mission of antitrust law: namely, to ensure a more democratic distribution of power and to protect “small dealers and worthy men.”[19] Rather than being used to assess whether practices are anti-competitive, fairness is used to correct market outcomes.

Similar concerns have been raised about a new policy statement issued recently by the U.S. Federal Trade Commission (FTC) regarding the scope of the agency’s authority to prohibit unfair methods of competition (UMC) under Section 5 of the FTC Act.[20] The FTC points to the legislative record to argue that Section 5 was enacted to protect “smaller, weaker business organizations from the oppressive and unfair competition of their more powerful rivals.”[21] Against the declared aim of “reactivating Section 5,”[22] Commissioner Christine S. Wilson noted in her dissent that, by preferring a “near-per se approach” that discounts or ignores both the business rationales that may underlie challenged conduct and the potential efficiencies that such conduct may generate, the policy statement reflects a “repudiation of the consumer welfare standard and the rule of reason” and resembles the work of an academic or a think tank fellow who “dreams of banning unpopular conduct and remaking the economy.”[23]

This paper is structured as follows. Section I describes how fairness considerations lie at the core of European Commissioner for Competition Margrethe Vestager’s political mandate. Section II examines how the notion of unfairness has been applied in EU antitrust case law. Section III analyzes the use of fairness as a rationale for recent EU legislative initiatives in the digital economy. Section IV illustrates that these initiatives do not provide a meaningful contribution to the application of fairness, either as a standard or as a rule. Section V concludes.

I.        The Vestager Mandate: Fairness as Political Signaling

As has been widely noted, fairness has emerged as a guiding principle of EU competition policy during Commissioner Vestager’s previous and current terms.[24] She has referred to fairness in numerous speeches, characterizing her political mandate as one of advocating vigorously for antitrust rules to uphold notions of fairness. But rather than articulate a substantive standard of fairness that could be applied consistently in antitrust enforcement, Vestager has weaponized the notion of fairness as political signaling.

Among Vestager’s pronouncements on the subject are that “competition policy also reflects an idea of what society should be like” and that this is “the idea of a Europe that works fairly for everyone.”[25] She has contended that “when competition works, we end up with a market that treats people more fairly.”[26] Moreover, Vestager concludes that “fair markets are just what competition is about”[27] and “we all have a responsibility to help build a fairer society.”[28] As the power of digital platforms has grown, Vestager says, “it’s become increasingly clear that we need something more, to keep that power in check, and to keep our digital world open and fair.”[29]

The Europe envisaged by the founders of the Treaty of Rome is, she argues, “one that would bring prosperity and fairness, not just to a few, but to all Europeans.”[30] While some of the commissioner’s speeches invoke fairness primarily in the context of competition giving consumers the power to demand a “fair deal”[31] by ensuring that “their choices and preferences count,”[32] others imply that firms have a responsibility to run their businesses “in a way that is fair to your competitors, fair to your business partners.”[33]

Taken as a whole, her various invocations of fairness frame antitrust law not as economic policy, but as a kind of morality play.[34] Addressing her speeches to the “people,” Vestager emphasizes competition law’s fundamental role in building a fair society.[35]

People don’t just want to be told that open markets make us better off. They want to know that they benefit everyone, not just the powerful few. And that is exactly what competition enforcement is about … public authorities are here to defend the interests of individuals, not just to take care of big corporations. And that everyone, however rich or powerful, has to play by the rules.[36]

II.      EU Antitrust Enforcement: Fairness as a Standard

The notion of fairness is not foreign to EU competition law. The Preamble to the Treaty on the Functioning of the European Union (TFEU) includes a reference to “fair competition.” The TFEU’s antitrust provisions, while prohibiting restrictive agreements and practices, create an exception for those that grant consumers a “fair share” of procompetitive benefits (Article 101). The provisions also prohibit abuses of dominant position that impose “unfair purchase or selling prices” or other “unfair trading conditions” (Article 102). Moreover, Vestager has argued that state-aid rules, which prevent member states from granting companies a selective advantage, likewise reflect the notion of fairness within “the ordinary meaning of the word.”[37]

In general, these provisions endorse a standard-based approach to fairness that specifies the content of the law ex post, rather than a rule-based approach that introduces more specific legal commands ex ante.[38] Because fairness remains undefined and its meaning is disputed, the standard is hard to operationalize.

A.      Unfair Terms and Excessive Pricing

While only a handful of judgments and decisions by the European Court of Justice (CJEU) and the European Commission analyze the notion of unfairness, what these typically share is a focus on clauses that either were not necessary to achieve the purpose of the agreement or that unjustifiably restricted the freedom of the parties.[39] The relationship between unfairness and the absence of a functional connection between the contract’s purpose and the challenged contractual clauses was highlighted in Tetra Pak II[40] and Duales System Deutschland (DSD).[41] It can be inferred from some of the Commission’s other decisions that unfairness may be associated with opaque contractual conditions that render a dominant firm’s counterparties weaker, particularly when those counterparties are unable to understand the terms of the commercial offer in question.[42]

Recent years have seen a revival of cases concerning “unfair prices,” particularly in cases concerned with drug pricing or the collection of royalties.[43] But rather than establish the meaning of fairness, courts and competition authorities have tended toward a rule-based approach to identify unfair prices, developing alternative measures rooted in economic reasoning.[44] Indeed, since United Brands, the CJEU has evaluated whether a price is unfair by determining whether it has a reasonable relation to the economic value of the product.[45] For example, in SABAM, the CJEU confirmed that the royalty rate requested by a collective society should bear relation to the economic value of the copyright work.[46] But courts and antitrust authorities have also struggled to apply the test set out by the CJEU in United Brands to assess whether prices are unfair.[47] As acknowledged in AKKA-LAA, “there is no single adequate method” to evaluate unfair-pricing cases.[48] Given this, Advocate General Nils Wahl has argued that a price charged by a dominant undertaking should be deemed abusive only when no rational economic explanation (other than a firm possessing the capacity and willingness to use its market power) can be found for why it is so high.[49]

B.      Margin Squeeze

Unfair-pricing practices have also been investigated in the context of the margin-squeeze strategy, which is a standalone abuse under EU competition law on grounds that it undermines equality of opportunity between economic operators.[50] Rather than refusing to supply, a vertically integrated dominant firm may instead charge a price for a product on the upstream market that would not allow an equally efficient competitor to compete profitably on a lasting basis with the price the dominant firm charges on the downstream market. A margin squeeze exists if the difference between the retail prices charged by a dominant undertaking and the wholesale prices it charges its competitors for comparable services is negative, or insufficient to cover the product-specific costs to the dominant operator of providing its own retail services to end-users.[51] Accordingly, the unfair spread between the upstream price and the retail price is deemed exclusionary when it squeezes rivals’ margins on the retail market, thereby undermining their ability to compete on equal terms. The dominant player is therefore required to leave its rivals a fair margin between the wholesale and retail prices.[52]
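The arithmetic of this test can be made concrete. Below is a minimal, hypothetical sketch of the spread comparison described above; all prices and costs are invented for illustration and are not drawn from any actual case or decision.

```python
def margin_squeeze(retail_price: float,
                   wholesale_price: float,
                   downstream_cost: float) -> bool:
    """Return True if the spread between the dominant firm's retail price
    and the wholesale price it charges rivals is negative, or is
    insufficient to cover the dominant firm's own product-specific
    downstream costs (the "equally efficient competitor" comparison)."""
    spread = retail_price - wholesale_price
    return spread < downstream_cost

# A rival paying 8 at wholesale cannot profitably match a retail price
# of 10 if serving retail customers costs 3 per unit: spread of 2 < 3.
print(margin_squeeze(10.0, 8.0, 3.0))  # True: margin is squeezed

# With a wholesale price of 6, the spread of 4 covers the cost of 3.
print(margin_squeeze(10.0, 6.0, 3.0))  # False: no squeeze
```

On this logic, the dominant firm must leave rivals a spread between wholesale and retail prices at least sufficient to cover the downstream costs an equally efficient competitor would incur.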

C.      FRAND-Encumbered SEPs

The notion of fairness has also been raised in the context of standard-essential patents (SEPs), whose holders are subject to fair, reasonable, and non-discriminatory (FRAND) licensing obligations.[53] The process of developing standards can create opportunities for companies to engage in anticompetitive behavior where such standards give rise to holdup problems involving the strategic use of patents. The claim is that SEPs confer market power because the standardization process leads to the exclusion of alternative technologies. As a consequence, SEP owners enjoy ex post monopoly power that could enable them to charge excessively high royalty rates in their licensing agreements or to constructively refuse to license their patents.

To address these concerns, standard-setting organizations (SSOs) typically require SEP holders to submit FRAND commitments. The goal is to make SEPs available at a price equivalent to what the patents would have been worth in the market before they were declared essential.

It is a matter of debate, however, whether FRAND commitments can effectively prevent SEP owners from imposing excessive royalty obligations on licensees. In fact, there are no generally agreed-upon tests to determine whether a particular license does or does not satisfy a FRAND commitment. There is also little consensus regarding the legal effects of FRAND commitments, such as whether they imply a waiver of the general law of remedies (more precisely, injunctive relief and other extraordinary remedies). Such broad uncertainty has prompted a wave of litigation around the globe in recent decades.

While some SSOs and courts have moved toward a rule-based approach to define fair/reasonable rates and to develop methods for the valuation of FRAND royalties, the CJEU in Huawei[54] endorsed a hybrid approach.[55] Indeed, rather than define the meaning of FRAND (which remains left to a standard-based approach), the CJEU imposed a procedural framework for good-faith SEP-licensing negotiations. The framework identifies the steps that patent holders and implementers must follow in negotiating FRAND royalties, with the threats of antitrust liability and patent enforcement as levers to steer the parties toward a mutually agreeable level. Nonetheless, none of these approaches has thus far proven effective in reducing either uncertainty or litigation.

D.     Abuse of Economic Dependence

Over the years, several EU member states have adopted provisions related to the abuse of economic dependence (also known as relative market power or superior bargaining power), creating yet another context in which the unfairness of terms and conditions may be implicated.[56] Rules forbidding the abuse of economic dependence reflect concerns about the asymmetry of economic power in business-to-business relationships, which is considered a potential source of unfair-trading practices.

Although abuse of economic dependence is not regulated at the EU level, national-level legislation is authorized by Article 3(2) of Regulation 1/2003 on the implementation of competition rules, which allows member states to adopt and apply stricter laws prohibiting or sanctioning unilateral conduct.[57] Recital 8 of the regulation refers specifically to national provisions that prohibit or impose sanctions on abusive behavior toward economically dependent undertakings.

Economic dependence is typically the result of significant switching costs that may lock a party into a business relationship and prevent it from finding equivalent alternative solutions. Therefore, evaluations of economic dependence include examining the amount of relationship-specific investment the dependent firm has undertaken (i.e., investments required to support its trading relationship), which may expose weak parties to holdup, as well as whether the counterparty should be considered an unavoidable trading partner because of its exclusive control over an essential input.

It is worth noting that recent legislative initiatives signal a willingness by EU member states to rely on abuse-of-economic-dependence claims to tackle digital platforms’ purportedly unfair conduct and trading relationship with business users. In 2020, Belgium approved an amendment to its Code of Economic Law to insert a provision on abuse of economic dependence,[58] with lawmakers making specific reference to the perceived legislative gap concerning digital platforms. In 2021, alongside its new antitrust tool focused on firms of “paramount significance for competition across markets,” the German Bundestag extended its economic-dependence provision to target firms acting as “intermediaries on multi-sided markets,” insofar as business users are significantly dependent on their intermediary services to access supply and sales markets such that sufficient and reasonable alternatives do not exist.[59] Finally, in 2022, the Italian Annual Competition Law included a specific provision introducing a rebuttable presumption of economic dependence when a firm uses intermediation services provided by a digital platform that play a “key role” in reaching end users or suppliers due to network effects or the availability of data.[60]

E.      Summary of Findings

There are two primary takeaways from this brief overview of fairness in EU antitrust law. First, despite some references in the TFEU, antitrust enforcers have traditionally been reluctant to engage with the unfairness of terms and conditions. Uncertainty regarding the definition and legal boundaries of fairness makes it challenging to use as an actionable standard for the evaluation of anticompetitive behavior. Second, if recent case law is suggestive of how attitudes about the use of fairness in antitrust are evolving, courts and competition authorities likely will continue to prefer that fairness be anchored in specific economic values or a detailed code of conduct (i.e., switching to a rule-based approach), rather than relying on political or moral considerations. The ongoing disputes over how to assess whether prices are excessive, as well as how to determine “fair” royalties for SEPs, suggest that questions about the scope and nature of unfair conduct cannot be usefully resolved by references to “the ordinary meaning of the word.”

Moreover, while fairness is explicitly mentioned in exploitative-abuse cases, Article 102 TFEU makes no reference to fairness as a benchmark for such cases. In this regard, the CJEU’s Servizio Elettrico Nazionale ruling affirmed the effects-based approach the court would take to assessing the abusive nature of unfair practices.[61] Notably, the CJEU definitively stated that competition law is not intended to protect the existing structure of the market, but rather that the ultimate goal of antitrust intervention is the protection of consumer welfare.[62] Accordingly, as the court previously found in Intel, not every exclusionary effect is necessarily detrimental to competition.[63] Competition on the merits may, by definition, mean that less-efficient competitors who are less attractive to consumers in terms of price, choice, quality, or innovation may be marginalized or forced to exit the market.[64]

III.    EU Competition Policy in Digital Markets: Fairness as a Rule?

The preceding overview of EU antitrust enforcement demonstrates that, despite recent political interest in the subject of fairness, authorities and courts continue to struggle to apply it as a substantive standard. Commissioner Vestager’s fairness agenda nonetheless permeates several recent legislative initiatives to regulate the digital economy through specific rules, rather than a general standard.

A common feature of these interventions is their preoccupation with the intermediation (or bottleneck) power that some large online platforms may wield vis-à-vis business users, to the extent that they may be unavoidable trading partners in a wide range of contexts. As a result, proponents argue, the interventions are needed to ensure a level playing field and to prevent unfair behavior to the detriment of business users.

A.      Platform-to-Business Regulation

In 2019, the EU adopted the regulation on promoting fairness and transparency for business users of online intermediation services (P2B Regulation).[65] Its aim was to lay down rules to ensure that digital intermediation platforms and search engines grant appropriate transparency, fairness, and effective redress to business users and corporate website users, respectively.[66] According to the P2B Regulation, online intermediation services can be “crucial” for the commercial success of firms that use such services to reach consumers. Given that dependence, such platforms often have superior bargaining power that enables them to behave unilaterally in ways that can be unfair and harmful to the legitimate interests of their business users and, indirectly, of consumers.[67]

While fairness is referenced in the P2B Regulation’s formal title, its provisions are more concerned with enhancing transparency than with forbidding or prescribing specific conduct. Nonetheless, the regulation left open the potential for further measures if its provisions proved insufficient to adequately address imbalances and unfair commercial practices in the sector.[68] A few months after the P2B Regulation was promulgated, the European Commission unveiled, in a communication to the European Parliament, its view of the circumstances under which further legislative intervention would be needed.[69] Since platforms that act as “private gatekeepers to markets, customers and information” may jeopardize the fairness and openness of markets, and “competition policy alone cannot address all the systemic problems that may arise in the platform economy,” the Commission noted that additional rules may still be needed to ensure contestability, fairness, and innovation in digital markets, as well as the possibility of market entry.[70] Notably, the Commission’s declared policy goal was to ensure “a level playing field for businesses,” which it argued “is more important than ever” in the digital era.[71]

B.      Digital Markets Act

It was against this backdrop that the European Commission proposed the Digital Markets Act (DMA),[72] with the goal of ensuring “contestability and fairness” for digital markets.[73] In the Commission’s view, the distinctive characteristics of digital services (i.e., the presence of strong economies of scale, indirect network effects, economies of scope due to the role of data as a critical input, and conglomerate effects, along with consumers’ behavioral biases and single-homing tendency) generate significant barriers to entry that confer gatekeeping power on certain large platforms.[74]

The Commission warned that this situation would lead to “serious imbalances in bargaining power and, consequently, to unfair practices and conditions” both for business users and for platforms’ end users, to the detriment of prices, quality, “fair competition,” choice, and innovation in the market.[75] Moreover, gatekeepers frequently play a dual role, being simultaneously operators of a marketplace and sellers of their own products and services in competition with rival sellers.[76] Therefore, the Commission contended, rules are needed to prevent gatekeepers from unfairly benefitting and to impose on them a special responsibility to ensure a level playing field, which de facto amounts to the introduction of a platform-neutrality regime.[77]

Implicit in the DMA is the presumption that market processes are often incapable of ensuring “fair economic outcomes” with regard to core platform services,[78] apparently requiring a rethinking of competition policy. Under this view, competition law is deemed unfit to effectively address challenges posed by gatekeepers that are not necessarily dominant in competition-law terms.[79] Indeed, antitrust is limited to certain examples of market power (e.g., dominance on specific markets) and of anti-competitive behavior.[80] Further, its enforcement occurs ex post and requires an extensive investigation on a case-by-case basis of what are often very complex facts.[81]

The DMA therefore aims to protect a different legal interest from antitrust rules. Rather than protect undistorted competition on any given market, as defined in competition law terms, the DMA seeks to ensure that markets where gatekeepers are present are and remain “contestable and fair,” independent of the actual, likely, or presumed effects of gatekeeper conduct.[82] As a result, it introduces a set of ex ante obligations for online platforms designated as gatekeepers, thereby effectively relieving enforcers of the responsibility to define relevant markets, prove dominance, and measure market effects.

Despite that proclaimed protection of a different legal interest, however, there is no indication that the DMA’s promotion of fairness and contestability differs from the substance and scope of competition law.[83] The draft DMA defined neither fairness nor contestability, nor did it indicate how the obligations it would impose on digital gatekeepers were intended to deliver each objective. The final version fills part of this gap by including a definition of these goals. With regard to contestability, the DMA targets practices that increase barriers to entry or expansion in digital markets and imposes obligations that tend to lower these barriers.[84] Therefore, contestability relates to firms’ ability to “effectively overcome barriers to entry and expansion and challenge the gatekeeper on the merits of their products and services.”[85] With respect to fairness, the obligations seek to address the “imbalance between the rights and obligations of business users” that allows gatekeepers to obtain a “disproportionate advantage” by appropriating the benefits of market participants’ contributions.[86] Indeed, “[d]ue to their gateway position and superior bargaining power, it is possible that gatekeepers engage in behaviour that does not allow others to capture fully the benefits of their own contributions, and unilaterally set unbalanced conditions for the use of their core platform services or services provided together with, or in support of, their core platform services.”[87]

Nonetheless, the DMA also considers fairness to be “intertwined” with contestability.[88] “The lack of, or weak, contestability for a certain service can enable a gatekeeper to engage in unfair practices. Similarly, unfair practices by a gatekeeper can reduce the possibility for business users or others to contest the gatekeeper’s position.”[89] Therefore, an obligation may address both. Unfortunately, because the DMA does not index the obligations based on the specific goal they purportedly advance, it also does not clarify which obligations are intended to safeguard contestability and/or promote fairness. This is despite the fact that the title of the DMA’s Chapter III refers to practices of gatekeepers that limit contestability “or” are unfair.[90]

The confusion between the two policy goals is confirmed in several passages of the text, which refer indiscriminately to contestability “and” fairness.[91] In line with the definition of contestability and fairness provided in the DMA, the table below summarizes the obligations according to protected interests and principal beneficiaries.

The vast majority of the DMA’s provisions seek to promote contestability. Most are clearly described in this way, with explicit references to terms such as contestability, switching, multi-homing, and barriers to entry and expansion.[92] Two of the provisions instead introduce pure transparency obligations. Although they are described as serving to promote contestability and fairness,[93] they do not appear either to affect the imbalance of bargaining power or to lower barriers to entry and expansion.

An interesting case is the ban on “sherlocking” (i.e., the use of business users’ data to compete against them), which apparently serves neither of the proclaimed goals. Indeed, even if the prohibition is justified as preventing gatekeepers from unfairly benefitting from their dual role,[94] the characterization of the conduct in question does not match the definition of fairness provided in Recital 33.

The goal of fairness is almost always confused (rectius, “intertwined”) with contestability. Indeed, some provisions are justified on grounds that the imposition of contractual terms and conditions by gatekeepers may limit inter-platform contestability.[95] Other provisions are deemed necessary to promote multi-homing and to prevent reinforcing business users’ dependence on gatekeepers’ core platform services.[96] Further, to ensure a “fair commercial environment” and to protect the contestability of the digital sector, the DMA considers it important to safeguard the right to raise concerns about unfair practices by gatekeepers.[97] Moreover, the DMA contends that, since certain services are “crucial” for business users, gatekeepers should not be allowed to leverage their position against their dependent business users and therefore “the freedom of the business user to choose alternative services” should be protected.[98] Finally, the law suggests that some practices should be prohibited because they give gatekeepers a means to capture and lock in new business users and end users, thus raising barriers to entry.[99]

Thus, there is significant definitional overlap between contestability and fairness under the DMA. Further, while Recital 33 links the notion of fairness to the imbalance between business users’ rights and obligations, some provisions also protect end users against unfair practices.[100] The law also embraces fairness as a notion applicable to both contractual terms and market outcomes. Indeed, in order to justify intervention that exceeds traditional antitrust rules, the DMA states that market processes are often incapable of ensuring “fair economic outcomes” with regard to core platform services.[101] In other words, rather than concern itself with specific practices, the DMA’s approach to fairness starts with a presumption that the outcome is unfair and regulates some practices to redress this.

Article 6(12) is the only provision clearly aimed solely at ensuring fairness as defined in Recital 33. Indeed, in describing the FRAND access obligation, Recital 62 includes several keywords from that definition, stating that pricing or other general-access conditions should be considered unfair if they lead to an “imbalance of rights and obligations” imposed on business users or confer a “disproportionate advantage” on the gatekeeper. But “fairness” in such circumstances acts as a standard rather than a rule. To avoid the scenario already illustrated with regard to SEPs, Recital 62 provides some benchmarks to determine the fairness of general-access conditions.

Article 5(3) forbids parity clauses, also known as most-favored nation (MFN) agreements or across-platform parity agreements (APPAs). The provision bans both the broad and narrow versions of such clauses, thereby prohibiting gatekeepers from restricting business users’ ability to offer products or services under more favorable conditions through other online intermediation services or through direct online sales channels. The DMA maintains that, while the broad version of the parity clause may limit inter-platform contestability, its narrow version would unfairly restrain business users’ freedom to use direct online sales channels.[102]

To the extent that the rationale for the ban is to protect weak business parties against the superior bargaining power exerted by digital intermediaries, the potential effects of broad and narrow MFNs differ significantly. While broad parity clauses are more likely to produce net anti-competitive effects, efficiency justifications related to the protection of platforms’ investments against the risk of free riding usually prevail in the case of narrow parity clauses. Indeed, the original DMA proposal only forbade broad MFNs, as the European Commission has traditionally endorsed a case-by-case analysis of their effects under competition law.[103] The more lenient approach toward narrow MFNs is seen in the new guidelines on vertical restraints, where it is stated that narrow retail-parity obligations are more likely to fulfil the conditions of Article 101(3) TFEU than across-platform retail parity obligations “primarily because their restrictive effects are generally less severe and therefore more likely to be outweighed by efficiencies” and “[m]oreover, the risk of free riding by sellers of goods or services via their direct sales channels may be higher, in particular because the seller incurs no platform commission costs on its direct sales.”[104]

By banning narrow MFNs, the final version of the DMA disregards these efficiency justifications. A fuller notion of fairness would be concerned not only with gatekeepers’ disproportionate advantage, but also with the risk of free riding by business users, which may reduce the incentive to invest in platform development.[105] Indeed, relying on the definition provided in Recital 33, this could be a case where fairness may even be invoked by a gatekeeper against business users, because the former may be unable to fully capture the benefits of its own investment.

C.      Data Act

Ambiguity about the notion of fairness also characterizes the proposed Data Act.[106] On the one hand, the proposal pursues the goal of “fairness in the allocation of value from data” among actors in the data economy.[107] This concern stems from the observation that the value of data is concentrated in the hands of relatively few large companies, while the data produced by connected products or related services are an important input for aftermarket, ancillary, and other services.[108] Given this, the Data Act attempts to facilitate access to and use of data by consumers and businesses, while preserving incentives to invest in ways of generating value from data. On the other hand, to ensure fairness in the underlying data-processing services and infrastructure, the proposal seeks “fairer and more competitive markets” for data-processing services, such as cloud-computing services.[109]

Moreover, such objectives include operationalizing rules to ensure “fairness in data sharing contracts.”[110] Notably, to prevent the exploitation of contractual imbalances that hinder fair data access and use for small or medium-sized enterprises (SMEs),[111] Chapter IV of the Data Act addresses unfair contractual terms in data-sharing contracts in situations where a contractual term is imposed unilaterally by one party on an SME. The proposal justifies this requirement by assuming that SMEs will typically be in a weaker bargaining position, without meaningful ability to negotiate the conditions for access to data, and are thus often left with no choice but to accept take-it-or-leave-it contractual terms.[112]

Terms imposed unilaterally on SMEs are subject to an unfairness test,[113] where a contractual term is considered unfair if it is of such a nature that its use grossly deviates from good commercial practice, contrary to good faith and fair dealing.[114] But given how vague and broad concepts such as “gross deviation from good commercial practices” or “contrary to good faith and fair dealing” are, the unfairness test may simply serve to generate further uncertainty, which could be heightened by potential differing interpretations at the national level.

Therefore, rather than outline specific rules, the proposed Data Act opts for a standard-based approach and provides a yardstick to interpret the unfairness test.[115] Article 13 includes a list of terms that are always considered unfair and another list of terms that are presumed to be unfair. If a contractual term is not included in these lists, the general unfairness provision applies. Moreover, model contractual terms recommended by the Commission may assist commercial parties in concluding contracts based on fair terms.

Some terms considered unfair by the Data Act are clearly inspired by the abuse-of-economic-dependence standard.[116] Given the implicit parallel between data dependence and economic dependence, limiting the scope of application of Article 13 to SMEs is not justified.[117] Indeed, abuse-of-economic-dependence cases involve scrutinizing the unfairness of terms and conditions due to the imbalance of bargaining power between business parties, regardless of the size of the players involved. Moreover, in the case of data-sharing contracts, such imbalance would be generated by data dependence, which may also emerge when SMEs exert control over certain data.

In summary, to achieve a greater balance in the distribution of the economic value of data among actors, the Data Act addresses the fairness of both contractual terms and market outcomes. The creation of a cross-sectoral governance framework for data access and use aims to ensure contractual fairness by rebalancing the bargaining power of SMEs vis-à-vis large players in data-sharing contracts.[118] Fairer and more competitive market outcomes would, in turn, be promoted in aftermarkets and in data-processing services.[119]

D.     Summary of Findings

Recent EU legislative efforts motivated by the objective of promoting fairness in digital markets have thus far appeared to confirm traditional doubts about the possibility of relying on fairness as a suitable tool to assess anti-competitiveness.

If fairness has proven unsuitable to serve as a substantive standard in EU competition-law enforcement, the shift toward a rule-based approach does not appear to provide a significant improvement. Fairness remains a vague overarching goal. The envisaged black-and-white rules do not plainly address fairness, which instead is still essentially treated according to a standard-based approach. Moreover, the lack of clarity about the meaning of the term and the boundaries of its scope remains a relevant and thorny issue.

Indeed, the recent initiatives apply fundamentally different concepts of fairness. While the P2B Regulation treats fairness as de facto equivalent to transparency rules, the DMA defines it as referring to an imbalance in bargaining power that prevents a fair sharing of value among all the players that contribute to a platform ecosystem. That definition notwithstanding, almost all of the DMA’s obligations putatively intended to promote fairness are, in effect, aimed at promoting contestability. Furthermore, the only provision clearly aimed at ensuring fairness as defined in the DMA relies on a standard-based approach. In a similar vein, the proposed Data Act treats fairness as a standard, introducing contractual protections based solely on the size of the players (i.e., SMEs) and providing a yardstick to apply the unfairness test.

IV.    Fairness as a Blanket License for Regulatory Intervention

Alongside the apparent difficulties in operationalizing fairness as either a standard or a rule, in practice, the lines separating fairness in the process from the outcomes of competition are inevitably blurred.[120] After all, Commissioner Vestager has not hidden her dissatisfaction with current market outcomes, showing an inclination to evaluate market structure as a proxy for fairness. Despite the efforts to describe efficiency and fairness as converging objectives for competition-policy enforcers, she implicitly acknowledged the trade-off between these goals.[121] Notably, Vestager argued that “[i]t’s true that competition, by its very nature, involves winners and losers. But as long as the social market economy is working properly, the efficiency gains that accrue from this process can be fairly and justly shared across all stakeholders.”

It is hard to deny the fundamental contradiction between defending efficient markets and promoting distributive justice. It is also difficult to reconcile Vestager’s message with the CJEU’s well-established principle that exclusionary effects do not necessarily undermine competition.[122] Indeed, rather than interpret fairness as equality of initial opportunities, Vestager explicitly refers to the fairness of market outcomes.

From this perspective, it would be more coherent to state that the reason why there is no clash between efficiency and fairness is because they perform different functions. While the former acts as a substantive standard for antitrust enforcement, the latter is a mere aspiration that has proven useful for political signaling.

It is not surprising that the recent push to revive fairness considerations in digital markets has originated outside the competition-law framework. Such policy choices implicitly acknowledge the impossibility of using fairness as an alternative standard to competition on the merits in antitrust law. As recently recalled by the CJEU, the ultimate goal of antitrust intervention is the protection of consumer welfare, rather than any particular market structure. The exclusion of as-efficient competitors is key to triggering antitrust liability for competition foreclosure. Therefore, for those who pursue the political agenda of building a fairer society,[123] it is necessary to bypass competition law, arguing—as the DMA does—that it is unfit to address the new challenges posed by digital gatekeepers. Indeed, in the setting of per se regulation, fairness can be invoked to justify more discretion, disregarding economic analysis and demonstration of the anticompetitive effects of conduct.

Against this background, the definition of fairness envisaged by the DMA (as protection against the asymmetric negotiating power of digital gatekeepers vis-à-vis business users to ensure an adequate sharing of the surplus) appears insufficient to provide the much-needed limits to its scope of application. This particular flavor of distributive justice may, indeed, favor regulatory capture, justifying interventions that actually reflect rent-seeking strategies aimed at shielding some legacy players from competition at the expense of consumers.

This is apparently the case with some EU policy initiatives such as the directive on copyright in the Digital Single Market.[124] In line with the proclaimed purpose of achieving “a well-functioning and fair marketplace for copyright,”[125] the directive grants to publishers a right to control the reproduction of digital summaries of press publications, which currently are often offered by information-service providers.[126] The new right aims to address the value gap dispute between digital platforms and news publishers, as the former are accused of capturing a huge share of the advertising revenue that might otherwise go to the latter by free riding on the investments made in producing news content. The argument is that these platforms take advantage of the value created by publishers when they distribute content that they do not produce and for which they do not bear the costs.[127]

Notably, because of publishers’ reliance on certain Big Tech platforms for traffic (i.e., Google and Facebook), the latter are deemed to exert substantial bargaining power, which makes it difficult for publishers to negotiate on an equal footing.[128] Accordingly, it has been argued that harmonized legal protection is needed to put publishers in a better negotiating position in their contractual relations with large online platforms.

The European reform has not, however, been guided by an evidence-led approach. Indeed, there is no empirical evidence to support the free-riding narrative.[129] The reform relies merely on evidence of the crisis in the newspaper industry, without proof of the claim that digital infomediaries harm legacy publishers by displacing online traffic. Looking at previous ancillary-rights solutions at the national level (i.e., in Germany and Spain), empirical results show no evidence of a substitution effect, but rather demonstrate a market-expansion effect. This indicates that online news aggregators complement newspaper websites and may benefit them in terms of increased traffic and advertising revenue. Such aggregators allow consumers to discover news outlets’ content of which they would not otherwise be aware, while reducing search times and enabling readers to consume more news.[130]

In a similar vein, as part of the 2030 digital-policy program,[131] the Commission and other European institutions appear set to deliver another legislative initiative that would force some large online platforms to contribute to the cost of telecommunications infrastructure.[132] Indeed, telecom operators claim that internet-traffic markets are unbalanced, arguing that just a few large online companies generate a significant portion of all network traffic, but do not adequately contribute to the development of such networks.[133] As the argument goes, while network operators bear massive investments to ensure connectivity, digital platforms free ride on the infrastructure that carries their services.

Moreover, strong competition in the retail telecommunications market and regulatory interventions on the wholesale level have contributed to declining profit margins for telecom firms’ traditional retail revenue streams. Therefore, telecom operators argue that their costs of capital are higher than their returns on capital. Finally, network operators complain that they are not in a position to negotiate fair terms with these platforms due to their strong market positions, asymmetric bargaining power, and the lack of a level regulatory playing field. Hence, they argue, a legislative intervention is needed to address such imbalances and ensure a fair share of network usage costs are financed by large online content providers.[134]

Following this path, the EU Council has recently supported the view expressed in the European Declaration on Digital Rights and Principles for the Digital Decade that it is necessary to develop adequate frameworks so that “all market actors benefiting from the digital transformation assume their social responsibilities and make a fair and proportionate contribution to the costs of public goods, services and infrastructures, for the benefit of all Europeans.”[135]

The arguments advanced by telecom operators to support introducing a network-fee payment scheme would amount to a sending-party-network-pays system. Such proposals are not new, and they have already been rejected. As the Body of European Regulators for Electronic Communications (BEREC) noted 10 years ago, such proposals overlook that it is the success of content providers that lies at the heart of increases in demand for broadband access.[136] Indeed, requests for data flows stem not from content providers, but from internet consumers, from whom internet service providers already derive revenues.[137] From this perspective, both sides of the market (content providers and end users) already contribute to paying for internet connectivity.[138] Further, “[t]his model has enabled a high level of innovation, growth in Internet connectivity, and the development of a vast array of content and applications, to the ultimate benefit of the end user.”[139]

Moreover, by charging Big Tech firms, the proposal may clash with the legal obligation of equal treatment that ensues from the Net Neutrality Regulation,[140] which has been justified under the opposite view that it is broadband providers who enjoy endemic market power as terminating-access monopolies, and who hence should be precluded from discriminating against some traffic.[141] From this perspective, it would be difficult to justify an intervention intended to restore fairness in the relationship between network operators and content providers on the premise that the former suffer from an asymmetry of bargaining power without repealing the Net Neutrality Regulation.

BEREC recently affirmed its view in a preliminary assessment of the mechanism of direct compensation to telecom operators.[142] Changes in traffic patterns do not modify the underlying assumptions regarding the sending-party-network-pays charging regime, and therefore “the 2012 conclusions are still valid.”[143] The sending-party-network-pays model, BEREC argues, would provide ISPs “the ability to exploit the termination monopoly” and such a significant change could be of “significant harm to the internet ecosystem.”[144] Further, BEREC questioned the assumption that an increase in traffic directly translates into higher costs, noting that the costs of internet-network upgrades necessary to handle an increased traffic volume are very low relative to total network costs, while upgrades come with a significant increase in capacity.[145] Moreover, BEREC once again found no evidence of free riding along the value chain[146]: the IP-interconnection ecosystem is still largely competitive, and the costs of internet connectivity are typically covered and paid for by ISP customers.

V.      Conclusion

Like the sirens’ song in the Odyssey, fairness exerts an irresistible allure. By evoking principles of equity and justice, fairness makes it hard for anyone to disagree with the pursuit of a goal that would make not just markets, but society as a whole, better off. As Homer warned, however, the rhetoric may be deceptive, designed to distract from the proper path. We see such risk in the call for fairness to serve as the guiding principle of EU competition policy in digital markets.

The experience of EU competition-law enforcement is illustrative of the difficulties inherent in relying on fairness as an applicable standard. It also underscores why enforcers have traditionally been reluctant to do so. Indeed, attempts to evaluate the unfairness of prices have required courts and competition authorities to identify economic values, while the struggle in finding agreement on the economic definition of what is fair has generated a wave of litigation in the SEP-licensing scenario. Therefore, while seeking refuge in the “ordinary meaning of the word” is apparently useless, envisaging an economic proxy for fairness is particularly challenging.

Despite this background, the EU institutions have embarked on a mission to appoint fairness as the lodestar of policy in digital markets. The DMA offers one definition of fairness, while all the other initiatives (the P2B Regulation, the proposed Data Act, the Copyright Directive, and the ongoing discussion on the cost of telecom infrastructure) are likewise motivated by the desire to address imbalances in bargaining power that prevent surplus from being adequately shared among market participants. On closer inspection, however, the initiatives are not fully consistent with any particular definition. The notion of fairness is often merged with contestability and is invoked to protect a wide range of stakeholders (business users, end users, rivals, or just small players), even when there is no evidence of disproportionate advantage for large online companies. Moreover, rather than being translated into specific rules, fairness is still primarily promoted according to a standard-based approach.

The revival of fairness considerations appears motivated primarily by policymakers’ desire to be free of any significant procedural constraints. An analogous policy trend can be seen among U.S. authorities, who likewise question the role of efficiency in antitrust enforcement and call for a “return to fairness.”[147] In the name of fairness, various business practices, strategies, and contractual terms can be evaluated without incurring the burden of economic analysis. Even market structure itself can be questioned.

Fairness has the power to transform policymakers into judges who decide what is right and who is worthy. Resisting that temptation would require the sagacious foresight of Ulysses.

[1] Giuseppe Colangelo, Antitrust Unchained: The EU’s Case Against Self-Preferencing, International Center for Law & Economics (Oct. 7, 2022) ICLE White Paper,

[2] Jonathan Kanter, Remarks at New York City Bar Association’s Milton Handler Lecture, U.S. Justice Department (May 18, 2022)

[3] Ibid.

[4] See, e.g., Amelia Miazad, Prosocial Antitrust, 73 Hastings Law J. 1637 (2022); Dina I. Waked, Antitrust as Public Interest Law: Redistribution, Equity and Social Justice, 65 Antitrust Bull. 87 (Feb. 28, 2020); Ioannis Lianos, Polycentric Competition Law, 71 Curr Leg Probl 161 (Dec. 1, 2018); Lina M. Khan & Sandeep Vaheesan, Market Power and Inequality: The Antitrust Counterrevolution and its Discontents, 11 Harv. L. & Pol’y Rev. 235 (2017). See also Margrethe Vestager, Fairness and Competition Policy, European Commission (Oct. 10, 2022), arguing that properly functioning markets become an instrument of social change and progress as, e.g., “keeping markets open to smaller players and new entrants benefits female entrepreneurs and entrepreneurs with a migrant background.”

[5] Eleanor M. Fox, The Battle for the Soul of Antitrust, 75 Cal. L. Rev. 917 (May 1987).

[6] Kanter, supra note 2; see also Alvaro M. Bedoya, Returning to Fairness, Federal Trade Commission, 2 (Sep. 22, 2022), available at, noting that “when Congress convened in 1890 to debate the Sherman Act, they did not talk about efficiency”; see also Waked, supra note 4, framing antitrust as public-interest law and arguing that a sole focus on efficiency goals is inconsistent with the history of antitrust; for analysis of the conceptual links among competition, competition law, and democracy in the EU and the United States, see Elias Deutscher, The Competition-Democracy Nexus Unpacked—Competition Law, Republican Liberty, and Democracy, Yearbook of European Law (forthcoming), arguing that the idea of a competition-democracy nexus can only be explained through the republican conception of liberty as nondomination; in a similar vein, see Oisin Suttle, The Puzzle of Competitive Fairness, 21 PPE 190 (Mar. 7, 2022), distinguishing competitive fairness from equality of opportunity, sporting fairness (e.g., a level playing field), and economic efficiency, and arguing that competitive fairness is justified under the republican ideal of nondomination, namely the status of being a free agent protected from subjection to arbitrary interference.

[7] Bedoya, supra note 6, 8.

[8] See, e.g., Louis B. Schwartz, “Justice” and Other Non-Economic Goals of Antitrust, 127 Univ PA Law Rev 1076 (1979); John J. Flynn, Antitrust Jurisprudence: A Symposium on the Economic, Political and Social Goals of Antitrust Policy, 125 Univ PA Law Rev 1182 (1977).

[9] Eleanor M. Fox, Modernization of Antitrust: A New Equilibrium, 66 Cornell L. Rev. 1140 (August 1981).

[10] Kanter, supra note 2; see also Bedoya, supra note 6, 5, stating that “[w]hen antitrust was guided by fairness, these farmers’ families were part of a thriving middle class across rural America. After the shift to efficiency, their livelihoods began to disappear.”

[11] See Anu Bradford, Adam S. Chilton, & Filippo Maria Lancieri, The Chicago School’s Limited Influence on International Antitrust, 87 U Chi L Rev 297 (2020), arguing that the influence of the Chicago School has been more limited outside the United States.

[12] Niamh Dunne, Fairness and the Challenge of Making Markets Work Better, 84 Mod Law Rev 230, 236 (March 2021).

[13] Christian Ahlborn & Jorge Padilla, From Fairness to Welfare: Implications for the Assessment of Unilateral Conduct Under EC Competition Law, in Claus-Dieter Ehlermann & Mel Marquis (eds.), European Competition Law Annual 2007: A Reformed Approach to Article 82 EC (Hart Publishing, 2008), 55, 61-62; see also Vestager, supra note 4, stating that “[f]airness is what motivated us to take a look at the working conditions of the solo self-employed. … And fairness is what we considered first in our design of the Temporary Crisis Framework – avoiding subsidy races while ensuring those most affected by the crisis can receive the support they need.”

[14] See, e.g., Dunne, supra note 12, 237; Maurits Dolmans & Wanjie Lin, How to Avoid a Fairness Paradox in EU Competition Law, in Damien Gerard, Assimakis Komninos, & Denis Waelbroeck (eds.), Fairness in EU Competition Policy: Significance and Implications, GCLC Annual Conference Series, Bruylant (2020), 27-76; Francesco Ducci & Michael Trebilcock, The Revival of Fairness Discourse in Competition Policy, 64 Antitrust Bull. 79 (Feb. 12, 2019); Harri Kalimo & Klaudia Majcher, The Concept of Fairness: Linking EU Competition and Data Protection Law in the Digital Marketplace, 42 Eur. Law Rev. 210 (2017).

[15] See Einer Elhauge, Should The Competitive Process Test Replace The Consumer Welfare Standard?, ProMarket (May 24, 2022); Herbert Hovenkamp, The Slogans and Goals of Antitrust Law, Faculty Scholarship at Penn Carey Law. 2853 (Jun. 2, 2022)

[16] See Bart J. Wilson, Contra Private Fairness, 71 Am J Econ Sociol 407 (April 2012), arguing that the understanding and use of the term “fair” in economics can be described as muddled, at best.

[17] Daniel Kahneman, Jack L. Knetsch, & Richard Thaler, Fairness as a Constraint on Profit Seeking: Entitlements in the Market, 76 Am Econ Rev 728 (September 1986); see also Ernst Fehr & Klaus M. Schmidt, A Theory of Fairness, Competition, and Cooperation, 114 Q J Econ 817 (August 1999).

[18] Louis Kaplow & Steven Shavell, Fairness Versus Welfare, Harvard University Press (2002).

[19] United States v. Trans-Mo. Freight Ass’n, 166 U.S. 290, 323 (1897); see Bedoya, supra note 6, 2, arguing that “today, it is axiomatic that antitrust does not protect small business. And that the lodestar of antitrust is not fairness, but efficiency” (emphasis in original); see also Margrethe Vestager, The Road to a Better Digital Future, European Commission (Sep. 22, 2022), welcoming the Digital Markets Act because it will empower the EU “to make sure large digital platforms do not squeeze out small businesses.”

[20] Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act, U.S. Federal Trade Commission (Nov. 10, 2022),

[21] Ibid., footnotes 15, 18, and 21.

[22] Lina M. Khan, Rebecca Kelly Slaughter, Alvaro M. Bedoya, On the Adoption of the Statement of Enforcement Policy Regarding Unfair Methods of Competition Under Section 5 of the FTC Act, U.S. Federal Trade Commission (Nov. 10, 2022), 1,

[23] Christine S. Wilson, Dissenting Statement Regarding the Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act, U.S. Federal Trade Commission (Nov. 10, 2022), 1-3, also arguing that “[t]he only crystal-clear aspect of the Policy Statement pertains to the process following invocation of an adjective: after labeling conduct ‘facially unfair,’ the Commission plans to skip an in-depth examination of the conduct, its justifications, and its potential consequences.”

[24] See, e.g., Konstantinos Stylianou & Marios Iacovides, The Goals of EU Competition Law: A Comprehensive Empirical Investigation, Leg Stud (forthcoming), reporting the various goals mentioned in speeches by EU commissioners during their terms in office; Dunne, supra note 12, 238, noting that Vestager invoked fairness in 85% of speeches in her first term in office.

[25] Margrethe Vestager, Fair Markets in a Digital World, European Commission (Mar. 9, 2018),

[26] Ibid.

[27] Ibid.

[28] Margrethe Vestager, Competition and Fairness in a Digital Society, European Commission (Nov. 22, 2018)

[29] Margrethe Vestager, Competition in a Digital Age, European Commission (Mar. 17, 2021),

[30] Margrethe Vestager, What Is Competition For?, European Commission (Nov. 4, 2021),

[31] See, e.g., Margrethe Vestager, Fairness and Competition, European Commission (Jan. 25, 2018); Margrethe Vestager, Making the Decisions that Count for Consumers, European Commission (May 31, 2018)

[32] Vestager, supra note 25.

[33] Margrethe Vestager, A Responsibility to Be Fair, European Commission (Sep. 3, 2018),

[34] Thibault Schrepel, Antitrust Without Romance, 13 N. Y. Univ. J. Law Lib. 326 (May 4, 2020); as noted by Dolmans & Lin, supra note 14, 38, fairness, “with its moral overtones, confers a rhetorical flourish and sense of intrinsic righteousness when used to describe an act or situation”; however, see Sandra Marco Colino, The Antitrust F Word: Fairness Considerations in Competition Law, 5 J. Bus. Law 329, 343 (2019), arguing that “[i]t makes little sense to defend a competition policy that develops with its back purposefully turned to the attainment of moral and social justice”; for a more balanced reading, see Johannes Laitenberger, Fairness in EU Competition Law Enforcement, European Commission (Jun. 20, 2018), arguing that “while ‘fairness’ is a guiding principle, it is not an instrument that competition enforcers can use off the shelf to go about their work in detail. In each and every case the Commission looks into, it must dig for evidence; conduct rigorous economic analysis; and check findings against the law and the guidance provided by the European Courts.”

[35] Margrethe Vestager, Competition for a Fairer Society, European American Chamber of Commerce (Sep. 29, 2016); see also Margrethe Vestager, Antitrust for the Digital Age, European Commission (Sep. 16, 2022), arguing that the power that large platforms wield “is not just an issue for fair competition; it is an issue for our very democracies” and that the most important goal of competition policy is to make markets work for people; Margrethe Vestager, Keynote at the Making Markets Work for People Conference, European Commission (Oct. 27, 2022), stating that “[t]he only policy goal for markets is to serve the people.”; on the social rationale of competition law, see Damien Gerard, Fairness in EU Competition Policy: Significance and Implications, 9 J. Eur. Compet. Law & Pract. 211 (2018).

[36] Vestager, supra note 4, stating that “[w]e are on the side of the people, sometimes when no one else is.”; in a similar vein, on the U.S. side, see Bedoya, supra note 6, 9, describing antitrust as a way to protect “people living paycheck to paycheck” (“For me, that’s what antitrust is about: your groceries, your prescriptions, your paycheck. I want to make sure the Commission is helping the people who need it the most.”); see also Ariel Ezrachi & Maurice E. Stucke, The Fight over Antitrust’s Soul, 9 J. Eur. Compet. Law & Pract. 1 (2018), arguing that “[u]ltimately the divide is over the soul of antitrust: Is antitrust solely about promoting some form of economic efficiency (or as cynics argue, the interests of the powerful who hide behind a narrow utilitarian approach) or the welfare of the powerless (the majority of citizens who feel increasingly disenfranchised by big government and big business)?”; see also Adi Ayal, Fairness in Antitrust: Protecting the Strong from the Weak, Hart (2016).

[37] Vestager, supra note 28; see also @vestager, Twitter (Nov. 8, 2022, 4:39 AM), featuring Vestager’s reaction to the European Court of Justice’s (CJEU) judgment annulling the Commission’s decision that found Luxembourg had granted selective tax advantages to Fiat in Fiat Chrysler Finance Europe v. Commission.

[38] There is an extensive literature devoted to investigating the tradeoffs between rules and standards: see, e.g., Daniel A. Crane, Rules Versus Standards in Antitrust Adjudication, 64 Wash. & Lee L. Rev. 49 (2007); Louis Kaplow, Rules Versus Standards: An Economic Analysis, 42 Duke L.J. 557 (1992); Isaac Ehrlich & Richard A. Posner, An Economic Analysis of Legal Rulemaking, 3 J. Leg. Stud. 257 (1974).

[39] See, e.g., CJEU, Case C-127/73, Belgische Radio en Televisie and Société Belge des Auteurs, Compositeurs et Editeurs v. SV SABAM and NV Fonior (Mar. 27, 1974), EU:C:1974:25, para. 15, holding that an exploitative abuse may occur when “the fact that an undertaking entrusted with the exploitation of copyrights and occupying a dominant position … imposes on its members obligations which are not absolutely necessary for the attainment of its object and which thus encroach unfairly upon a member’s freedom to exercise his copyright.”

[40] European Commission, Case IV/31.043, Tetra Pak II (Jul. 24, 1991), paras. 105-108, (1992) OJ L 72/1.

[41] European Commission, Case COMP D3/34493, DSD (Apr. 20, 2001), para. 112, (2001) OJ L 166/1; affirmed in GC, Case T-151/01, Der Grüne Punkt – Duales System Deutschland GmbH v. European Commission (May 24, 2007), EU:T:2007:154 and CJEU, Case C-385/07 P (Jul. 16, 2009), EU:C:2009:456.

[42] See European Commission, Case COMP/E-2/36.041/PO, Michelin (Michelin II) (Jun. 20, 2001), paras. 220-221 and 223-224, (2002) OJ L143/1, arguing that a discount program was unfair because it “placed [Michelin’s dealers] in a situation of uncertainty and insecurity,” because “it is difficult to see how [Michelin’s dealers] would of their own accord have opted to place themselves in such an unfavourable position in business terms,” and because Michelin’s retailers were not in a position to carry out “a reliable evaluation of their cost prices and therefore [could not] freely determine their commercial strategy.”

[43] Opinion of Advocate General Pitruzzella, Case C-372/19, Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v. Weareone.World BVBA, Wecandance NV (Jul. 16, 2020), EU:C:2020:598, para. 21; see also Marco Botta, Sanctioning Unfair Pricing Under Art. 102(a) TFEU: Yes, We Can!, 17 Eur. Compet. J. 156 (2021); for an overview of recent case law, see Giovanni Pitruzzella, Recent CJEU Case Law on Excessive Pricing Cases, in The Interaction of Competition Law and Sector Regulation: Emerging Trends at the National and EU Level (Marco Botta, Giorgio Monti, and Pier Luigi Parcu, eds.), Elgar 2022, 169; Margherita Colangelo, Excessive Pricing In Pharmaceutical Markets: Recent Cases in Italy and in the EU, ibid., 210.

[44] Dolmans & Lin, supra note 14, 59-60; see also Botta, supra note 43, arguing that, since the imposition of excessive prices by a dominant firm directly harms consumer welfare, the resurgence of excessive-pricing cases is linked to the role of the consumer-welfare standard in EU competition policy.

[45] CJEU, Case C-27/76, United Brands Company and United Brands Continental BV v. Commission of the European Communities (Feb. 14, 1978) EU:C:1978:22.

[46] CJEU, Case C-372/19, Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v. Weareone.World BVBA, Wecandance NV (Nov. 25, 2020), EU:C:2020:959.

[47] United Brands, supra note 45, para. 252, holding that the questions to be determined are “whether the difference between the costs actually incurred and the price actually charged is excessive, and, if the answer to this question is in the affirmative, whether a price has been imposed which is either unfair in itself or when compared to competing products.”

[48] CJEU, Case C-177/16, Autortiesību un Komunicēšanās Konsultāciju Aģentūra / Latvijas Autoru Apvienība v. Konkurences Padome (Sep. 14, 2017), EU:C:2017:689, para. 49.

[49] Opinion of Advocate General Wahl, Case C-177/16 (Apr. 6, 2017), EU:C:2017:286, para. 131.

[50] See European Commission, Guidance on the Commission’s enforcement priorities in applying Article 82 of the EC Treaty to abusive exclusionary conduct by dominant undertakings, (2009) OJ C 45/7, para. 80; CJEU, 14 October 2010, Case C-280/08 P, Deutsche Telekom AG v. European Commission, EU:C:2010:603; CJEU, 17 February 2011, Case C-52/09, Konkurrensverket v. TeliaSonera Sverige AB, EU:C:2011:83; CJEU, 10 July 2014, Case C-295/12 P, Telefónica SA and Telefónica de España SAU v. European Commission, EU:C:2014:2062; CJEU, 25 March 2021, Case C-165/19 P, Slovak Telekom a.s. v. Commission, EU:C:2021:239.

[51] However, in TeliaSonera (supra note 50), the CJEU found that there can be an exclusionary abuse even where the margin level of input purchasers is positive (the so-called positive margin squeeze theory), it being enough that rivals’ margins are insufficient, for instance because they must operate at artificially reduced levels of profitability.

[52] On the US side, rejecting margin squeeze as a stand-alone offense, the Supreme Court in Pacific Bell Tel. Co. v. linkLine, 555 U.S. 438 (2009) argued that it is nearly impossible for courts to determine the fairness of rivals’ margins and quoted Town of Concord v. Boston Edison Co., 915 F.2d 17, 25 (1st Cir. 1990), asking “how is a judge or jury to determine a ‘fair price?’ Is it the price charged by other suppliers of the primary product? None exist. Is it the price that competition ‘would have set’ were the primary level not monopolized? How can the court determine this price without examining costs and demands, indeed without acting like a rate-setting regulatory agency, the rate-setting proceedings of which often last for several years? Further, how is the court to decide the proper size of the price ‘gap?’ Must it be large enough for all independent competing firms to make a ‘living profit,’ no matter how inefficient they may be? . . . And how should the court respond when costs or demands change over time, as they inevitably will?”

[53] For an overview, see Oscar Borgogno & Giuseppe Colangelo, Disentangling the FRAND Conundrum, DEEP-IN Research Paper (2019).

[54] CJEU, Case C-170/13, Huawei Technologies Ltd. v. ZTE Corp. (Jul. 16, 2015), EU:C:2015:477.

[55] Nicolas Petit & Amandine Léonard, FRAND Royalties: Rules vs. Standards?, Chi.-Kent J. Intell. Prop. (forthcoming).

[56] For an overview, see Giuseppe Colangelo, The European Digital Markets Act and Antitrust Enforcement: A Liaison Dangereuse, 47 Eur. Law Rev. 597 (July 2022); see also Inge Graef, Differentiated Treatment in Platform-to-Business Relations: EU Competition Law and Economic Dependence, 38 Yearbook of European Law 448 (2019), suggesting giving a stronger role to economic dependence both within and outside EU competition law.

[57] Council Regulation (EC) No. 1/2003 of 16 December 2002 on the implementation of the rules on competition laid down in Articles 81 and 82 of the Treaty, [2003] OJ L 1/1.

[58] Belgian Royal Decree of 31 July 2020 amending books I and IV of the Code of economic law as concerns the abuse of economic dependence, Article 4.

[59] GWB Digitalization Act, 18 January 2021, Section 20.

[60] Italian Annual Competition Law, 5 August 2022, No. 118, Article 33.

[61] CJEU, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato (May 12, 2022), EU:C:2022:379.

[62] Ibid., para. 46.

[63] CJEU, Case C-413/14 P, Intel v. Commission (Sep. 6, 2017), EU:C:2017:632, paras. 133-134. The same principle has been affirmed in discrimination and margin-squeeze cases, such as CJEU, Case C-525/16, MEO v. Autoridade da Concorrência (Apr. 19, 2018), EU:C:2018:270 and CJEU, Case C-209/10, Post Danmark A/S v. Konkurrencerådet (Mar. 27, 2012), EU:C:2012:172, respectively.

[64] CJEU, Intel, supra note 63, para. 73; see Alfonso Lamadrid de Pablo, Competition Law as Fairness, 8 J. Eur. Compet. Law & Pract. 147 (2017), arguing that the notion of merit-based competition implicitly carries in it a sense of fairness, understood as equality of opportunity; see also Alberto Pera, Fairness, Competition on the Merits and Article 102, 18 Eur. Compet. J. 229 (April 2022).

[65] Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services, [2019] OJ L 186/57.

[66] Ibid., Article 1(1).

[67] Ibid., Recital 2.

[68] Ibid., Recital 49.

[69] European Commission, Shaping Europe’s Digital Future, COM(2020) 67 final.

[70] Ibid., 8-9.

[71] Ibid., 8.

[72] Regulation (EU) 2022/1925 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), (2022) OJ L 265/1.

[73] Ibid., Recital 7.

[74] Ibid., Recital 2.

[75] Ibid., Recitals 2 and 4.

[76] Ibid., Recitals 46, 47, 51, 56, and 57.

[77] Colangelo, supra note 56; see also Oscar Borgogno & Giuseppe Colangelo, Platform and Device Neutrality Regime: The New Competition Rulebook for App Stores?, 67 Antitrust Bull. 451 (2022).

[78] DMA, supra note 72, Recital 5.

[79] Ibid.

[80] Ibid.

[81] Ibid.

[82] Ibid., Recital 11.

[83] Pinar Akman, Regulating Competition in Digital Platform Markets: A Critical Assessment of the Framework and Approach of the EU Digital Markets Act, 47 Eur. Law Rev. 85 (2022); Colangelo, supra note 56; Heike Schweitzer, The Art to Make Gatekeeper Positions Contestable and the Challenge to Know What Is Fair: A Discussion of the Digital Markets Act Proposal, 3 ZEuP 503 (2021).

[84] DMA, supra note 72, Recital 32. See also Article 12(5).

[85] Ibid.

[86] Ibid., Recital 33 and Article 12(5); see also Recital 62 providing some benchmarks that can serve as a yardstick to determine the fairness of general access conditions (i.e., prices charged or conditions imposed for the same or similar services by other providers of software application stores; prices charged or conditions imposed by the provider of the software application store for different related or similar services or to different types of end users; prices charged or conditions imposed by the provider of the software application store for the same service in different geographic regions; prices charged or conditions imposed by the provider of the software application store for the same service the gatekeeper provides to itself).

[87] Ibid.; see also Monopolkommission, Recommendations for an Effective and Efficient Digital Markets Act (2021), 15, recommending that the DMA objective of fairness should address the economic dependence of business users vis-à-vis a gatekeeper, and hence the asymmetric negotiating power favoring the gatekeeper; see also Gregory S. Crawford, Jacques Crémer, David Dinielli, Amelia Fletcher, Paul Heidhues, Monika Schnitzer, Fiona M. Scott Morton, & Katja Seim, Fairness and Contestability in the Digital Markets Act, Yale Digital Regulation Project, Policy Discussion Paper No. 3 (2021), 4-10, supporting the interpretation of fairness with respect to surplus sharing. According to the authors, since a platform ecosystem is a co-creation of the platform itself and its users, regulation should correct the distortion related to unfair outcomes when users are not rewarded for their contribution to the success of the platform.

[88] DMA, supra note 72, Recital 34.

[89] Ibid.; see also Recital 16 referring to “unfair practices weakening contestability.”; see, instead, Monopolkommission, supra note 87, 16, suggesting clearly distinguishing the objectives pursued by the DMA, which should be understood such that only ecosystem-related questions of contestability are addressed by the DMA when it comes to the intersection of exclusion and fairness with exploitation of business users.

[90] See also DMA, supra note 72, Articles 12(1, 3, 4, and 5), 19(1), 41(3 and 4), and Recitals 15, 69, 77, 79, 93.

[91] Ibid., Articles 1(1 and 5), 18(2), 40(7), 53 (2 and 3), and Recitals 8, 11, 28, 31, 42, 45, 50, 58, 67, 73, 75, 97, 104, 106.

[92] Ibid., Recital 36 regarding Article 5(2), Recital 50 regarding Article 6(4), Recital 51 regarding Article 6(5), Recital 53 regarding Article 6(6), Recital 59 regarding Article 6(9), Recital 61 regarding Article 6(11), Recital 64 regarding Article 7.

[93] Ibid., Recital 45 regarding Article 5(9-10) and Recital 58 regarding Article 6(8).

[94] Ibid., Recital 46; see also European Commission, Commission Sends Statement of Objections to Amazon for the Use of Non-Public Independent Seller Data and Opens Second Investigation into Its E-Commerce Business Practices (Nov. 10, 2020).

[95] DMA, supra note 72, Recital 39 regarding Article 5(3).

[96] Ibid., Recital 40 regarding Article 5(4).

[97] Ibid., Recital 42 regarding Article 5(6).

[98] Ibid., Recital 43 regarding Article 5(7).

[99] Ibid., Recital 44 regarding Article 5(8).

[100] Ibid., Articles 5(6), 5(8), and 6(13); see also Recital 2 referring to the impact on “the fairness of the commercial relationship between [gatekeepers] and their business users and end users.”

[101] Ibid., Recital 5; see also Recital 42 referring to “fair commercial environment.”

[102] Ibid., Recital 39.

[103] Commission Staff Working Document accompanying the Report from the Commission to the Council and the European Parliament Final Report on the E-commerce Sector Inquiry, SWD(2017) 154 final. Conversely, in Germany, the Federal Supreme Court has supported the Bundeskartellamt’s strict approach against narrow price-parity clauses. See Bundesgerichtshof, Case KVR 54/20 (May 18, 2021).

[104] European Commission, Guidelines on Vertical Restraints (2022) OJ C 248/1, para. 374.

[105] Ibid., para. 372.

[106] European Commission, Proposal for a Regulation of the European Parliament and of the Council on Harmonised Rules on Fair Access and Use of Data (Data Act), COM(2022) 68 final; see also Giuseppe Colangelo, European Proposal for a Data Act – A First Assessment, CERRE Assessment Paper (Aug. 30, 2022).

[107] Data Act, supra note 106, Explanatory Memorandum, 2.

[108] Ibid., Recital 6 and Explanatory Memorandum, 1.

[109] European Commission, Inception Impact Assessment – Data Act, Ares(2021) 3527151, 1-2.

[110] Data Act, supra note 106, Explanatory Memorandum, 3.

[111] Ibid., Recital 5.

[112] Ibid., Recital 51 and Explanatory Memorandum, 13.

[113] Ibid., Recital 52.

[114] Ibid., Article 13(2).

[115] Ibid., Recital 55.

[116] See, e.g., ibid., Article 13(4)(e), according to which a contractual term is presumed unfair if its object or effect is to enable the party that unilaterally imposed the term to terminate the contract with unreasonably short notice, taking into consideration the reasonable possibilities of the other contracting party to switch to an alternative and comparable service and the financial detriment caused by such termination.

[117] Colangelo, supra note 106.

[118] European Commission, supra note 109, 2.

[119] Ibid.

[120] Dunne, supra note 12, 239; see also Massimo Motta, Competition Policy: Theory and Practice, Cambridge University Press, 2004, 26, distinguishing between ex ante equity, which is consistent with competition policy and implies equal initial opportunities of firms in the marketplace, and ex post equity representing equal outcomes of market competition.

[121] Vestager, supra note 4.

[122] CJEU, supra notes 61 and 63; see also Opinion of Advocate General Rantos, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato (Dec. 9, 2021), EU:C:2021:998, para. 45, arguing that if any conduct having an exclusionary effect were automatically classed as anticompetitive, antitrust would become a means for protecting less-capable, less-efficient undertakings and would in no way protect more meritorious undertakings that can serve as a stimulus to a market’s competitiveness.

[123] Vestager, supra note 28.

[124] Directive (EU) 2019/790 of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC, [2019] OJ L 130/92.

[125] Ibid., Recital 3.

[126] Ibid., Article 15.

[127] See Giuseppe Colangelo, Enforcing Copyright Through Antitrust? The Strange Case of News Publishers Against Digital Platforms, 10 J. Antitrust Enforc 133 (Jun. 22, 2022).

[128] Directive 2019/790, supra note 124, Recitals 54 and 55; see also European Commission, Impact Assessment on the Modernisation of EU Copyright Rules, SWD(2016) 301 final, §5.3.1, arguing that the gap in the current EU rules “further weakens the bargaining power of publishers in relation to large online service providers.”

[129] Ibid.; see also Lionel Bently, Martin Kretschmer, Tobias Dudenbostel, Maria Del Carmen Calatrava Moreno, & Alfred Radauer, Strengthening the Position of Press Publishers and Authors and Performers in the Copyright Directive, European Parliament (September 2017).

[130] See, e.g., Susan Athey, Markus Mobius, & Jeno Pal, The Impact of Aggregators on Internet News Consumption, NBER Working Paper No. 28746 (2021); Joan Calzada & Ricard Gil, What Do News Aggregators Do?, 39 Mark. Sci. 134 (2020); Joint Research Centre for the European Commission, Online News Aggregation and Neighbouring Rights for News Publishers (2017).

[131] See European Commission, 2030 Digital Compass: the European Way for the Digital Decade, COM/2021/118 final; and European Commission, Proposal for a Decision of the European Parliament and of the Council Establishing the 2030 Policy Programme “Path to the Digital Decade” (2021).

[132] See the public statements released in May 2022 by Commissioners Margrethe Vestager and Thierry Breton.

[133] Axon Partners Group Consulting, Europe’s Internet Ecosystem: Socio-Economic Benefits of a Fairer Balance Between Tech Giants and Telecom Operators (2022), report prepared for the European Telecommunications Network Operators’ Association (ETNO); see also Frontier Economics, Estimating OTT Traffic-Related Costs on European Telecommunications Networks (2022), a report for Deutsche Telekom, Orange, Telefonica, & Vodafone, g4-ott-report-stc-data.pdf.

[134] See also the appeal published by the CEOs of Telefónica, Deutsche Telekom, Vodafone, and Orange, United Appeal of the Four Major European Telecommunications Companies (2022); and, more recently, the statement released by several CEOs, CEO Statement on the Role of Connectivity in Addressing Current EU Challenges (2022).

[135] European Commission, European Declaration on Digital Rights and Principles for the Digital Decade, COM(2022) 28 final, 3; see also European Council, 2030 Policy Programme ‘Path to the Digital Decade’: The Council Adopts Its Position (2022).

[136] Body of European Regulators for Electronic Communications, BEREC’s Comments on the ETNO Proposal for ITU/WCIT or Similar Initiatives Along These Lines, BoR(12) 120 (2012), 3; see also Body of European Regulators for Electronic Communications, Report on IP-Interconnection Practices in the Context of Net Neutrality, BoR (17) 184 (2017), finding the internet-protocol-interconnection market to be competitive.

[137] See former Commissioner Neelie Kroes, Adapt or Die: What I Would Do if I Ran a Telecom Company (2014), arguing that the current situation of European telcos is not the fault of OTTs, given that the latter are the ones driving digital demand: “[EU homes] are demanding greater and greater bandwidth, faster and faster speeds, and are prepared to pay for it. But how many of them would do that, if there were no over the top services? If there were no Facebook, no YouTube, no Netflix, no Spotify?”

[138] Body of European Regulators for Electronic Communications, supra note 136, 4. Concerns about side effects on consumers of the possible introduction of a network-infrastructure fee have been raised by the European consumer organisation BEUC, Connectivity Infrastructure and the Open Internet (2022); see also the open letter signed by 34 civil-society organisations from 17 countries, arguing that nothing has changed that would merit a different response to the proposals that have already been discussed over the past 10 years and that charging content and application providers for the use of internet infrastructure would undermine and conflict with core net-neutrality protections; see also David Abecassis, Michael Kende, & Guniz Kama, IP Interconnection on the Internet: A European Perspective for 2022 (2022), finding no evidence of significant changes to the way interconnection works on the internet and arguing that the approach advocated by proponents of network-usage fees would involve complexity and regulatory costs, and risks being detrimental to consumers and businesses in Europe; furthermore, see David Abecassis, Michael Kende, Shahan Osman, Ryan Spence, & Natalie Choi, The Impact of Tech Companies’ Network Investment on the Economics of Broadband ISPs (2022), reporting significant investments undertaken by content and application providers in internet infrastructure.

[139] Body of European Regulators for Electronic Communications, supra note 136, 4. In the coming months, BEREC is expected to reassess the impact of the potential sending-party-network-pays principle on the internet ecosystem: see Body of European Regulators for Electronic Communications, Work Programme 2023, BoR (22) 143 (2022), 26-27.

[140] Regulation (EU) 2015/2120 laying down measures concerning open internet access and amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services and Regulation (EU) No 531/2012 on roaming on public mobile communications networks within the Union, (2015) OJ L 310/1.

[141] For a summary of the net-neutrality debate, see Giuseppe Colangelo & Valerio Torti, Offering Zero-Rated Content in the Shadow of Net Neutrality, 5 M&CLR 41 (2021); see also Tobias Kretschmer, In Pursuit of Fairness? Infrastructure Investment in Digital Markets (2022), arguing that the policy solution at issue would fall short of the principles of efficient risk allocation, time consistency, and net neutrality, and might seem like arbitrarily targeting a group of (largely U.S.-based) firms while letting (at least partly European) newcomers and/or smaller firms enjoy the same externalities at no cost. Indeed, the author notes that a transfer from Big Tech to telecom-infrastructure providers would be equivalent to a tax on success, since it would be based on ex post estimates of benefits from prior investments. Further, a direct and unrestricted transfer may not ensure sufficient infrastructure investment in the future, as it is not conditional on future behavior, but rather it would serve as a windfall profit for past (imprudent) behavior that can finance any kind of activity by telecom-infrastructure providers. Finally, a fair distribution of investment financing would require all complementors to the basic service to pay a share of future investments proportional to the expected benefit from the investments to be undertaken.

[142] Body of European Regulators for Electronic Communications, BEREC preliminary assessment of the underlying assumptions of payments from large CAPs to ISPs, BoR (22) 137 (2022).

[143] Ibid., 4-5.

[144] Ibid., 5.

[145] Ibid., 7-8.

[146] Ibid., 11-14.

[147] Bedoya, supra note 6, 8.

Continue reading
Antitrust & Consumer Protection

The FTC Knows It When It Sees It

TOTM When Congress created the Federal Trade Commission (FTC) in 1914, it charged the agency with condemning “unfair methods of competition.” That’s not the language Congress used in . . .

When Congress created the Federal Trade Commission (FTC) in 1914, it charged the agency with condemning “unfair methods of competition.” That’s not the language Congress used in writing America’s primary antitrust statute, the Sherman Act, which prohibits “monopoliz[ation]” and “restraint[s] of trade.”

Read the full piece here.

Continue reading
Antitrust & Consumer Protection