
ICLE Comments to DOJ on Promoting Competition in Artificial Intelligence


Executive Summary

We thank the U.S. Department of Justice (DOJ) Antitrust Division for this invitation to comment (ITC) on “Promoting Competition in Artificial Intelligence.”[1] The International Center for Law & Economics (ICLE) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

In these comments, we express the view that policymakers’ current concerns about competition in AI industries may be unwarranted. This is particularly true of the notions that data-network effects shield incumbents in AI markets from competition; that Web 2.0’s most successful platforms will be able to leverage their competitive positions to dominate generative-AI markets; that these same platforms may use strategic partnerships with AI firms to insulate themselves from competition; and that generative-AI services occupy narrow markets that leave firms with significant market power.

In fact, we are still far from understanding the boundaries of antitrust-relevant markets in AI. Three considerations should be at the forefront of competition authorities’ minds when they think about market definition for AI products and services. First, the “AI market” is not unitary, but is instead composed of many distinct goods and services. Second, and relatedly, enforcers should look beyond the AI marketing hype to see how this extremely heterogeneous product landscape intersects with an equally variegated consumer-demand landscape.

In other words: AI products and services may, in many instances, be substitutable for non-AI products, which would mean that, for the purposes of antitrust law, AI and non-AI products compete in the same relevant market. Getting the relevant product-market definition right is important in antitrust because erroneous market definitions can lead to erroneous inferences about market power. While either an overly broad or an overly narrow market definition could lead to both over- and underenforcement, we believe overenforcement currently represents the bigger threat.

Third, overenforcement in the field of generative AI could paradoxically engender the very harms that policymakers are seeking to avert. As we explain in greater detail below, preventing so-called “big tech” firms from competing in AI markets (for example, by threatening competition intervention whenever they forge strategic relationships with AI startups, launch their own generative-AI services, or embed such services in their existing platforms) may thwart an important source of competition and continued innovation. In short, competition in AI markets is important,[2] but trying naïvely to hold incumbent (in adjacent markets) tech firms back, out of misguided fears they will come to dominate the AI space, is likely to do more harm than good. It is essential to acknowledge how little we know about these nascent markets and that the most important priority at the moment is simply to ask the right questions that will lead to sound competition policy.

The comments proceed as follows. Section I debunks the notion that incumbent tech platforms can use their allegedly superior datasets to overthrow competitors in markets for generative AI. Section II discusses how policymakers should approach strategic partnerships among tech incumbents and AI startups. Section III outlines some of the challenges to defining relevant product markets in AI, and suggests how enforcers could navigate the perils of market definition in the nascent, fast-moving world of AI.

I. Anticompetitive Leveraging in AI Markets

Antitrust enforcers have recently expressed concern that incumbent tech platforms may leverage their existing market positions and resources (particularly their vast datasets) to stifle competitive pressure from AI startups. As this section explains, however, these fears appear overblown, and are underpinned by assumptions about data-network effects that are unlikely to play a meaningful role in generative AI. Instead, the competition interventions that policymakers are contemplating would, paradoxically, remove an important competitive threat for today’s most successful AI providers, thereby reducing overall competition in generative-AI markets.

Subsection A summarizes recent calls for competition intervention in generative-AI markets. Subsection B argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”), including in the context of mergers. Subsection C explains why these effects are unlikely to play a meaningful role in generative-AI markets. Subsection D offers five key takeaways to help policymakers better weigh the tradeoffs inherent to competition-enforcement interventions in generative-AI markets.

A. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position.”[3] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[4]

While some of these claims continue even today (for example, “big data” is a key component of the DOJ Google Search and adtech antitrust suits),[5] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022, as well as the advent of AI image-generation services like Midjourney and Dall-E, have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain both in the early stages of mainstream adoption and in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that purportedly were made during the formative years of Web 2.0.[6] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[7] As Federal Trade Commission (FTC) Chair Lina Khan has put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI.”[8]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Rather than assessing their prior assumptions based on the current technological moment, enforcers’ top priority appears to be figuring out how to rapidly and almost reflexively deploy existing competition tools to address the presumed competitive failures presented by generative AI.[9]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also to confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[10]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in a recent consultation, the European Commission asked: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[11] Unsurprisingly, the FTC has likewise been hypervigilant about the risks ostensibly posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[12]

Recently, at the conference that prompted these comments, Jonathan Kanter, assistant U.S. attorney general for antitrust, claimed that:

We also see structures and trends in AI that should give us pause. AI relies on massive amounts of data and computing power, which can give already dominant firms a substantial advantage. Powerful networks and feedback effects may enable dominant firms to control these new markets, and existing power in the digital economy may create a powerful incentive to control emerging innovations that will not only impact our economy, but the health and well-being of our society and free expression itself.[13]

On an even more hyperbolic note, Andreas Mundt, the head of Germany’s Federal Cartel Office, called AI a “first-class fire accelerator” for anticompetitive behavior and argued it “will make all the problems only worse.”[14] He further argued that “there’s a great danger that we will get an even deeper concentration of digital markets and power increase at various levels, from chips to the front end.”[15] In short, Mundt is one of many policymakers who believe that AI markets will enable incumbent tech firms to further entrench their market positions.

Certainly, one might expect the largest online platforms—including Alphabet, Meta, Apple, and Amazon—to have a meaningful advantage in the burgeoning markets for generative-AI services. After all, it is widely recognized that data is an essential input for generative AI.[16] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind and AlphaGo and Meta’s NLLB-200 have routinely made headlines.[17] Apple and Amazon also have vast experience with AI assistants, and all of these firms deploy AI technologies throughout their platforms.[18]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast troves of data to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[19] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

Moreover, it is important not to neglect the role that open-source models currently play in fostering innovation and competition. As former DOJ Chief Antitrust Economist Susan Athey pointed out in a recent interview, “[the AI industry] may be very concentrated, but if you have two or three high quality — and we have to find out what that means, but high enough quality — open models, then that could be enough to constrain the for-profit LLMs.”[20] Open-source models are important because they allow innovative startups to build upon models already trained on large datasets—thereby entering the market without incurring that initial cost. Nor is there any apparent shortage of open-source models, as companies like xAI, Meta, and Google offer their AI models for free.[21]

There are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have focused largely on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

B. Data-Network Effects Theory and Enforcement

Proponents of more extensive intervention by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[22] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[23] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, for example, that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[24]

But it is important to note the conceptual problems these claims face. Because data can be used to improve products’ quality and/or to subsidize their use, if possessing data constitutes an entry barrier, then any product improvement or price reduction made by an incumbent could be problematic. This is tantamount to an argument that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if competition were treated as a problem, as it would imply that firms should under-compete—i.e., should forego consumer-welfare enhancements—in order to sustain a greater number of firms in a given market, simply for its own sake.[25]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[26] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[27] The authors ultimately conclude that data-network effects can be of differing magnitudes and have varying effects on firms’ incumbency advantage.[28] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[29]

This is echoed by economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[30] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[31]

This possibility is also implicit in Hagiu and Wright’s paper.[32] Indeed, the authors’ theoretical model rests on an important distinction between “within-user” data advantages (that is, having access to more data about a given user) and “across-user” data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have nonetheless been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[33]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents,” the “Furman Report” created for the UK government cited only two empirical economic studies, and they offer directly contradictory conclusions with respect to the question of the strength of data advantages.[34] The report nevertheless concluded that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[35] and it adopted without reservation purportedly “convincing” claims from non-economists that have no apparent empirical basis.[36]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[37]

As a result, the Commission cleared the merger only on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[38] The Commission also appears likely to focus on similar issues in its ongoing investigation of Microsoft’s investment in OpenAI.[39]

Along similar lines, in its complaint to enjoin Meta’s purchase of Within Unlimited—makers of the virtual-reality (VR) fitness app Supernatural—the FTC relied on, among other things, the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[40]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s adtech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[41] Similarly, in its Google Search complaint, the agency argued that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[42]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s 2023 guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[43] Likewise, the UK Competition and Markets Authority warned against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[44]

In short, competition authorities around the globe have taken an increasingly aggressive stance on data-network effects. Among the ways this has manifested is in enforcement decisions based on fears that data collected by one platform might confer decisive competitive advantages in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

C. Data-Incumbency Advantages in Generative AI

Given the assertions detailed in the previous section, it would be reasonable to assume that firms such as Google, Meta, and Amazon should be in pole position to meet the burgeoning demand for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data, the likes of which their rivals could only dream of when launching their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[45]

To date, however, this is not how things have unfolded (although it bears noting that these technologies remain in flux and the competitive landscape is susceptible to change). The first significantly successful generative-AI service arguably came neither from Meta—which had been working on chatbots for years and had access to what was arguably the world’s largest database of actual chats—nor from Google. Instead, the breakthrough came from OpenAI, a firm then relatively unknown to the general public.

OpenAI’s ChatGPT service currently accounts for an estimated 60% of visits to online AI tools (though reliable numbers are somewhat elusive).[46] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than TikTok, the previous record holder.[47] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[48] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[49] In short, as of this writing, ChatGPT appears to be the most popular chatbot. The entry of large players such as Google Bard or Meta AI appears to have had little effect thus far on its leading position.[50]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[51] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[52]

This raises several crucial questions: how have these AI upstarts managed to be so successful, and is their success just a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either of these questions dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it, following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[53]

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[54]

In other words, being the firm with the most data appears to be far less important than having enough data. Moreover, this lower bar may be accessible to far more firms than one might initially think. And obtaining sufficient data could become easier still—that is, the volume of required data could shrink further—with technological progress. For instance, synthetic data may provide an adequate substitute for real-world data,[55] or may even outperform it.[56] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[57]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, whereas other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[58]

Consider, for instance, a user who wants to generate an image of a basketball. Using a model trained on an indiscriminate range and number of public photos in which a basketball appears surrounded by copious other image data, the user may end up with an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images could readily yield far superior results.[59] In one important example:

The model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[60]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than maximizing training datasets.[61] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[62] Second, the real challenge to creating innovative AI lies not so much in collecting data, but in creating innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[63]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be those held by large incumbent platforms in other markets. They might instead be data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[64]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data may often confer marginal benefits, there is little evidence that these benefits are ultimately decisive.[65] As a result, incumbent platforms’ access to vast numbers of users and troves of data in their primary markets might only marginally affect their competitiveness in AI markets.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[66] Examples of this abound in digital markets. Google overthrew Yahoo in search, despite initially having access to far fewer users and far less data. Google and Apple overcame Microsoft in the smartphone operating-system market, despite having comparatively tiny ecosystems (at the time) to leverage. TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger userbases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[67] and TikTok’s clever algorithm) appear to have played far more significant roles than the firms’ initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than with what data it did or did not possess. Going forward, OpenAI and its rivals’ relative abilities to offer and monetize compelling use cases by offering custom versions of their generative-AI technologies will arguably play a much larger role than (and contribute to) their ownership of data.[68] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data). For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[69] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to gather and organize the right information is more important than simply owning vast troves of data.[70] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform. Indeed, if data ownership consistently conferred a significant competitive advantage, these new AI firms would not be where they are today.

This does not, of course, mean that data is worthless. Rather, it means that competition authorities should not assume that the mere possession of data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm are premature.

D. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data-network effects are not the source of barriers to entry that they are sometimes made out to be. The picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[71]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five primary lessons emerge:

  1. Data can be (very) valuable, but beyond a certain threshold, those benefits tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps those that stem from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, DALL-E, Midjourney, etc.

This suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma,”[72] something about the incumbent platforms’ existing services and capabilities might have been holding them back in this emerging industry. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI industry starts addressing issues of monetization and scale.[73] But it does mean that assumptions about a firm’s market power based primarily on its possession of data are likely to be off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative-AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse those desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms—although nothing is certain at this stage.

Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms. This is particularly relevant in the context of merger control. An acquisition (or an “acqui-hire”) by a “big tech” company not only entails, in principle, a minor risk of harming competition (it is not a horizontal merger),[74] but could also create a stronger competitor to the current market leaders.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), the remedies under contemplation may do more harm than good. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative AI.[75] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[76]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that has led competition authorities to believe data-incumbency advantages are likely to harm competition in generative AI—or even in the data-intensive Web 2.0 markets that preceded it. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

II. Merger Policy and AI

Policymakers have expressed particular concern about the anticompetitive potential of deals wherein AI startups obtain funding from incumbent tech firms, even in cases where these strategic partnerships cannot be considered mergers in the antitrust sense (because there is no control exercised by one firm over the other). To date, there is no evidence to support differentiated scrutiny for mergers involving AI firms or, in general, firms working with information technology. The view that so-called “killer acquisitions,” for instance, pose a significant competition risk in AI markets is not supported by solid evidence.[77] To the contrary, there is reason to believe these acquisitions bolster competition by allowing larger firms to acquire capabilities relevant to innovation, and by increasing incentives to invest for startup founders.[78]

Companies with “deep pockets” that invest in AI startups may provide those firms the resources to compete with prevailing market leaders. Firms like Amazon, Google, Meta, and Microsoft, for instance, have been investing to create their own microchips capable of building AI systems, aiming to be less dependent on Nvidia.[79] The tributaries of this flow of funds could serve to enhance competition at all levels of the AI industry.[80]

A. Existing AI Partnerships Are Unlikely to Be Anticompetitive

Some jurisdictions have also raised concerns regarding recent partnerships among big tech firms and AI “unicorns,”[81] in particular, Amazon’s partnership with Anthropic; Microsoft’s partnership with Mistral AI; and Microsoft’s hiring of former Inflection AI employees (including, notably, founder Mustafa Suleyman) and related arrangements with the company. Publicly available information, however, suggests that these transactions may not warrant merger-control investigation, let alone the heightened scrutiny that comes with potential Phase II proceedings. At the very least, given the AI industry’s competitive landscape, there is little to suggest these transactions merit closer scrutiny than similar deals in other sectors.

Overenforcement in the field of generative AI could paradoxically engender the very harms that policymakers are seeking to avert. Preventing big tech firms from competing in these markets (for example, by threatening competition intervention as soon as they build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading generative-AI firms in check. In short, while competition in AI markets is important,[82] trying naïvely to hold back tech firms that are incumbents in adjacent markets, out of misguided fears that they will come to dominate this space, is likely to do more harm than good.

At a more granular level, there are important reasons to believe these kinds of agreements will have no negative impact on competition and may, in fact, benefit consumers—e.g., by enabling those startups to raise capital and deploy their services at an even larger scale. In other words, they do not bear any of the prima facie traits of “killer acquisitions,” or even of the acquisition of “nascent potential competitors.”[83]

Most importantly, these partnerships all involve the acquisition of minority stakes and do not entail any change of control over the target companies. Amazon, for instance, will not have “ownership control” of Anthropic. The precise number of shares acquired has not been made public, but a reported investment of $4 billion in a company valued at $18.4 billion does not give Amazon a majority stake or sufficient voting rights to control the company or its competitive strategy.[84] It has also been reported that the deal will not give Amazon any seats on the Anthropic board or special voting rights (such as the power to veto some decisions).[85] There is thus little reason to believe Amazon has acquired indirect or de facto control over Anthropic.

Microsoft’s investment in Mistral AI is even smaller, in both absolute and relative terms. Microsoft is reportedly investing just $16 million in a company valued at $2.1 billion.[86] This represents less than 1% of Mistral’s equity, making it all but impossible for Microsoft to exert any significant control or influence over Mistral AI’s competitive strategy. There have similarly been no reports of Microsoft acquiring seats on Mistral AI’s board or any special voting rights. We can therefore be confident that the deal will not affect competition in AI markets.
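The back-of-the-envelope arithmetic behind these minority-stake observations can be sketched as follows. This is purely illustrative: it uses the reported investment amounts and valuations cited above, while the actual share counts, deal structures, and terms have not been publicly disclosed, so the implied stakes are rough upper-bound approximations rather than actual ownership figures.

```python
# Illustrative implied-stake arithmetic for the reported deals discussed above.
# Reported figures only; actual share counts and deal terms are not public.

def implied_stake(investment: float, valuation: float) -> float:
    """Return the equity stake implied by an investment at a given valuation,
    as a fraction (e.g., 0.25 means 25%)."""
    return investment / valuation

# Amazon's reported $4 billion investment in Anthropic ($18.4 billion valuation)
amazon_anthropic = implied_stake(4.0e9, 18.4e9)

# Microsoft's reported $16 million investment in Mistral AI ($2.1 billion valuation)
microsoft_mistral = implied_stake(16.0e6, 2.1e9)

print(f"Amazon/Anthropic implied stake: {amazon_anthropic:.1%}")    # well short of a majority
print(f"Microsoft/Mistral implied stake: {microsoft_mistral:.2%}")  # less than 1% of equity
```

On these reported figures, the implied stakes are roughly 22% and under 1%, respectively, which is consistent with the text’s observation that neither deal confers majority ownership or the voting power needed to control the target’s competitive strategy.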

Much the same applies to Microsoft’s dealings with Inflection AI. Microsoft hired two of the company’s three founders (which currently does not fall under the scope of merger laws), and also paid $620 million for nonexclusive rights to sell access to the Inflection AI model through its Azure Cloud.[87] Admittedly, the latter could entail (depending on the deal’s specifics) some limited control over Inflection AI’s competitive strategy, but there is currently no evidence to suggest this will be the case.

Finally, none of these deals entail any competitively significant behavioral commitments from the target companies. There are no reports of exclusivity agreements or other commitments that would restrict third parties’ access to these firms’ underlying AI models. Again, this means the deals are extremely unlikely to negatively impact the competitive landscape in these markets.

B. AI Partnerships Increase Competition

As discussed in the previous section, the AI partnerships that have recently grabbed antitrust headlines are unlikely to harm competition. They do, however, have significant potential to bolster competition in generative-AI markets by enabling new players to scale up rapidly and to challenge more established players by leveraging the resources of incumbent tech platforms.

The fact that AI startups willingly agree to the aforementioned AI partnerships suggests this source of funding presents unique advantages for them; otherwise, they would have pursued capital through other avenues. The question for antitrust policymakers is whether this advantage is merely an anticompetitive premium, paid by big tech platforms to secure monopoly rents, or whether the investing firms are bringing something else to the table. As we discussed in the previous section, there is little reason to believe these partnerships are driven by anticompetitive motives. More importantly, however, these deals may present important advantages for AI startups that, in turn, are likely to boost competition in these burgeoning markets.

To start, partnerships with so-called big tech firms are likely a way for AI startups to rapidly obtain equity financing. While this lies beyond our area of expertise, there is ample economic literature to suggest that debt and equity financing are not equivalent for firms.[88] Interestingly for competition policy, there is evidence to suggest firms tend to favor equity over debt financing when they operate in highly competitive product markets.[89]

Furthermore, there may be reasons for AI startups to turn to incumbent big tech platforms, rather than to other partners, to obtain financing (though there is evidence these firms are also raising significant amounts of money from other sources).[90] In short, big tech platforms have a longstanding reputation for deep pockets, as well as a healthy appetite for risk. Because of the relatively small amounts at stake—at least, relative to the platforms’ market capitalizations—these firms may be able to move faster than rivals, for whom investments of this sort may present more significant risks. This may be a key advantage in the fast-paced world of generative AI, where obtaining funding and scaling rapidly could be the difference between becoming the next GAFAM or an also-ran.

Partnerships with incumbent tech platforms may also create valuable synergies that enable startups to extract better terms than would otherwise be the case (because the deal creates more surplus for the parties to distribute among themselves). Potential synergies include better integration of generative-AI services into existing platforms. Indeed, several big tech platforms appear to see the inevitable integration of AI into their services as a challenge similar to the shift from desktop to mobile internet, which saw several firms thrive, while others fell by the wayside.[91]

Conversely, incumbent tech platforms may have existing infrastructure that AI startups can use to scale up faster and more cheaply than would otherwise be the case. Running startups’ generative-AI services on top of this infrastructure may enable much faster deployment of generative-AI technology.[92] Importantly, if these joint strategies entail relationship-specific investments on the part of one or both partners, then big tech platforms taking equity positions in AI startups may be an important facilitator to prevent holdup.[93] Both of these possibilities are perfectly summed up by Swami Sivasubramanian, Amazon’s vice president of Data and AI, when commenting on Amazon’s partnership with Anthropic:

Anthropic’s visionary work with generative AI, most recently the introduction of its state-of-the-art Claude 3 family of models, combined with Amazon’s best-in-class infrastructure like AWS Trainium and managed services like Amazon Bedrock, further unlocks exciting opportunities for customers to quickly, securely, and responsibly innovate with generative AI. Generative AI is poised to be the most transformational technology of our time, and we believe our strategic collaboration with Anthropic will further improve our customers’ experiences, and look forward to what’s next.[94]

All of this can be expected to have a knock-on effect on innovation and competition in generative-AI markets. To put it simply, a leading firm like OpenAI might welcome the prospect of competition authorities blocking the potential funding of one of its rivals. It may also stand to benefit if incumbent tech firms are prevented from rapidly upping their generative-AI game via partnerships with other AI startups. In short, preventing AI startups from obtaining funding from big tech platforms could not only arrest those startups’ growth, but also harm long-term competition in the burgeoning AI industry.

III. Market Definition in AI

The question of market definition, long a cornerstone of antitrust analysis, is of particular importance and complexity in the context of AI. The difficulty in defining relevant markets accurately stems not only from the novelty of AI technologies, but from their inherent heterogeneity and the myriad ways they intersect with existing markets and business models. In short, it is not yet clear how to determine the boundaries of markets for AI-powered products. Traditional approaches to market definition will ultimately provide the correct tools for this task, but, as we discuss below, we do not yet know the right questions to ask.

Regulators and policymakers must develop a nuanced understanding of AI markets, one that moves beyond broad generalizations and marketing hyperbole to examine the specific characteristics of these emerging technologies and their impacts on various product and service markets.

There are three main things that need to be at the forefront of competition authorities’ minds when they think about market definition in AI products and services. First, they must understand that AI is not a single thing, but rather a composite category comprising many distinct goods and services. Second, and relatedly, they must look beyond the AI marketing hype to recognize how the extremely heterogeneous products landscape of “AI” intersects with an equally variegated consumer-demand landscape. Finally, they must acknowledge how little we know about these nascent markets, and that the most important priority at the moment is simply to ask the right questions that will lead to sound competition policy.

A. AI Is Difficult to Define and Not Monolithic

The task of defining AI for the purposes of antitrust analysis is fraught with complexity, stemming from the multifaceted nature of AI technologies and their diverse applications across industries. It is imperative to recognize that AI does not constitute a monolithic entity or a singular market, but rather encompasses a heterogeneous array of technologies, techniques, and applications that defy simplistic categorization.[95]

At its core, the “AI Stack” comprises multiple layers of interrelated yet distinct technological components. At the foundational level, we find specialized hardware such as semiconductors, graphics processing units (GPUs), and tensor processing units (TPUs), as well as other specialized chipsets designed to accelerate the computationally intensive tasks associated with AI. These hardware components, while critical to AI functionality, also serve broader markets beyond AI applications (e.g., crypto and gaming), complicating efforts to delineate clear market boundaries.

The data layer presents another dimension of complexity. AI systems rely on vast quantities of both structured and unstructured data for training and operation.[96] The sourcing, curation, and preparation of this data constitute distinct markets within the AI ecosystem, each with its own competitive dynamics and potential barriers to entry.

Moving up the stack, we encounter the algorithmic layer, where a diverse array of machine-learning techniques—including, but not limited to, supervised learning, unsupervised learning, and reinforcement learning[97]—are employed. These algorithmic approaches, while fundamental to AI functionality, are not uniform in their application or market impact. Different AI applications may utilize distinct combinations of these techniques,[98] potentially serving disparate markets and consumer needs.

At the application level, the heterogeneity of AI becomes most apparent. From natural-language processing and computer vision to predictive analytics and autonomous vehicles, AI technologies manifest in a multitude of forms, each potentially constituting a distinct relevant market for antitrust purposes. Moreover, these AI applications can intersect with and compete against non-AI solutions, further blurring the boundaries of what might be considered an “AI market.”

The deployment models for AI technologies add yet another layer of complexity to the task of defining antitrust-relevant markets. Cloud-based AI services, edge-computing solutions, and on-premises AI deployments may each serve different market segments and face distinct competitive pressures. The ability of firms to make “build or buy” decisions regarding AI capabilities further complicates the delineation of clear market boundaries.[99]

B. Look Beyond the Marketing Hype

The application of antitrust principles to AI markets necessitates a rigorous analytical approach that transcends superficial categorizations and marketing rhetoric. It is imperative for enforcement authorities to eschew preconceived notions and popular narratives surrounding AI, and to focus instead on empirical evidence and careful economic analysis, in order to accurately assess competitive dynamics in AI-adjacent markets.

The allure of AI as a revolutionary technology has led to a proliferation of marketing claims and industry hype[100] that often may obscure the true nature and capabilities of AI systems. This obfuscation presents a significant challenge for antitrust authorities, who must disentangle factual competitive realities from speculative or exaggerated assertions about AI’s market impact. This task is further complicated by the rapid pace of technological advancement in the field, which can render even recent market analyses obsolete.

A particularly pernicious misconception that must be addressed is the notion that AI technologies operate in a competitive vacuum, distinct from and impervious to competition from non-AI alternatives. This perspective risks leading antitrust authorities to define markets too narrowly, potentially overlooking significant competitive constraints from traditional technologies or human-driven services.

Consider, for instance, the domain of natural-language processing. While AI-powered language models have made significant strides in recent years, they often compete directly with human translators, content creators, and customer-service representatives. Similarly, in the realm of data analysis, AI systems may vie for market share not only with other AI solutions, but also with traditional statistical methods and human analysts. Failing to account for these non-AI competitors in market-definition exercises could result in a distorted view of market power and competitive dynamics.

Moreover, the tendency to treat AI as a monolithic entity obscures the reality that many AI-powered products and services are, in fact, hybrid solutions that combine AI components with traditional software and human oversight.[101] This hybridization further complicates market-definition efforts, as it becomes necessary to assess the degree to which the AI element of a product or service contributes to its market position and substitutability.

C. Current Lack of Knowledge About Relevant Markets

It is crucial to acknowledge at this juncture the profound limitations in our current understanding of how AI technologies will ultimately shape competitive landscapes across various industries. This recognition of our informational constraints should inform a cautious and empirically grounded approach to market definition in the context of AI.

The dynamic nature of AI development renders many traditional metrics for market definition potentially unreliable or prematurely restrictive. Market share, often a cornerstone of antitrust analysis, may prove particularly volatile in AI markets, where technological breakthroughs can rapidly alter competitive positions. Moreover, the boundaries between distinct AI applications and markets remain fluid, with innovations in one domain frequently finding unexpected applications in others, and thereby further complicating efforts to delineate stable market boundaries.

In this context, Jonathan Barnett’s observations regarding the dangers of preemptive antitrust approaches in nascent markets are particularly salient.[102] Barnett argues persuasively that, at the early stages of a market’s development, uncertainty concerning the competitive effects of certain business practices is likely to be especially high.[103] This uncertainty engenders a significant risk of false-positive error costs, whereby preemptive intervention may inadvertently suppress practices that are either competitively neutral or potentially procompetitive.[104]

The risk of regulatory overreach is particularly acute in the realm of AI, where the full spectrum of potential applications and competitive dynamics remains largely speculative. Premature market definition and subsequent enforcement actions based on such definitions could stifle innovation and impede the natural evolution of AI technologies and business models.

Further complicating matters is the fact that what constitutes a relevant product in AI markets is often ambiguous and subject to rapid change. The modular nature of many AI systems, where components can be combined and reconfigured to serve diverse functions, challenges traditional notions of product markets. For instance, a foundational language model might serve as a critical input for a wide array of downstream applications, from chatbots to content-generation tools, each potentially constituting a distinct product market. The boundaries between these markets, and the extent to which they overlap or remain distinct, are likely to remain in flux in the near future.

Given these uncertainties, antitrust authorities must adopt a posture of epistemic humility when approaching market definition in the context of AI. This approach of acknowledged uncertainty and adaptive analysis does not imply regulatory paralysis. Rather, it calls for a more nuanced and dynamic form of antitrust oversight, one that remains vigilant to potential competitive harms while avoiding premature or overly rigid market definitions that could impede innovation.

Market definition should reflect our best understanding of both AI and AI markets. Since this understanding is still very much in an incipient phase, antitrust authorities should view their current efforts not as definitive pronouncements on the structure of AI markets, but as iterative steps in an ongoing process of learning and adaptation. By maintaining this perspective, regulators can hope to strike a balance between addressing legitimate competitive concerns and fostering an environment conducive to continued innovation and dynamic competition in the AI sector.

D. Key Questions to Ask

Finally, the most important role for enforcement authorities to play at the moment is to ask the right questions, which will help them develop an optimal analytical framework for defining relevant markets in subsequent competition analyses. This framework should be predicated on a series of inquiries designed to elucidate the true nature of competitive dynamics in AI-adjacent markets. While the specific contours of relevant markets may remain elusive, the process of rigorous questioning can provide valuable insights and guide enforcement decisions.

Two fundamental questions emerge as critical starting points for any attempt to define relevant markets in AI contexts.

First, “Who are the consumers, and what is the product or service?” This seemingly straightforward inquiry belies a complex web of considerations in AI markets. The consumers of AI technologies and services are often not end-users, but rather, intermediaries that participate in complex value chains. For instance, the market for AI chips encompasses not only direct purchasers like cloud-service providers, but also downstream consumers of AI-powered applications. Similarly, the product or service in question may not be a discrete AI technology, but rather a bundle of AI and non-AI components, or even a service powered by AI but indistinguishable to the end user from non-AI alternatives.

The heterogeneity of AI consumers and products necessitates a granular approach to market definition. Antitrust authorities must carefully delineate between different levels of the AI value chain, considering the distinct competitive dynamics at each level. This may involve separate analyses for markets in AI inputs (such as specialized hardware or training data), AI development tools, and AI-powered end-user applications.

Second, and perhaps more crucially, “Does AI fundamentally transform the product or service in a way that creates a distinct market?” This question is at the heart of the challenge in defining AI markets. It requires a nuanced assessment of the degree to which AI capabilities alter the nature of a product or service from the perspective of consumers.

In some cases, AI’s integration into products or services may represent merely an incremental improvement, not warranting the delineation of a separate market. For example, AI-enhanced spell-checking in word-processing software might not constitute a distinct market from traditional spell-checkers if consumers do not perceive a significant functional difference.

Conversely, in other cases, AI may enable entirely new functionalities or levels of performance that create distinct markets. Large language models capable of generating human-like text, for instance, might be considered to operate in a market separate from traditional writing aids or information-retrieval tools (or not, depending on how consumers weigh the total costs and benefits of each option).

The analysis must also consider the potential for AI to blur the boundaries between previously distinct markets. As AI systems become more versatile, they may compete across multiple traditional product categories, challenging conventional market definitions.

In addressing these questions, antitrust authorities should consider several additional factors:

  1. The degree of substitutability between AI and non-AI solutions, from the perspective of both direct purchasers and end-users.
  2. The extent to which AI capabilities are perceived as essential or differentiating factors by consumers in the relevant market.
  3. The potential for rapid evolution in AI capabilities and consumer preferences, which may necessitate dynamic market definitions.
  4. The presence of switching costs or lock-in effects, which could influence market boundaries.
  5. The geographic scope of AI markets, which may transcend traditional national or regional boundaries.

It is crucial to note that these questions do not yield simple or static answers. Rather, they serve as analytical tools to guide ongoing assessment of AI markets. Antitrust authorities must be prepared to revisit and refine their market definitions as technological capabilities evolve and market dynamics shift.

Moreover, the process of defining relevant markets in the context of AI should not be viewed as an end in itself, but as a means to understand competitive dynamics and to inform enforcement decisions. In some cases, traditional market-definition exercises may prove insufficient, necessitating alternative analytical approaches that focus on competitive effects or innovation harms.

By embracing this questioning approach, antitrust authorities can develop a more nuanced and adaptable framework for market definition in AI contexts. This approach would acknowledge the complexities and uncertainties inherent in AI markets, while providing a structured methodology to assess competitive dynamics. As our understanding of AI markets deepens, this framework will need to evolve further, ensuring that antitrust enforcement remains responsive to the unique challenges posed by artificial-intelligence technologies.

[1] Press Release, Justice Department and Stanford University to Cohost Workshop “Promoting Competition in Artificial Intelligence”, U.S. Justice Department (May 21, 2024), https://www.justice.gov/opa/pr/justice-department-and-stanford-university-cohost-workshop-promoting-competition-artificial.

[2] Artificial intelligence is, of course, not a market (at least not a relevant antitrust market). Within the realm of what is called “AI,” companies offer myriad products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[3] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[4] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23. (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors.”).

[5] See, generally, Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010 (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[6] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (Jan. 9, 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (Nov. 3, 2023), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[7] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”); Bertin Martens, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[8] See Rana Foroohar, The Great US-Europe Antitrust Divide, Financial Times (Feb. 5, 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14.

[9] See, e.g., Press Release, European Commission, supra note 6.

[10] See infra, Section I.B. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has LLaMa. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[11] Press Release, European Commission, supra note 6.

[12] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023), at 4, https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[13] Jonathan Kanter, Remarks at the Promoting Competition in AI Conference (May 30, 2024), https://youtu.be/yh--1AGf3aU?t=424.

[14] Karin Matussek, AI Will Fuel Antitrust Fires, Big Tech’s German Nemesis Warns, Bloomberg (Jun. 26, 2024), https://www.bloomberg.com/news/articles/2024-06-26/ai-will-fuel-antitrust-fires-big-tech-s-german-nemesis-warns?srnd=technology-vp.

[15] Id.

[16] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (Sep. 15, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[17] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (Dec. 14, 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set; see also, A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research; 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation (last visited Jan. 18, 2023).

[18] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (Oct. 4, 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (Nov. 20, 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; see also, Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (Jul. 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon.

[19] See infra Section I.C.

[20] Josh Sisco, POLITICO PRO Q&A: Exit interview with DOJ Chief Antitrust Economist Susan Athey, Politico Pro (Jul. 2, 2024), https://subscriber.politicopro.com/article/2024/07/politico-pro-q-a-exit-interview-with-doj-chief-antitrust-economist-susan-athey-00166281.

[21] Belle Lin, Open-Source Companies Are Sharing Their AI Free. Can They Crack OpenAI’s Dominance?, Wall St. J. (Mar. 21, 2024), https://www.wsj.com/articles/open-source-companies-are-sharing-their-ai-free-can-they-crack-openais-dominance-26149e9c.

[22] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[23] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233, https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, http://wrap.warwick.ac.uk/134220 (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[24] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[25] See also Yun, supra note 23 at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[26] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects), see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1281, 1344 (2021).

[27] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[28] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms….”

[29] Id.

[30] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 100880 (2021).

[31] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Econ. 143, 167 (2020).

[32] See Hagiu & Wright, supra note 27.

[33] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf; see also Manne & Auer, supra note 26, at 1330.

[34] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[35] Id. at 34.

[36] Id. at 35. To its credit, it should be noted, the Furman Report does counsel caution before mandating access to data as a remedy to promote competition. See id. at 75. That said, the Furman Report maintains that such a remedy should remain on the table because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. The evidence, however, does not show this.

[37] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf, at 455.

[38] Id. at 896.

[39] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules.

[40] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[41] Amended Complaint (D.D.C), supra note 5 at ¶37.

[42] Amended Complaint (E.D. Va), supra note 5 at ¶8.

[43] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[44] Merger Assessment Guidelines, Competition and Mkts. Auth (2021) at ¶7.19(e), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[45] Furman Report, supra note 34, at ¶4.

[46] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c; Sujan Sarkar, AI Industry Analysis: 50 Most Visited AI Tools and Their 24B+ Traffic Behavior, Writerbuddy (last visited Jul. 15, 2024), https://writerbuddy.ai/blog/ai-industry-analysis.

[47] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[48] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[49] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (Jun. 5, 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard.

[50] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (Sep. 27, 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023), https://blog.google/technology/ai/bard-google-ai-search-updates.

[51] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023), https://yon.fun/midjourney-statistics; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023), https://approachableai.com/midjourney-statistics.

[52] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023), https://blog.google/products/search/google-search-generative-ai-october-update; Imagine with Meta AI, Meta (last visited Jan. 12, 2024), https://imagine.meta.com.

[53] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[54] Manne & Auer, supra note 26, at 1345.

[55] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[56] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[57] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[58] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan. 15, 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[59] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023), https://arxiv.org/abs/2309.16671.

[60] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[61] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project.

[62] Id.; see also GSM8K, Papers with Code (last visited Jan. 18, 2023), https://paperswithcode.com/dataset/gsm8k; MATH Dataset, GitHub (last visited Jan. 18, 2024), https://github.com/hendrycks/math.

[63] Lee, supra note 61.

[64] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780).

[65] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[66] Or, as John Yun put it, data is only a small component of digital firms’ production function. See Yun, supra note 23, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[67] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History-Computer (Aug. 8, 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly.

[68] Introducing the GPT Store, OpenAI (Jan. 10, 2024), https://openai.com/blog/introducing-the-gpt-store.

[69] See Michael Schade, How ChatGPT and Our Language Models Are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (Jan. 1, 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021; Danny Hernandez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020), https://arxiv.org/abs/2005.04305.

[70] See Yun, supra note 23 at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[71] Lerner, supra note 64, at 4-5 (emphasis added).

[72] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[73] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[74] Antitrust merger enforcement has long assumed that horizontal mergers are more likely to cause problems for consumers than vertical mergers. See Geoffrey A. Manne, Dirk Auer, Brian Albrecht, Eric Fruits, Daniel J. Gilman, & Lazar Radic, Comments of the International Center for Law and Economics on the FTC & DOJ Draft Merger Guidelines (Sep. 18, 2023), https://laweconcenter.org/resources/comments-of-the-international-center-for-law-and-economics-on-the-ftc-doj-draft-merger-guidelines.

[75] See Hagiu & Wright, supra note 27, at 27 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”); see also Lerner, supra note 64.

[76] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).

[77] See Jonathan M. Barnett, “Killer Acquisitions” Reexamined: Economic Hyperbole in the Age of Populist Antitrust, 3 U. Chi. Bus. L. Rev. 39 (2023).

[78] Id. at 85 (“At the same time, these transactions enhance competitive conditions by supporting the profit expectations that elicit VC investment in the startups that deliver the most transformative types of innovation to the biopharmaceutical ecosystem (and, in some cases, mature into larger firms that can challenge incumbents).”).

[79] Cade Metz, Karen Weise, & Mike Isaac, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, N.Y. Times (Jan. 29, 2024), https://www.nytimes.com/2024/01/29/technology/ai-chips-nvidia-amazon-google-microsoft-meta.html.

[80] See, e.g., Chris Metinko, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, CrunchBase (Jun. 12, 2024), https://news.crunchbase.com/ai/msft-nvda-lead-big-tech-startup-investment.

[81] CMA Seeks Views on AI Partnerships and Other Arrangements, Competition and Mkts. Auth. (Apr. 24, 2024), https://www.gov.uk/government/news/cma-seeks-views-on-ai-partnerships-and-other-arrangements.

[82] As noted supra note 2, companies offer myriad “AI” products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[83] Start-ups, Killer Acquisitions and Merger Control, OECD (2020), available at https://web-archive.oecd.org/2020-10-16/566931-start-ups-killer-acquisitions-and-merger-control-2020.pdf.

[84] Kate Rooney & Hayden Field, Amazon Spends $2.75 Billion on AI Startup Anthropic in Its Largest Venture Investment Yet, CNBC (Mar. 27, 2024), https://www.cnbc.com/2024/03/27/amazon-spends-2point7b-on-startup-anthropic-in-largest-venture-investment.html.

[85] Id.

[86] Tom Warren, Microsoft Partners with Mistral in Second AI Deal Beyond OpenAI, The Verge (Feb. 26, 2024), https://www.theverge.com/2024/2/26/24083510/microsoft-mistral-partnership-deal-azure-ai.

[87] Mark Sullivan, Microsoft’s Inflection AI Grab Likely Cost More Than $1 Billion, Says An Insider (Exclusive), Fast Company (Mar. 26, 2024), https://www.fastcompany.com/91069182/microsoft-inflection-ai-exclusive; see also, Mustafa Suleyman, DeepMind and Inflection Co-Founder, Joins Microsoft to Lead Copilot, Microsoft Corporate Blogs (Mar. 19, 2024), https://blogs.microsoft.com/blog/2024/03/19/mustafa-suleyman-deepmind-and-inflection-co-founder-joins-microsoft-to-lead-copilot; Krystal Hu & Harshita Mary Varghese, Microsoft Pays Inflection $650 Mln in Licensing Deal While Poaching Top Talent, Source Says, Reuters (Mar. 21, 2024), https://www.reuters.com/technology/microsoft-agreed-pay-inflection-650-mln-while-hiring-its-staff-information-2024-03-21; The New Inflection: An Important Change to How We’ll Work, Inflection (Mar. 19, 2024), https://inflection.ai/the-new-inflection; Julie Bort, Here’s How Microsoft Is Providing a ‘Good Outcome’ for Inflection AI VCs, as Reid Hoffman Promised, TechCrunch (Mar. 21, 2024), https://techcrunch.com/2024/03/21/microsoft-inflection-ai-investors-reid-hoffman-bill-gates.

[88] See, e.g., Paul Marsh, The Choice Between Equity and Debt: An Empirical Study, 37 The J. of Finance 121, 142 (1982) (“First, it demonstrates that companies are heavily influenced by market conditions and the past history of security prices in choosing between equity and debt. Indeed, these factors appeared to be far more significant in our model than, for example, other variables such as the company’s existing financial structure. Second, this study provides evidence that companies do appear to make their choice of financing instrument as though they had target levels in mind for both the long term debt ratio, and the ratio of short term to total debt. Finally, the results are consistent with the notion that these target levels are themselves functions of company size, bankruptcy risk, and asset composition.”); see also, Armen Hovakimian, Tim Opler, & Sheridan Titman, The Debt-Equity Choice, 36 J. of Financial and Quantitative Analysis 1, 3 (2001) (“Our results suggest that, although pecking order considerations affect corporate debt ratios in the short-run, firms tend to make financing choices that move them toward target debt ratios that are consistent with tradeoff models of capital structure choice. For example, our findings confirm that more profitable firms have, on average, lower leverage ratios. But we also find that more profitable firms are more likely to issue debt rather than equity and are more likely to repurchase equity rather than retire debt. Such behavior is consistent with our conjecture that the most profitable firms become under-levered and that firms’ financing choices tend to offset these earnings-driven changes in their capital structures.”); see also, Sabri Boubaker, Wael Rouatbi, & Walid Saffar, The Role of Multiple Large Shareholders in the Choice of Debt Source, 46 Financial Management 241, 267 (2017) (“Our analysis shows that firms controlled by more than one large shareholder tend to rely more heavily on bank debt financing. Moreover, we find that the proportion of bank debt in total debt is significantly higher for firms with higher contestability of the largest controlling owner’s power.”).

[89] Sabri Boubaker, Walid Saffar, & Syrine Sassi, Product Market Competition and Debt Choice, 49 J. of Corp. Finance 204, 208 (2018) (“Our findings that firms substitute away from bank debt when faced with intense market pressure echo the intuition in previous studies that the disciplinary force of competition substitutes for the need to discipline firms through other forms of governance.”).

[90] See, e.g., George Hammond, Andreessen Horowitz Raises $7.2bn and Sets Sights on AI Start-ups, Financial Times (Apr. 16, 2024), https://www.ft.com/content/fdef2f53-f8f7-4553-866b-1c9bfdbeea42; Elon Musk’s xAI Says It Raised $6 Billion to Develop Artificial Intelligence, Moneywatch (May 27, 2024), https://www.cbsnews.com/news/elon-musk-xai-6-billion; Krystal Hu, AI Search Startup Genspark Raises $60 Million in Seed Round to Challenge Google, Reuters (Jun. 18, 2024), https://www.reuters.com/technology/artificial-intelligence/ai-search-startup-genspark-raises-60-million-seed-round-challenge-google-2024-06-18; Visa to Invest $100 Million in Generative AI for Commerce and Payments, PYMNTS (Oct. 2, 2023), https://www.pymnts.com/artificial-intelligence-2/2023/visa-to-invest-100-million-in-generative-ai-for-commerce-and-payments.

[91] See, e.g., Eze Vidra, Is Generative AI the Biggest Platform Shift Since Cloud and Mobile?, VC Cafe (Mar. 6, 2023), https://www.vccafe.com/2023/03/06/is-generative-ai-the-biggest-platform-shift-since-cloud-and-mobile. See also, OpenAI and Apple Announce Partnership to Integrate ChatGPT into Apple Experiences, OpenAI (Jun. 10, 2024), https://openai.com/index/openai-and-apple-announce-partnership (“Apple is integrating ChatGPT into experiences within iOS, iPadOS, and macOS, allowing users to access ChatGPT’s capabilities—including image and document understanding—without needing to jump between tools.”). See also, Yusuf Mehdi, Reinventing Search With a new AI-powered Microsoft Bing and Edge, Your Copilot for the Web, Microsoft Official Blog (Feb. 7, 2023), https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web (“‘AI will fundamentally change every software category, starting with the largest category of all – search,’ said Satya Nadella, Chairman and CEO, Microsoft. ‘Today, we’re launching Bing and Edge powered by AI copilot and chat, to help people get more from search and the web.’”).

[92] See, e.g., Amazon and Anthropic Deepen Their Shared Commitment to Advancing Generative AI, Amazon (Mar. 27, 2024), https://www.aboutamazon.com/news/company-news/amazon-anthropic-ai-investment (“Global organizations of all sizes, across virtually every industry, are already using Amazon Bedrock to build their generative AI applications with Anthropic’s Claude AI. They include ADP, Amdocs, Bridgewater Associates, Broadridge, CelcomDigi, Clariant, Cloudera, Dana-Farber Cancer Institute, Degas Ltd., Delta Air Lines, Druva, Enverus, Genesys, Genomics England, GoDaddy, HappyFox, Intuit, KT, LivTech, Lonely Planet, LexisNexis Legal & Professional, M1 Finance, Netsmart, Nexxiot, Parsyl, Perplexity AI, Pfizer, the PGA TOUR, Proto Hologram, Ricoh USA, Rocket Companies, and Siemens.”).

[93] Ownership of another firm’s assets is widely seen as a solution to contractual incompleteness. See, e.g., Sanford J. Grossman & Oliver D. Hart, The Costs and Benefits of Ownership: A Theory of Vertical and Lateral Integration, 94 J. Polit. Econ. 691, 716 (1986) (“When it is too costly for one party to specify a long list of the particular rights it desires over another party’s assets, then it may be optimal for the first party to purchase all rights except those specifically mentioned in the contract. Ownership is the purchase of these residual rights of control.”).

[94] See Amazon Staff, supra note 92.

[95] As the National Security Commission on Artificial Intelligence has observed: “AI is not a single technology breakthrough… The race for AI supremacy is not like the space race to the moon. AI is not even comparable to a general-purpose technology like electricity. However, what Thomas Edison said of electricity encapsulates the AI future: ‘It is a field of fields … it holds the secrets which will reorganize the life of the world.’ Edison’s astounding assessment came from humility. All that he discovered was ‘very little in comparison with the possibilities that appear.’” National Security Commission on Artificial Intelligence, Final Report, 7 (2021), available at https://www.dwt.com/-/media/files/blogs/artificial-intelligence-law-advisor/2021/03/nscai-final-report–2021.pdf.

[96] See, e.g., Structured vs Unstructured Data, IBM Cloud Education (Jun. 29, 2021), https://www.ibm.com/think/topics/structured-vs-unstructured-data; Dongdong Zhang, et al., Combining Structured and Unstructured Data for Predictive Models: A Deep Learning Approach, BMC Medical Informatics and Decision Making (Oct. 29, 2020), https://link.springer.com/article/10.1186/s12911-020-01297-6 (describing generally the use of both structured and unstructured data in predictive models for health care).

[97] For a somewhat technical discussion of all three methods, see generally Eric Benhamou, Similarities Between Policy Gradient Methods (PGM) in Reinforcement Learning (RL) and Supervised Learning (SL), SSRN (2019), https://ssrn.com/abstract=3391216.

[98] Id.

[99] For a discussion of the “buy vs build” decisions firms employing AI undertake, see Jonathan M. Barnett, The Case Against Preemptive Antitrust in the Generative Artificial Intelligence Ecosystem, in Artificial Intelligence and Competition Policy (Alden Abbott and Thibault Schrepel eds., 2024), at 3-6.

[100] See, e.g., Melissa Heikkilä & Will Douglas Heaven, What’s Next for AI in 2024, MIT Tech. Rev. (Jan. 4, 2024), https://www.technologyreview.com/2024/01/04/1086046/whats-next-for-ai-in-2024 (Runway hyping Gen-2 as a major film-production tool that, to date, still demonstrates serious limitations). LLMs, impressive as they are, have been touted as impending replacements for humans across many job categories, but still demonstrate many serious limitations that may ultimately limit their use cases. See, e.g., Melissa Malec, Large Language Models: Capabilities, Advancements, And Limitations, HatchWorksAI (Jun. 14, 2024), https://hatchworks.com/blog/gen-ai/large-language-models-guide.

[101] See, e.g., Hybrid AI: A Comprehensive Guide to Applications and Use Cases, SoluLab, https://www.solulab.com/hybrid-ai (last visited Jul. 12, 2024); Why Hybrid Intelligence Is the Future of Artificial Intelligence at McKinsey, McKinsey & Co. (Apr. 29, 2022), https://www.mckinsey.com/about-us/new-at-mckinsey-blog/hybrid-intelligence-the-future-of-artificial-intelligence; Vahe Andonians, Harnessing Hybrid Intelligence: Balancing AI Models and Human Expertise for Optimal Performance, Cognaize (Apr. 11, 2023), https://blog.cognaize.com/harnessing-hybrid-intelligence-balancing-ai-models-and-human-expertise-for-optimal-performance; Salesforce Artificial Intelligence, Salesforce, https://www.salesforce.com/artificial-intelligence (last visited Jul. 12, 2024) (combines traditional CRM and algorithms with AI modules); AI Overview, Adobe, https://www.adobe.com/ai/overview.html (last visited Jul. 12, 2024) (Adobe packages generative AI tools into its general graphic-design tools).

[102] Barnett supra note 99.

[103] Id. at 7-8.

[104] Id.


A Competition Law & Economics Analysis of Sherlocking

ICLE White Paper

Abstract

Sherlocking refers to an online platform’s use of nonpublic third-party business data to improve its own business decisions—for instance, by mimicking the successful products and services of edge providers. Such a strategy emerges as a form of self-preferencing and, as with other theories about preferential access to data, it has been targeted by some policymakers and competition authorities due to the perceived competitive risks originating from the dual role played by hybrid platforms (acting as both referees governing their platforms, and players competing with the businesses they host). This paper investigates the competitive implications of sherlocking, maintaining that an outright ban is unjustified. First, the paper shows that, by aiming to ensure platform neutrality, such a prohibition would cover scenarios (i.e., the use of nonpublic third-party business data to calibrate business decisions in general, rather than to adopt a pure copycat strategy) that should be analyzed separately. Indeed, in these scenarios, sherlocking may affect different forms of competition (inter-platform v. intra-platform competition). Second, the paper argues that, in either case, the practice’s anticompetitive effects are questionable and that the ban is fundamentally driven by a bias against hybrid and vertically integrated players.

I. Introduction

The dual role some large digital platforms play (as both intermediary and trader) has gained prominence among the economic arguments used to justify the recent wave of regulation hitting digital markets around the world. Many policymakers have expressed concern about potential conflicts of interest among companies that have adopted this hybrid model and that also control important gateways for business users. In other words, the argument goes, some online firms act not only as regulators who set their platforms’ rules and as referees who enforce those rules, but also as market players who compete with their business users. This raises the fear that large platforms could reserve preferential treatment for their own services and products, to the detriment of downstream rivals and consumers. That, in turn, has led to calls for platform-neutrality rules.

Toward this aim, essentially all of the legislative initiatives undertaken around the world in recent years to enhance competition in digital markets have included anti-discrimination provisions that target various forms of self-preferencing. Self-preferencing, it has been said, serves as the symbol of the current competition-policy zeitgeist in digital markets.[1] Indeed, this conduct is seen as enabling leveraging strategies that would grant gatekeepers the chance to entrench their power in core markets and extend it into associated markets.[2]

Against this background, so-called “sherlocking” has emerged as one form of self-preferencing. The term was coined roughly 20 years ago, after Apple updated its own app Sherlock (a search tool on its desktop-operating system) to mimic a third-party application called Watson, which was created by Karelia Software to complement the Apple tool’s earlier version.[3] According to critics of self-preferencing generally and sherlocking in particular, biased intermediation and related conflicts of interest allow gatekeepers to exploit their preferential access to business users’ data to compete against them by replicating successful products and services. The implied assumption is that this strategy is relevant to competition policy even where no intellectual-property rights (IPRs) are infringed and no slavish imitation sanctionable under unfair-competition laws is detected. Indeed, where those rules did apply, sherlocking would already be prevented by their enforcement.

To tackle perceived misuse of gatekeepers’ market position, the European Union’s Digital Markets Act (DMA) introduced a ban on sherlocking.[4] Similar concerns have also motivated requests for intervention in the United States,[5] Australia,[6] and Japan.[7] In seeking to address at least two different theories of gatekeepers’ alleged conflicts of interest, these proposed bans on exploiting access to business users’ data are not necessarily limited to the risk of product imitation, but may include any business decision whatsoever that a platform may make while relying on that data.

In parallel with the regulatory initiatives, the conduct at issue has also been investigated in some antitrust proceedings, which appear to pursue the very same twofold goal. In particular, in November 2020, the European Commission sent a statement of objections to Amazon that argued the company had infringed antitrust rules through the systematic use of nonpublic business data from independent retailers who sell on the Amazon online marketplace in order to benefit Amazon’s own retail business, which directly competes with those retailers.[8] A similar investigation was opened by the UK Competition and Markets Authority (CMA) in July 2022.[9]

Further, as part of the investigation opened into Apple’s App Store rule requiring developers to use Apple’s in-app purchase mechanism to distribute paid apps and/or paid digital content, the European Commission also showed interest in evaluating whether Apple’s conduct might disintermediate competing developers from relevant customer data, while Apple obtained valuable data about those activities and its competitors’ offers.[10] The European Commission and UK CMA likewise launched an investigation into Facebook Marketplace, with accusations that Meta used data gathered from advertisers in order to compete with them in markets where the company is active, such as classified ads.[11]

There are two primary reasons these antitrust proceedings are relevant. First, many of the prohibitions envisaged in regulatory interventions (e.g., the DMA) clearly took inspiration from the antitrust investigations, thus making it important to explore the insights that competition authorities may provide to support an outright ban. Second, given that regulatory intervention will be implemented alongside competition rules (especially in Europe) rather than displace them,[12] sherlocking can be assessed at both the EU and national level against dominant players that are not eligible for “gatekeeper” designation under the DMA. For those non-gatekeeper firms, the practice may still be investigated by antitrust authorities and assessed before courts, aside from the DMA’s per se prohibition. And, of course, sherlocking could also be investigated and assessed even in those jurisdictions where there is no outright ban.

The former is well-illustrated by the German legislature’s decision to empower its national competition authority with a new tool to tackle abusive practices that are similar and functionally equivalent to those covered by the DMA.[13] Indeed, as of January 2021, the Bundeskartellamt may identify positions of particular market relevance (undertakings of “paramount significance for competition across markets”) and assess their possible anticompetitive effects on competition in those areas of digital ecosystems in which individual companies may have a gatekeeper function. Both the initiative’s aims and its list of practices are similar to those of the DMA. They are distinguished primarily by the fact that the German list is exhaustive, and the practices at issue are not prohibited per se, but are subject to a reversal of the burden of proof, allowing firms to provide objective justifications. Most relevant for this analysis, one provision within the German list prohibits designated undertakings from “demanding terms and conditions that permit … processing data relevant for competition received from other undertakings for purposes other than those necessary for the provision of its own services to these undertakings without giving these undertakings sufficient choice as to whether, how and for what purpose such data are processed.”[14]

Unfortunately, none of the above-mentioned EU antitrust proceedings have concluded with a final decision that addresses the merits of sherlocking. This precludes evaluating whether the practice would have survived before the courts. Regarding the Apple investigation, the European Commission dropped the case over App Store rules and issued a new statement of objections that no longer mentions sherlocking.[15] Further, the European Commission and the UK CMA accepted the commitments offered by Amazon to close those investigations.[16] The CMA likewise accepted the commitments offered by Meta.[17]

Those outcomes can be explained by the DMA’s recent entry into force. Indeed, because of the need to comply with the new regulation, players designated as gatekeepers likely have lost interest in challenging antitrust investigations that target the very same conduct prohibited by the DMA.[18] After all, given that the DMA does not allow any efficiency defense against the listed prohibitions, even a successful appeal against an antitrust decision would be a Pyrrhic victory. From the opposite perspective, the same applies to the European Commission, which may decide to save time, costs, and risks by dropping an ongoing case against a company designated as a gatekeeper under the DMA, knowing that the conduct under investigation will be prohibited in any case.

Nonetheless, despite the lack of any final decision on sherlocking, these antitrust assessments remain relevant. As already mentioned, the DMA does not displace competition law and, in any case, dominant platforms not designated as gatekeepers under the DMA still may face antitrust investigations over sherlocking. This applies even more for jurisdictions, such as the United States, that are evaluating DMA-like legislative initiatives (e.g., the American Innovation and Choice Online Act, or “AICOA”).

Against this background, drawing on recent EU cases, this paper questions the alleged anticompetitive implications of sherlocking, as well as claims that the practice fails to comply with existing antitrust rules.

First, the paper illustrates that prohibitions on the use of nonpublic third-party business data would cover two different theories that should be analyzed separately. Whereas a broader case involves all the business decisions adopted by a dominant platform because of such preferential access (e.g., the launch of new products or services, the development or cessation of existing products or services, the calibration of pricing and management systems), a more specific case deals solely with the adoption of a copycat strategy. By conflating these theories in support of a blanket ban that condemns any use of nonpublic third-party business data, EU antitrust authorities are fundamentally motivated by the same policy goal pursued by the DMA—i.e., to impose a neutrality regime on large online platforms. The competitive implications differ significantly, however, as adopting copycat strategies may only affect intra-brand competition, while using said data to improve other business decisions could also affect inter-platform competition.

Second, the paper shows that, in both of these scenarios, the welfare effects of sherlocking are unclear. Notably, exploiting certain data to better understand the market could help a platform to develop new products and services, to improve existing products and services, or more generally to be more competitive with respect to both business users and other platforms. As such outcomes would benefit consumers in terms of price and quality, any competitive advantage achieved by the hybrid platform could be considered unlawful only if it is not achieved on the merits. In a similar vein, if sherlocking is used by a hybrid platform to deliver replicas of its business users’ products and services, that would likely provide short-term procompetitive effects benefitting consumers with more choice and lower prices. In this case, the only competitive harm that would justify an antitrust intervention resides in (uncertain) negative long-term effects on innovation.

As a result, an outright ban on sherlocking, such as the one enshrined in the DMA, is in any case economically unsound, since it would clearly harm consumers.

The paper is structured as follows. Section II describes the recent antitrust investigations of sherlocking, illustrating the various scenarios that might include the use of third-party business data. Section III investigates whether sherlocking may be considered outside the scope of competition on the merits for bringing competitive advantages to platforms solely because of their hybrid business model. Section IV analyzes sherlocking as a copycat strategy by investigating the ambiguous welfare effects of copying in digital markets and providing an antitrust assessment of the practice at issue. Section V concludes.

II. Antitrust Proceedings on Sherlocking: Platform Neutrality and Copycat Competition

Policymakers’ interest in sherlocking is part of a larger debate over potentially unfair strategies that large online platforms may deploy because of their dual role as an unavoidable trading partner for business users and a rival in complementary markets.

In this scenario, as summarized in Table 1, the DMA outlaws sherlocking, establishing that to “prevent gatekeepers from unfairly benefitting from their dual role,”[19] they are restrained from using, in competition with business users, “any data that is not publicly available that is generated or provided by those business users in the context of their use of the relevant core platform services or of the services provided together with, or in support of, the relevant core platform services, including data generated or provided by the customers of those business users.”[20] Recital 46 further clarifies that the “obligation should apply to the gatekeeper as a whole, including but not limited to its business unit that competes with the business users of a core platform service.”

A similar provision was included in the American Innovation and Choice Online Act (AICOA), which was considered, but not ultimately adopted, in the 117th U.S. Congress. AICOA, however, would have limited the scope of the ban to products or services that compete with those offered by business users.[21] Concerns about copycat strategies were also reported in the U.S. House of Representatives’ investigation of the state of competition in digital markets as supporting the request for structural-separation remedies and line-of-business restrictions to eliminate conflicts of interest where a dominant intermediary enters markets that place it in competition with dependent businesses.[22] Interestingly, however, in the recent complaint filed by the U.S. Federal Trade Commission (FTC) and 17 state attorneys general against Amazon that accuses the company of having deployed an interconnected strategy to block off every major avenue of competition (including price, product selection, quality, and innovation), there is no mention of sherlocking among the numerous unfair practices under investigation.[23]

Evaluating regulatory-reform proposals for digital markets, the Australian Competition and Consumer Commission (ACCC) also highlighted the risk of sherlocking, arguing that it could have an adverse effect on competition, notably on rivals’ ability to compete, when digital platforms exercise their strong market position to utilize nonpublic data to free ride on the innovation efforts of their rivals.[24] Therefore, the ACCC suggested adopting service-specific codes to address self-preferencing by, for instance, imposing data-separation requirements to restrain dominant app-store providers from using commercially sensitive data collected from the app-review process to develop their own apps.[25]

Finally, on a comparative note, it is also useful to mention the proposals advanced by the Japanese Fair Trade Commission (JFTC) in its recent market-study report on mobile ecosystems.[26] In order to ensure equal footing among competitors, the JFTC specified that its suggestion to prevent Google and Apple from using nonpublic data generated by other developers’ apps aims at pursuing two purposes. Such a ban would, indeed, concern not only use of the data for the purpose of developing competing apps, products, and services, but also its use for developing their own apps, products, and services.

TABLE 1: Legislative Initiatives and Proposals to Ban Sherlocking

As noted above, sherlocking recently emerged as an antitrust offense in three investigations launched by the European Commission and the UK CMA.

In the first case, Amazon’s alleged reliance on marketplace sellers’ nonpublic business data has been claimed to distort fair competition on its platform and prevent effective competition. In its preliminary findings, the Commission argued that Amazon takes advantage of its hybrid business model, leveraging its access to nonpublic third-party sellers’ data (e.g., the number of ordered and shipped units of products; sellers’ revenues on the marketplace; the number of visits to sellers’ offers; data relating to shipping, to sellers’ past performance, and to other consumer claims on products, including the activated guarantees) to adjust its retail offers and strategic business decisions to the detriment of third-party sellers, which are direct competitors on the marketplace.[27] In particular, the Commission was concerned that Amazon uses such data for its decision to start and end sales of a product, for its pricing system, for its inventory-planning and management system, and to identify third-party sellers that Amazon’s vendor-recruitment teams should approach to invite them to become direct suppliers to Amazon Retail. To address the data-use concern, Amazon committed not to use nonpublic data relating to, or derived from, independent sellers’ activities on its marketplace for its retail business and not to use such data for the purposes of selling branded goods, as well as its private-label products.[28]

A parallel investigation ended with similar commitments in the UK.[29] According to the UK CMA, Amazon’s access to and use of nonpublic seller data could result in a competitive advantage for Amazon Retail arising from its operation of the marketplace, rather than from competition on the merits, and may lead to relevant adverse effects on competition. Notably, it was alleged this could result in a reduction in the scale and competitiveness of third-party sellers on the Amazon Marketplace; a reduction in the number and range of product offers from third-party sellers on the Amazon Marketplace; and/or less choice for consumers, due to them being offered lower quality goods and/or paying higher prices than would otherwise be the case.

It is also worth mentioning that, by determining that Amazon is an undertaking of paramount significance for competition across markets, the Bundeskartellamt emphasized the competitive advantage deriving from Amazon’s access to nonpublic data, such as Glance Views, sales figures, sale quantities, cost components of products, and reorder status.[30] Among other things, with particular regard to Amazon’s hybrid role, the Bundeskartellamt noted that the preferential access to competitively sensitive data “opens up the possibility for Amazon to optimize its own-brand assortment.”[31]

A second investigation involved Apple and its App Store rule.[32] According to the European Commission, the mandatory use of Apple’s own proprietary in-app purchase system (IAP) would, among other things, grant Apple full control over the relationship its competitors have with customers, thus disintermediating those competitors from customer data and allowing Apple to obtain valuable data about the activities and offers of its competitors.

Finally, Meta faced antitrust proceedings in both the EU and the UK.[33] The focus was on Facebook Marketplace—i.e., an online classified-ads service that allows users to advertise goods for sale. According to the European Commission and the CMA, Meta unilaterally imposes unfair trading conditions on competing online-classified ads services that advertise on Facebook or Instagram. These terms and conditions, which authorize Meta to use ads-related data derived from competitors for the benefit of Facebook Marketplace, are considered unjustified, as they impose an unnecessary burden on competitors and only benefit Facebook Marketplace. The suspicion is that Meta has used advertising data from Facebook Marketplace competitors for the strategic planning, product development, and launch of Facebook Marketplace, as well as for Marketplace’s operation and improvement.

Overall, these investigations share many features. The concerns about third-party business-data use, as well as about other forms of self-preferencing, revolve around the competitive advantages that accrue to a dominant platform because of its dual role. Such advantages are considered unfair, as they are not the result of the merits of a player, but derived purely and simply from its role as an important gateway to reach end users. Moreover, this access to valuable business data is not reciprocal. The feared risk is the marginalization of business users competing with gatekeepers on the gatekeepers’ platforms and, hence, the alleged harm to competition is the foreclosure of rivals in complementary markets (horizontal foreclosure).

The focus of these investigations was well-illustrated by the European Commission’s decision on Amazon’s practice.[34] The Commission’s concern was about the “data delta” that Amazon may exploit, namely the additional data related to third-party sellers’ listings and transactions that are not available to, and cannot be replicated by, the third-party sellers themselves, but are available to and used by Amazon Retail for its own retail operations.[35] In contrast to Amazon Retail—which, according to the Commission’s allegations, would have full access to and would use such individual, real-time data of all its third-party sellers to calibrate its own retail decisions—sellers would have access only to their own individual listings and sales data. As a result, the Commission came to the (preliminary) conclusion that real-time access to and use of such volume, variety, and granularity of non-publicly available data from its retail competitors generates a significant competitive advantage for Amazon Retail in each of the different decisional processes that drive its retail operations.[36]

On a closer look, however, while antitrust authorities seem to target the use of nonpublic third-party business data as a single theory of harm, their allegations cover two different scenarios along the lines of what has already been examined with reference to the international legislative initiatives and proposals. Indeed, the Facebook Marketplace case does not involve an allegation of copying, as Meta is accused of gathering data from its business users to launch and improve its ads service, instead of reselling goods and services.

FIGURE 1: Sherlocking in Digital Markets

As illustrated above in Figure 1, while the claim in the latter scenario is that the preferential data use would help dominant players calibrate business decisions in general, the former scenario instead involves the use of such data for a pure copycat strategy of an entire product or service, or some of its specific features.

In both scenarios, the aim of the investigations is to ensure platform neutrality. Accordingly, as shown by the accepted commitments, the envisaged solution for antitrust authorities is to impose data-separation requirements that restrain dominant platforms from using third-party commercially sensitive data. Putting aside that these investigations concluded with commitments from the firms, however, their chances of success before a court differ significantly depending on whether they challenge a product-imitation strategy or any business decision adopted because of the “data delta.”

A. Sherlocking and Unconventional Theories of Harm for Digital Markets

Before analyzing how existing competition-law rules could be applied to the various scenarios involving the use of third-party business data, it is worth providing a brief overview of the framework in which the assessment of sherlocking is conducted. As competition in the digital economy is increasingly competition among ecosystems,[37] a lively debate has emerged over the capacity of traditional antitrust analysis to adequately capture the peculiar features of digital markets. Indeed, the combination of strong economies of scale and scope; indirect network effects; data advantages and synergies across markets; and portfolio effects that facilitate ecosystem development all contribute to making digital markets highly concentrated, prone to tipping, and not easily contestable.[38] As a consequence, it has been suggested that addressing these distinctive features of digital markets requires an overhaul of the antitrust regime.

These discussions turn on the antitrust toolkit and the theories of harm used to illustrate whether and how a particular practice, agreement, or merger is anticompetitive. Notably, at issue is whether traditional antitrust theories of harm are fit for purpose or whether novel theories of harm should be developed in response to emerging digital ecosystems. The latter would require looking at the competitive impact of expanding, protecting, or strengthening an ecosystem’s position, and particularly whether such expansion serves to exploit a network of capabilities and to control access to key inputs and components.[39]

A significant portion of recent discussions around developing novel theories of harm to better address the characteristics of digital-business models and markets has been devoted to the topic of merger control—in part a result of the impressive number of acquisitions observed in recent years.[40] In particular, the focus has been on analyzing conglomerate mergers that involve acquiring a complementary or unrelated asset, which have traditionally been assumed to raise less-significant competition concerns.

In this regard, an ecosystem-based theory seems to have guided both the Bundeskartellamt in its assessment of Meta’s acquisition of Kustomer[41] and the CMA in Microsoft/Activision.[42] A more recent example is the European Commission’s decision to prohibit the proposed Booking/eTraveli merger, where the Commission explicitly noted that the transaction would have allowed Booking to expand its travel-services ecosystem.[43] The Commission’s concerns were related primarily to the so-called “envelopment” strategy, in which a prominent platform within a specific market broadens its range of services into other markets where there is a significant overlap of customer groups already served by the platform.[44]

Against this background, putative self-preferencing harms represent one of the European Commission’s primary (albeit contentious)[45] attempts to develop new theories of harm built on conglomerate platforms’ ability to bundle services or use data from one market segment to inform product development in another.[46] Originally formulated in the Google Shopping decision,[47] the theory of harm of (leveraging through) self-preferencing has subsequently inspired the DMA, which targets different forms of preferential treatment, including sherlocking.

In particular, it is asserted that platforms may use self-preferencing to adopt a leveraging strategy with a twofold anticompetitive effect—that is, excluding or impeding rivals from competing with the platform (defensive leveraging) and extending the platform’s market power into associated markets (offensive leveraging). These goals can be pursued because of the unique role that some large digital platforms play. That is, they not only enjoy strategic market status by controlling ecosystems of integrated complementary products and services, which are crucial gateways for business users to reach end users, but they also perform a dual role as both a critical intermediary and a player active in complementors’ markets. Therefore, conflicts of interest may provide incentives for large vertically integrated platforms to favor their own products and services over those of their competitors.[48]

The Google Shopping theory of harm, while not yet validated by the Court of Justice of the European Union (CJEU),[49] has also found its way into merger analysis, as demonstrated by the European Commission’s recent assessment of iRobot/Amazon.[50] In its statement of objections, the Commission argued that the proposed acquisition of iRobot may give Amazon the ability and incentive to foreclose iRobot’s rivals by engaging in several foreclosing strategies to prevent them from selling robot vacuum cleaners (RVCs) on Amazon’s online marketplace and/or by degrading such rivals’ access to that marketplace. In particular, the Commission found that Amazon could deploy such self-preferencing strategies as delisting rival RVCs; reducing rival RVCs’ visibility in both organic and paid results displayed in Amazon’s marketplace; limiting access to certain widgets or commercially attractive labels; and/or raising the costs of iRobot’s rivals to advertise and sell their RVCs on Amazon’s marketplace.[51]

Sherlocking belongs to this framework of analysis and can be considered a form of self-preferencing, specifically because of the lack of reciprocity in accessing sensitive data.[52] Indeed, while gatekeeper platforms have access to relevant nonpublic third-party business data as a result of their role as unavoidable trading partners, they leverage this information exclusively, without sharing it with third-party sellers, thus further exacerbating an already uneven playing field.[53]

III. Sherlocking for Competitive Advantage: Hybrid Business Model, Neutrality Regimes, and Competition on the Merits

Insofar as prohibitions of sherlocking center on the competitive advantages that platforms enjoy because of their dual role—thereby allowing some players to better calibrate their business decisions due to their preferential access to business users’ data—it should be noted that competition law does not impose a general duty to ensure a level playing field.[54] Further, a competitive advantage does not, in itself, amount to anticompetitive foreclosure under antitrust rules. Rather, foreclosure must not only be proved (in terms of actual or potential effects) but also assessed against potential benefits for consumers in terms of price, quality, and choice of new goods and services.[55]

Indeed, not every exclusionary effect is necessarily detrimental to competition.[56] Competition on the merits may, by definition, lead to the departure from the market or the marginalization of competitors that are less efficient and therefore less attractive to consumers from the point of view of, among other things, price, choice, quality, or innovation.[57] Automatically classifying any conduct with exclusionary effects as anticompetitive could well become a means to protect less-capable, less-efficient undertakings and would in no way protect more meritorious undertakings—thereby potentially hindering a market’s competitiveness.[58]

As recently clarified by the CJEU regarding the meaning of “competition on the merits,” any practice that, in its implementation, holds no economic interest for a dominant undertaking except that of eliminating competitors must be regarded as outside the scope of competition on the merits.[59] Referring to the cases of margin squeezes and essential facilities, the CJEU added that the same applies to practices that a hypothetical equally efficient competitor is unable to adopt because that practice relies on using resources or means inherent to the holding of such a dominant position.[60]

Therefore, while antitrust cases on sherlocking set out to ensure a level playing field and platform neutrality—and accordingly center on the competitive advantages that a platform enjoys because of its dual role—merely implementing a hybrid business model does not automatically put such practices outside the scope of competition on the merits. The only exception, according to the interpretation provided in Bronner, is the presence of an essential facility—i.e., an input whose access should be considered indispensable, as there are no technical, legal, or economic obstacles capable of making it impossible, or even unreasonably difficult, to duplicate it.[61]

As a result, unless it is proved that the hybrid platform is an essential facility, sherlocking and other forms of self-preferencing cannot be considered prima facie outside the scope of competition on the merits, or otherwise unlawful. Rather, any assessment of sherlocking demands the demonstration of anticompetitive effects, which in turn requires finding an impact on efficient firms’ ability and incentive to compete. In the scenario at-issue, for instance, the access to certain data may allow a platform to deliver new products or services; to improve existing products or services; or more generally to compete more efficiently not only with respect to the platform’s business users, but also against other platforms. Such an increase in both intra-platform and inter-platform competition would benefit consumers in terms of lower prices, better quality, and a wider choice of new or improved goods and services—i.e., competition on the merits.[62]

In Facebook Marketplace, the European Commission and UK CMA challenged the terms and conditions governing the provision of display-advertising and business-tool services to which Meta required its business customers to sign up.[63] In their view, Meta abused its dominant position by imposing unfair trading conditions on its advertising customers, which authorized Meta to use ads-related data derived from the latter in a way that could afford Meta a competitive advantage on Facebook Marketplace that would not have arisen from competition on the merits. Notably, antitrust authorities argued that Meta’s terms and conditions were unjustified, disproportionate, and unnecessary to provide online display-advertising services on Meta’s platforms.

Therefore, rather than directly questioning the platform’s dual role or hybrid business model, the European Commission and UK CMA decided to rely on traditional case law, which considers unfair those clauses that are unjustifiably unrelated to the purpose of the contract, unnecessarily limit the parties’ freedom, are disproportionate, or are unilaterally imposed or seriously opaque.[64] This demonstrates that, outside the theory of harm premised on unfair terms and conditions, a hybrid platform’s use of nonpublic third-party business data to improve its own business decisions is generally consistent with antitrust provisions. Hence, an outright ban would be unjustified.

IV. Sherlocking to Mimic Business Users’ Products or Services

The second, and more intriguing, sherlocking scenario is illustrated by the Amazon Marketplace investigations and regards the original meaning of sherlocking—i.e., where a data advantage is used by a hybrid platform to mimic its business users’ products or services.

Where sherlocking charges assert that the practice allows some platforms to use business users’ data to compete against them by replicating their products or services, it should not be overlooked that the welfare effects of such a copying strategy are ambiguous. While the practice could benefit consumers in the short term by lowering prices and increasing choice, it may discourage innovation over the longer term if third parties anticipate being copied whenever they deliver successful products or services. Therefore, the success of an antitrust investigation essentially relies on demonstrating a harm to innovation that would induce business users to leave the market or stop developing their products and services. In other words, antitrust authorities should be able to demonstrate that, by allowing dominant platforms to free ride on their business users’ innovation efforts, sherlocking would negatively affect rivals’ ability to compete.

A. The Welfare Effects of Copying

The tradeoff between the short- and long-term welfare effects of copying has traditionally been analyzed in the context of the benefits and costs generated by intellectual-property protection.[65] In particular, the economic literature investigating the optimal life of patents[66] and copyrights[67] focuses on the efficient balance between dynamic benefits associated with innovation and the static costs of monopoly power granted by IPRs.

More recently, product imitation has instead been investigated in the different scenario of digital markets, where dominant platforms adopting a hybrid business model may use third-party sellers’ market data to design and promote their own products over their rivals’ offerings. Indeed, some studies report that large online platforms may attempt to protect their market position by creating “kill zones” around themselves—i.e., by acquiring, copying, or eliminating their rivals.[68] In such a novel setting, the welfare effects of copying are assessed regardless of the presence and the potential enforcement of IPRs, but within a strategy aimed at excluding rivals by exploiting the dual role of both umpire and player to get preferential access to sensitive data and free ride on their innovative efforts.[69]

Even in this context, however, a challenging tradeoff should be considered. Indeed, while in the short term, consumers may benefit from the platform’s imitation strategy in terms of lower prices and higher quality, they may be harmed in the longer term if third parties are discouraged from delivering new products and services. As a result, while there is empirical evidence on hybrid platforms successfully entering into third parties’ adjacent market segments,[70] the extant academic literature finds the welfare implications of such moves to be ambiguous.

A first strand of literature attempts to estimate the welfare impact of the hybrid business model. Notably, Andre Hagiu, Tat-How Teh, and Julian Wright elaborated a model to address the potential implications of an outright ban on platforms’ dual mode, finding that such a structural remedy may harm consumer surplus and welfare even where the platform would otherwise engage in product imitation and self-preferencing.[71] According to the authors, banning the dual mode does not restore the third-party seller’s innovation incentives or the effective price competition between products, which are the putative harms caused by imitation and self-preferencing. Therefore, the authors’ evaluation was that interventions specifically targeting product imitation and self-preferencing were preferable.

Germán Gutiérrez suggested that banning the dual model would generate hardly any benefits for consumers, showing that, in the Amazon case, interventions that eliminate either the Prime program or product variety are likely to decrease welfare.[72]

Further, analyzing Amazon’s business model, Federico Etro found that the platform’s and consumers’ incentives are correctly aligned, and that Amazon’s business model of hosting sellers and charging commissions prevents the company from gaining through systematic self-preferencing for its private-label and first-party products.[73] In the same vein, looking at its business model and monetization strategy, Patrick Andreoli-Versbach and Joshua Gans argued that Amazon does not have an obvious incentive to self-preference.[74] Indeed, Amazon’s profitability data show that, on average, the company’s operating margin is higher on third-party sales than on first-party retail sales.

Looking at how modeling details may yield different results with regard to the benefits and harms of the hybrid business model, Simon Anderson and Özlem Bedre-Defoile maintain that the platform’s choice to sell its own products benefits consumers by lowering prices when a monopoly platform hosts competitive fringe sellers, regardless of the platform’s position as a gatekeeper, whether sellers have an alternate channel to reach consumers, or whether alternate channels are perfect or imperfect substitutes for the platform channel.[75] On the other hand, the authors argued that platform product entry might harm consumers when a big seller with market power sells on its own channel and also on the platform. Indeed, in that case, the platform setting a seller fee before the big seller prices its differentiated products introduces double markups on the big seller’s platform-channel price and leaves some revenue to the big seller.

Studying whether Amazon engages in self-preferencing on its marketplace by favoring its own brands in search results, Chiara Farronato, Andrey Fradkin, and Alexander MacKay demonstrate empirically that Amazon brands remain about 30% cheaper and have 68% more reviews than other similar products.[76] The authors acknowledge, however, that their findings do not imply that consumers are hurt by Amazon brands’ position in search results.

Another strand of literature specifically tackles the welfare effects of sherlocking. In particular, Erik Madsen and Nikhil Vellodi developed a theoretical framework to demonstrate that a ban on insider imitation can either stifle or stimulate innovation, depending on the nature of innovation.[77] Specifically, the ban could stimulate innovation for experimental product categories, while reducing innovation in incremental product markets, since the former feature products with a large chance of superstar demand and the latter generate mostly products with middling demand.

Federico Etro maintains that the tradeoffs at-issue are too complex to be solved with simple interventions, such as bans on dual mode, self-preferencing, or copycatting.[78] Indeed, it is difficult to conclude that Amazon entry is biased to expropriate third-party sellers or that bans on dual mode, self-preferencing, or copycatting would benefit consumers, because they either degrade services and product variety or induce higher prices or commissions.

Similar results are provided by Jay Pil Choi, Kyungmin Kim, and Arijit Mukherjee, who developed a tractable model of a platform-run marketplace where the platform charges a referral fee to the sellers for access to the marketplace, and may also subsequently launch its own private-label product by copying a seller.[79] The authors found that a policy to either ban hybrid mode or only prohibit information use for the launch of private-label products may produce negative welfare implications.

Further, Radostina Shopova argues that, when introducing a private label, the marketplace operator does not have an incentive to distort competition and foreclose the outside seller, but does have an incentive to lower fees charged to the outside seller and to vertically differentiate its own product in order to protect the seller’s channel.[80] Even when the intermediary is able to perfectly mimic the quality of the outside seller and monopolize its product space, the intermediary prefers to differentiate its offer and chooses a lower quality for the private-label product. Accordingly, as the purpose of private labels is to offer a lower-quality version of products aimed at consumers with a lower willingness to pay, a marketplace operator does not have an incentive to distort competition in favor of its own product and foreclose the seller of the original higher-quality product.

In addition, according to Jean-Pierre Dubé, curbing development of private-label programs would harm consumers and Amazon’s practices amount to textbook retailing, as they follow an off-the-shelf approach to managing private-label products that is standard for many retail chains in the West.[81] As a result, singling out Amazon’s practices would set a double standard.

Interestingly, such findings about predictors and effects of Amazon’s entry in competition with third-party merchants on its own marketplace are confirmed by the only empirical study developed so far. In particular, analyzing the Home & Kitchen department of Germany’s version of Amazon Marketplace between 2016 and 2021, Gregory S. Crawford, Matteo Courthoud, Regina Seibel, and Simon Zuzek’s results suggest that Amazon’s entry strategy was more consistent with making Marketplace more attractive to consumers than expropriating third-party merchants.[82] Notably, the study showed that, comparing Amazon’s entry decisions with those of the largest third-party merchants, Amazon tends to enter low-growth and low-quality products, which is consistent with a strategy that seeks to make Marketplace more attractive by expanding variety, lessening third-party market power, and/or enhancing product availability. The authors therefore found that Amazon’s entry on Amazon Marketplace demonstrated no systematic adverse effects and caused a mild market expansion.

Massimo Motta and Sandro Shelegia explored interactions between copying and acquisitions, finding that the former (or the threat of copying) can modify the outcome of an acquisition negotiation.[83] According to their model, there could be both static and dynamic incentives for an incumbent to introduce a copycat version of a complementary product. The static rationale consists of lowering the price of the complementary product in order to capture more rents from it, while the dynamic incentive consists of harming a potential rival’s prospects of developing a substitute. The latter may, in turn, affect the direction the entrant takes toward innovation. Anticipating the incumbent’s copying strategy, the entrant may shift resources from improvements to compete with the incumbent’s primary product to developing complementary products.

Jingcun Cao, Avery Haviv, and Nan Li analyzed the opposite scenario—i.e., copycats that seek to mimic the design and user experience of incumbents’ successful products.[84] The authors find empirically that, on average, copycat apps do not have a significant effect on the demand for incumbent apps and that, as with traditional counterfeit products, they may generate a positive demand spillover toward authentic apps.

Massimo Motta also investigated the potential foreclosure effects of a copycat strategy adopted by platforms committed to non-discriminatory terms of access for third parties (e.g., Apple App Store, Google Play, and Amazon Marketplace).[85] Notably, according to Motta, when a third-party seller is particularly successful and the platform is unable to raise fees and commissions paid by that seller, the platform may prefer to copy its product or service to extract more profits from users, rather than rely solely on third-party sales. The author acknowledged, however, that even though this practice may create an incentive for self-preferencing, it does not necessarily have anticompetitive effects. Indeed, the welfare effects of the copying strategy are a priori ambiguous.[86] While, on the one hand, the platform’s copying of a third-party product benefits consumers by increasing variety and competition among products, on the other hand, copying might be wasteful for society, in that it entails a fixed cost and may discourage innovation if rivals anticipate that they will be systematically copied whenever they have a successful product.[87] Therefore, introducing a copycat version of a product offered by a firm in an adjacent market might be procompetitive.

B. Antitrust Assessment: Competition, Innovation, and Double Standards

The economic literature has demonstrated that the rationale and welfare effects of sherlocking by hybrid platforms are decidedly ambiguous. Against concerns about rivals’ foreclosure, some studies provide a different narrative, illustrating that such a strategy is more consistent with making the platform more attractive to consumers (by differentiating the quality and pricing of the offer) than with expropriating business users.[88] Furthermore, copies, imitations, and replicas undoubtedly benefit consumers with more choice and lower prices.

Therefore, the only way to consider sherlocking anticompetitive is to demonstrate that its long-term deterrent effects on innovation (i.e., reducing rivals’ incentives to invest in new products and services) outweigh consumers’ short-term advantages.[89] Moreover, deterrent effects must not be merely hypothetical, as a finding of abuse cannot be based on a mere possibility of harm.[90] In any case, such complex tradeoffs are at odds with a blanket ban.[91]

Moreover, assessments of the potential impact of sherlocking on innovation cannot disregard the role of IPRs—which are, by definition, the primary means of promoting innovation. From this perspective, intellectual-property protection is best characterized as another form of tradeoff. Indeed, the economic rationale of IPRs (in particular, of patents and copyrights) involves, among other things, a tradeoff between access and incentives—i.e., between short-term competitive restrictions and long-term innovative benefits.[92]

According to the traditional incentive-based theory of intellectual property, free riding would represent a dangerous threat that justifies the exclusive rights granted by intellectual-property protection. As a consequence, so long as copycat expropriation does not infringe IPRs, it should be presumed legitimate and procompetitive. Indeed, such free riding is more of an intellectual-property issue than a competitive concern.

In addition, to strike a fair balance between restricting competition and providing incentives to innovation, the exclusive rights granted by IPRs are not unlimited in terms of duration, nor in terms of lawful (although not authorized) uses of the protected subject matter. Under the doctrine of fair use, for instance, reverse engineering represents a legitimate way to obtain information about a firm’s product, even if the intended result is to produce a directly competing product that may steer customers away from the initial product and the patented invention.

Outside of reverse engineering, copying is legitimately exercised once IPRs expire, when copycat competitors can reproduce previously protected elements. As a result of the competitive pressure exerted by new rivals, holders of expired IPRs may react by seeking solutions designed to block or at least limit the circulation of rival products. They could, for example, request other IPRs to cover aspects or functionalities different from those previously protected. They could also bring (sometimes specious) legal action for infringement of the new IPR or for unfair competition by slavish imitation. For these reasons, there have been occasions where copycat competitors have received protection from antitrust authorities against sham litigation brought by IPR holders concerned about losing margins due to pricing pressure from copycats.[93]

Finally, within the longstanding debate on the intersection of intellectual-property protection and competition, EU antitrust authorities have traditionally been unsympathetic toward restrictions imposed by IPRs. The success of the essential-facility doctrine (EFD) is the most telling example of this attitude, as its application in the EU has been extended to IPRs. As a matter of fact, the EFD represents the main antitrust tool for overseeing intellectual property in the EU.[94]

After Microsoft, EU courts have substantially dismantled one of the “exceptional circumstances” previously elaborated in Magill and specifically introduced for cases involving IPRs, with the aim of safeguarding a balance between restrictions to access and incentives to innovate. Whereas the CJEU established in Magill that refusal to grant an IP license should be considered anticompetitive if it prevents the emergence of a new product for which there is potential consumer demand, in Microsoft, the General Court considered such a requirement met even when access to an IPR is necessary for rivals to merely develop improved products with added value.

Given this background, recent competition-policy concerns about sherlocking are surprising. To briefly recap, the practice at-issue increases competition in the short term, but may affect incentives to innovate in the long-term. With regard to the latter, however, the practice neither involves products protected by IPRs nor constitutes a slavish imitation that may be caught under unfair-competition laws.

The case of Amazon, which has received considerable media coverage, is illustrative of the relevance of IP protection. Amazon has been accused of cloning batteries, power strips, wool runner shoes, everyday sling bags, camera tripods, and furniture.[95] One may wonder what kind of innovation should be safeguarded in these cases against potential copies. Admittedly, such examples appear consistent with the findings of the already-illustrated empirical study conducted by Crawford et al. indicating that Amazon tends to enter low-quality products in order to expand variety on the Marketplace and to make it more attractive to consumers.

Nonetheless, if an IPR is involved, right holders are provided with proper means to protect their products against infringement. Indeed, one of the alleged targeted companies (Williams-Sonoma) did file a complaint for design and trademark infringement, claiming that Amazon had copied a chair (Orb Dining Chair) sold by its West Elm brand. According to Williams-Sonoma, the Upholstered Orb Office Chair—which Amazon began selling under its Rivet brand in 2018—was so similar that the ordinary observer would be confused by the imitation.[96] If, instead, the copycat strategy does not infringe any IPR, the potential impact on innovation might not be considered particularly worrisome—at least at first glance.

Further, neither the degree to which third-party business data are unavailable nor the degree to which they are relevant in facilitating copying is clear-cut. For instance, in the case of Amazon, public product reviews supply a great deal of information[97] and, regardless of the fact that a third party is selling a product on the Marketplace, anyone can obtain an item for the purposes of reverse engineering.[98]

In addition, antitrust authorities are used to intervening against opportunistic behavior by IPR holders. European competition authorities, in particular, have never before seemed particularly responsive to the motives of inventors and creators versus the need to encourage maximum market openness.

It should also be noted that cloning is a common strategy in traditional markets (e.g., food products)[99] and has been the subject of longstanding controversies between high-end fashion brands and fast-fashion brands (e.g., Zara, H&M).[100] Furthermore, brick-and-mortar retailers also introduce private labels and use other brands’ sales records in deciding what to produce.[101]

So, what makes sherlocking so different and dangerous when deployed in digital markets as to push competition authorities to contradict themselves?[102]

The double standard against sherlocking reflects the same concern and pursues the same goal as the various other attempts to forbid any form of self-preferencing in digital markets. Namely, antitrust investigations of sherlocking are fundamentally driven by the bias against hybrid and vertically integrated players. The investigations rely on the assumption that conflicts of interest have anticompetitive implications and that, therefore, platform neutrality should be promoted to ensure the neutrality of the competitive process.[103] Accordingly, hostility toward sherlocking may involve both of the illustrated scenarios—i.e., the use of nonpublic third-party business data either to inform any business decision or, more narrowly, to pursue copycat strategies.

As a result, however, competition authorities end up challenging a specific business model, rather than the specific practice at-issue, which brings undisputed competitive benefits in terms of lower prices and wider consumer choice, and which should therefore be balanced against potential exclusionary risks. As the CJEU has pointed out, the concept of competition on the merits:

…covers, in principle, a competitive situation in which consumers benefit from lower prices, better quality and a wider choice of new or improved goods and services. Thus, … conduct which has the effect of broadening consumer choice by putting new goods on the market or by increasing the quantity or quality of the goods already on offer must, inter alia, be considered to come within the scope of competition on the merits.[104]

Further, in light of the “as-efficient competitor” principle, competition on the merits may lead to “the departure from the market, or the marginalization of, competitors that are less efficient and so less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation.”[105]

It has been correctly noted that the “as-efficient competitor” principle is a reminder of what competition law is about and how it differs from regulation.[106] Competition law aims to protect a process, rather than engineering market structures to fulfill a particular vision of how an industry is to operate.[107] In other words, competition law does not target firms on the basis of size or status and does not infer harm from (market or bargaining) power or business model. Therefore, neither the dual role played by some large online platforms nor their preferential access to sensitive business data or their vertical integration, by themselves, create a competition problem. Competitive advantages deriving from size, status, power, or business model cannot be considered per se outside the scope of competition on the merits.

Some policymakers have sought to resolve these tensions in how competition law regards sherlocking by introducing or envisaging an outright ban. These initiatives and proposals have clearly been inspired by antitrust investigations, but for the wrong reasons. Instead of taking stock of the challenging tradeoffs between short-term benefits and long-term risks that an antitrust assessment of sherlocking requires, they blamed competition law for not providing effective tools to achieve the policy goal of platform neutrality.[108] The regulatory solution, therefore, merely serves to bypass the traditional burden of proof required by antitrust analysis and to achieve what competition-law enforcement cannot provide.

V. Conclusion

The bias against self-preferencing strikes again. Concerns about hybrid platforms’ potential conflicts of interest have led policymakers to seek prohibitions to curb different forms of self-preferencing, making the latter the symbol of the competition-policy zeitgeist in digital markets. Sherlocking shares this fate. Indeed, the DMA outlaws any use of business users’ nonpublic data and similar proposals have been advanced in the United States, Australia, and Japan. Further, like other forms of self-preferencing, such regulatory initiatives against sherlocking have been inspired by previous antitrust proceedings.

Drawing on these antitrust investigations, the present research shows the extent to which an outright ban on sherlocking is unjustified. Notably, the practice at-issue includes two different scenarios: the broad case in which a gatekeeper exploits its preferential access to business users’ data to better calibrate all of its business decisions and the narrow case in which such data is used to adopt a copycat strategy. In either scenario, the welfare effects and competitive implications of sherlocking are unclear.

Indeed, the use of certain data by a hybrid platform to improve business decisions generally should be classified as competition on the merits, and may yield an increase in both intra-platform (with respect to business users) and inter-platform (with respect to other platforms) competition. This would benefit consumers in terms of lower prices, better quality, and a wider choice of new or improved goods and services. In a similar vein, if sherlocking is used to deliver replicas of business users’ products or services, the anti-competitiveness of such a strategy may only result from a cumbersome tradeoff between short-term benefits (i.e., lower prices and wider choice) and negative long-term effects on innovation.

An implicit confirmation of the difficulties encountered in demonstrating the anti-competitiveness of sherlocking comes from the recent complaint filed by the FTC against Amazon.[109] Current FTC Chairwoman Lina Khan devoted a significant portion of her previous academic career to questioning Amazon's practices (including its decision to introduce private labels inspired by third-party products)[110] and to supporting the adoption of structural-separation remedies to tackle the conflicts of interest that induce platforms to exploit their "systemic informational advantage (gleaned from competitors)" to thwart rivals and strengthen their own position by introducing replica products.[111] Despite these premises, and although the FTC's complaint targets numerous practices said to form an interconnected strategy to block off every major avenue of competition, sherlocking is surprisingly off the radar.

Regulatory initiatives to ban sherlocking in order to ensure platform neutrality with respect to business users and a level playing field among rivals would sacrifice undisputed procompetitive benefits on the altar of policy goals that competition rules are not meant to pursue. Sherlocking therefore appears to be a perfect case study of the side effects of unwarranted interventions in digital markets.

[1] Giuseppe Colangelo, Antitrust Unchained: The EU’s Case Against Self-Preferencing, 72 GRUR International 538 (2023).

[2] Jacques Crémer, Yves-Alexandre de Montjoye, & Heike Schweitzer, Competition Policy for the Digital Era (2019), 7, https://op.europa.eu/en/publication-detail/-/publication/21dc175c-7b76-11e9-9f05-01aa75ed71a1/language-en (all links last accessed 3 Jan. 2024); UK Digital Competition Expert Panel, Unlocking Digital Competition (2019), 58, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[3] You’ve Been Sherlocked, The Economist (2012), https://www.economist.com/babbage/2012/07/13/youve-been-sherlocked.

[4] Regulation (EU) 2022/1925 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (2022), OJ L 265/1, Article 6(2).

[5] U.S. S. 2992, American Innovation and Choice Online Act (AICOA) (2022), Section 3(a)(6), available at https://www.klobuchar.senate.gov/public/_cache/files/b/9/b90b9806-cecf-4796-89fb-561e5322531c/B1F51354E81BEFF3EB96956A7A5E1D6A.sil22713.pdf. See also U.S. House of Representatives, Subcommittee on Antitrust, Commercial, and Administrative Law, Investigation of Competition in Digital Markets, Majority Staff Reports and Recommendations (2020), 164, 362-364, 378, available at https://democrats-judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf.

[6] Australian Competition and Consumer Commission, Digital Platform Services Inquiry Report on Regulatory Reform (2022), 125, https://www.accc.gov.au/about-us/publications/serial-publications/digital-platform-services-inquiry-2020-2025/digital-platform-services-inquiry-september-2022-interim-report-regulatory-reform.

[7] Japan Fair Trade Commission, Market Study Report on Mobile OS and Mobile App Distribution (2023), https://www.jftc.go.jp/en/pressreleases/yearly-2023/February/230209.html.

[8] European Commission, 10 Nov. 2020, Case AT.40462, Amazon Marketplace; see Press Release, Commission Sends Statement of Objections to Amazon for the Use of Non-Public Independent Seller Data and Opens Second Investigation into Its E-Commerce Business Practices, European Commission (2020), https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2077.

[9] Press Release, CMA Investigates Amazon Over Suspected Anti-Competitive Practices, UK Competition and Markets Authority (2022), https://www.gov.uk/government/news/cma-investigates-amazon-over-suspected-anti-competitive-practices.

[10] European Commission, 16 Jun. 2020, Case AT.40716, Apple – App Store Practices.

[11] Press Release, Commission Sends Statement of Objections to Meta over Abusive Practices Benefiting Facebook Marketplace, European Commission (2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7728; Press Release, CMA Investigates Facebook’s Use of Ad Data, UK Competition and Markets Authority (2021), https://www.gov.uk/government/news/cma-investigates-facebook-s-use-of-ad-data.

[12] DMA, supra note 4, Recital 10 and Article 1(6).

[13] GWB Digitalization Act, 18 Jan. 2021, Section 19a. On the risk of overlaps between the DMA and competition-law enforcement, see Giuseppe Colangelo, The European Digital Markets Act and Antitrust Enforcement: A Liaison Dangereuse, 47 European Law Review 597.

[14] GWB, supra note 13, Section 19a (2)(4)(b).

[15] Press Release, Commission Sends Statement of Objections to Apple Clarifying Concerns over App Store Rules for Music Streaming Providers, European Commission (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_1217.

[16] European Commission, 20 Dec. 2022, Case AT.40462; Press Release, Commission Accepts Commitments by Amazon Barring It from Using Marketplace Seller Data, and Ensuring Equal Access to Buy Box and Prime, European Commission (2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7777; UK Competition and Markets Authority, 3 Nov. 2023, Case No. 51184, https://www.gov.uk/cma-cases/investigation-into-amazons-marketplace.

[17] UK Competition and Markets Authority, 3 Nov. 2023, Case AT.51013, https://www.gov.uk/cma-cases/investigation-into-facebooks-use-of-data.

[18] See, e.g., Gil Tono & Lewis Crofts, Amazon Data Commitments Match DMA Obligations, EU’s Vestager Says, mLex (2022), https://mlexmarketinsight.com/news/insight/amazon-data-commitments-match-dma-obligation-eu-s-vestager-says (reporting that Commissioner Vestager stated that Amazon’s data commitments definitively appear to match what would be asked within the DMA).

[19] DMA, supra note 4, Recital 46.

[20] Id., Article 6(2) (also stating that, for the purposes of the prohibition, non-publicly available data shall include any aggregated and non-aggregated data generated by business users that can be inferred from, or collected through, the commercial activities of business users or their customers, including click, search, view, and voice data, on the relevant core platform services or on services provided together with, or in support of, the relevant core platform services of the gatekeeper).

[21] AICOA, supra note 5.

[22] U.S. House of Representatives, supra note 5; see also Lina M. Khan, The Separation of Platforms and Commerce, 119 Columbia Law Review 973 (2019).

[23] U.S. Federal Trade Commission, et al. v. Amazon.com, Inc., Case No. 2:23-cv-01495 (W.D. Wash., 2023).

[24] Australian Competition and Consumer Commission, supra note 6, 125.

[25] Id., 124.

[26] Japan Fair Trade Commission, supra note 7, 144.

[27] European Commission, supra note 8. But see also Amazon, Supporting Sellers with Tools, Insights, and Data (2021), https://www.aboutamazon.eu/news/policy/supporting-sellers-with-tools-insights-and-data (claiming that the company is just using aggregate (rather than individual) data: “Just like our third-party sellers and other retailers across the world, Amazon also uses data to run our business. We use aggregated data about customers’ experience across the store to continuously improve it for everyone, such as by ensuring that the store has popular items in stock, customers are finding the products they want to purchase, or connecting customers to great new products through automated merchandising.”)

[28] European Commission, supra note 16.

[29] UK Competition and Markets Authority, supra notes 9 and 16.

[30] Bundeskartellamt, 5 Jul. 2022, Case B2-55/21, paras. 493, 504, and 518.

[31] Id., para. 536.

[32] European Commission, supra note 10.

[33] European Commission, supra note 11; UK Competition and Markets Authority, supra note 11.

[34] European Commission, supra note 16. In a similar vein, see also UK Competition and Markets Authority, supra note 16, paras. 4.2-4.7.

[35] European Commission, supra note 16, para. 111.

[36] Id., para. 123.

[37] Crémer, de Montjoye, & Schweitzer, supra note 2, 33-34.

[38] See, e.g., Marc Bourreau, Some Economics of Digital Ecosystems, OECD Hearing on Competition Economics of Digital Ecosystems (2020), https://www.oecd.org/daf/competition/competition-economics-of-digital-ecosystems.htm; Amelia Fletcher, Digital Competition Policy: Are Ecosystems Different?, OECD Hearing on Competition Economics of Digital Ecosystems (2020).

[39] See, e.g., Cristina Caffarra, Matthew Elliott, & Andrea Galeotti, ‘Ecosystem’ Theories of Harm in Digital Mergers: New Insights from Network Economics, VoxEU (2023), https://cepr.org/voxeu/columns/ecosystem-theories-harm-digital-mergers-new-insights-network-economics-part-1 (arguing that, in merger control, the implementation of an ecosystem theory of harm would require assessing how a conglomerate acquisition can change the network of capabilities (e.g., proprietary software, brand, customer base, data) in order to evaluate how easily competitors can obtain alternative assets to those being acquired); for a different view, see Geoffrey A. Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 George Mason Law Review 1281 (2021).

[40] See, e.g., Viktoria H.S.E. Robertson, Digital Merger Control: Adapting Theories of Harm, European Competition Journal (forthcoming); Caffarra, Elliott, & Galeotti, supra note 39; OECD, Theories of Harm for Digital Mergers (2023), available at www.oecd.org/daf/competition/theories-of-harm-for-digital-mergers-2023.pdf; Bundeskartellamt, Merger Control in the Digital Age – Challenges and Development Perspectives (2022), available at https://www.bundeskartellamt.de/SharedDocs/Publikation/EN/Diskussions_Hintergrundpapiere/2022/Working_Group_on_Competition_Law_2022.pdf?__blob=publicationFile&v=2; Elena Argentesi, Paolo Buccirossi, Emilio Calvano, Tomaso Duso, Alessia Marrazzo, & Salvatore Nava, Merger Policy in Digital Markets: An Ex Post Assessment, 17 Journal of Competition Law & Economics 95 (2021); Marc Bourreau & Alexandre de Streel, Digital Conglomerates and EU Competition Policy (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3350512.

[41] Bundeskartellamt, 11 Feb. 2022, Case B6-21/22, https://www.bundeskartellamt.de/SharedDocs/Entscheidung/EN/Fallberichte/Fusionskontrolle/2022/B6-21-22.html;jsessionid=C0837BD430A8C9C8E04D133B0441EB95.1_cid362?nn=4136442.

[42] UK Competition and Markets Authority, Microsoft / Activision Blizzard Merger Inquiry (2023), https://www.gov.uk/cma-cases/microsoft-slash-activision-blizzard-merger-inquiry.

[43] See European Commission, Commission Prohibits Proposed Acquisition of eTraveli by Booking (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4573 (finding that a flight product is a crucial growth avenue in Booking’s ecosystem, which revolves around its hotel online-travel-agency (OTA) business, as it would generate significant additional traffic to the platform, thus allowing Booking to benefit from existing customer inertia and making it more difficult for competitors to contest Booking’s position in the hotel OTA market).

[44] Thomas Eisenmann, Geoffrey Parker, & Marshall Van Alstyne, Platform Envelopment, 32 Strategic Management Journal 1270 (2011).

[45] See, e.g., Colangelo, supra note 1, and Pablo Ibáñez Colomo, Self-Preferencing: Yet Another Epithet in Need of Limiting Principles, 43 World Competition 417 (2020) (investigating whether and to what extent self-preferencing could be considered a new standalone offense in EU competition law); see also European Commission, Digital Markets Act – Impact Assessment Support Study (2020), 294, https://op.europa.eu/en/publication-detail/-/publication/0a9a636a-3e83-11eb-b27b-01aa75ed71a1/language-en (raising doubts about the novelty of this new theory of harm, which seems similar to the well-established leveraging theories of harm of tying and bundling, and margin squeeze).

[46] European Commission, supra note 45, 16.

[47] European Commission, 27 Jun. 2017, Case AT.39740, Google Search (Shopping).

[48] See General Court, 10 Nov. 2021, Case T-612/17, Google LLC and Alphabet Inc. v. European Commission, ECLI:EU:T:2021:763, para. 155 (stating that the general principle of equal treatment obligates vertically integrated platforms to refrain from favoring their own services as opposed to rival ones; nonetheless, the ruling framed self-preferencing as discriminatory abuse).

[49] In the meantime, however, see Opinion of Advocate General Kokott, 11 Jan. 2024, Case C-48/22 P, Google v. European Commission, ECLI:EU:C:2024:14, paras. 90 and 95 (arguing that the self-preferencing of which Google is accused constitutes an independent form of abuse, albeit one that exhibits some proximity to cases involving margin squeezing).

[50] European Commission, Commission Sends Amazon Statement of Objections over Proposed Acquisition of iRobot (2023), https://ec.europa.eu/commission/presscorner/detail/en/IP_23_5990.

[51] The same concerns and approach have been shared by the CMA, although it reached a different conclusion, finding that the new merged entity would not have incentive to self-preference its own branded RVCs: see UK Competition and Markets Authority, Amazon / iRobot Merger Inquiry – Clearance Decision (2023), paras. 160, 188, and 231, https://www.gov.uk/cma-cases/amazon-slash-irobot-merger-inquiry.

[52] See European Commission, supra note 45, 304.

[53] Id., 313-314 (envisaging, among potential remedies, the imposition of a duty to make all data used by the platform for strategic decisions available to third parties); see also Désirée Klinger, Jonathan Bokemeyer, Benjamin Della Rocca, & Rafael Bezerra Nunes, Amazon’s Theory of Harm, Yale University Thurman Arnold Project (2020), 19, available at https://som.yale.edu/sites/default/files/2022-01/DTH-Amazon.pdf.

[54] Colangelo, supra note 1; see also Oscar Borgogno & Giuseppe Colangelo, Platform and Device Neutrality Regime: The New Competition Rulebook for App Stores?, 67 Antitrust Bulletin 451 (2022).

[55] See Court of Justice of the European Union (CJEU), 12 May 2022, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato, ECLI:EU:C:2022:379; 19 Apr. 2018, Case C-525/16, MEO v. Autoridade da Concorrência, ECLI:EU:C:2018:270; 6 Sep. 2017, Case C-413/14 P, Intel v. Commission, ECLI:EU:C:2017:632; 6 Oct. 2015, Case C-23/14, Post Danmark A/S v. Konkurrencerådet (Post Danmark II), ECLI:EU:C:2015:651; 27 Mar. 2012, Case C-209/10, Post Danmark A/S v. Konkurrencerådet (Post Danmark I), ECLI:EU:C:2012:172; for a recent overview of the EU case law, see also Pablo Ibáñez Colomo, The (Second) Modernisation of Article 102 TFEU: Reconciling Effective Enforcement, Legal Certainty and Meaningful Judicial Review, SSRN (2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4598161.

[56] CJEU, Intel, supra note 55, paras. 133-134.

[57] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 73.

[58] Opinion of Advocate General Rantos, 9 Dec. 2021, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato, ECLI:EU:C:2021:998, para. 45.

[59] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 77.

[60] Id., paras. 77, 80, and 83.

[61] CJEU, 26 Nov. 1998, Case C-7/97, Oscar Bronner GmbH & Co. KG v. Mediaprint Zeitungs- und Zeitschriftenverlag GmbH & Co. KG, Mediaprint Zeitungsvertriebsgesellschaft mbH & Co. KG and Mediaprint Anzeigengesellschaft mbH & Co. KG, ECLI:EU:C:1998:569.

[62] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 85.

[63] European Commission, supra note 11; UK Competition and Markets Authority, supra note 17, paras. 2.6, 4.3, and 4.7.

[64] See, e.g., European Commission, Case COMP D3/34493, DSD, para. 112 (2001) OJ L166/1; affirmed in GC, 24 May 2007, Case T-151/01, Der Grüne Punkt – Duales System Deutschland GmbH v. European Commission, ECLI:EU:T:2007:154 and CJEU, 16 Jul. 2009, Case C-385/07 P, ECLI:EU:C:2009:456; European Commission, Case IV/31.043, Tetra Pak II, paras. 105–08, (1992) OJ L72/1; European Commission, Case IV/29.971, GEMA III, (1982) OJ L94/12; CJEU, 27 Mar. 1974, Case 127/73, Belgische Radio en Televisie and société belge des auteurs, compositeurs et éditeurs v. SV SABAM and NV Fonior, ECLI:EU:C:1974:25, para. 15; European Commission, Case IV/26.760, GEMA II, (1972) OJ L166/22; European Commission, Case IV/26.760, GEMA I, (1971) OJ L134/15.

[65] See, e.g., Richard A. Posner, Intellectual Property: The Law and Economics Approach, 19 The Journal of Economic Perspectives 57 (2005).

[66] See, e.g., Richard Gilbert & Carl Shapiro, Optimal Patent Length and Breadth, 21 The RAND Journal of Economics 106 (1990); Pankaj Tandon, Optimal Patents with Compulsory Licensing, 90 Journal of Political Economy 470 (1982); Frederic M. Scherer, Nordhaus’ Theory of Optimal Patent Life: A Geometric Reinterpretation, 62 American Economic Review 422 (1972); William D. Nordhaus, Invention, Growth, and Welfare: A Theoretical Treatment of Technological Change, Cambridge, MIT Press (1969).

[67] See, e.g., Hal R. Varian, Copying and Copyright, 19 The Journal of Economic Perspectives 121 (2005); William R. Johnson, The Economics of Copying, 93 Journal of Political Economy 158 (1985); Stephen Breyer, The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs, 84 Harvard Law Review 281 (1970).

[68] Sai Krishna Kamepalli, Raghuram Rajan, & Luigi Zingales, Kill Zone, NBER Working Paper No. 27146 (2022), http://www.nber.org/papers/w27146; Massimo Motta & Sandro Shelegia, The “Kill Zone”: Copying, Acquisition and Start-Ups’ Direction of Innovation, Barcelona GSE Working Paper No. 1253 (2021), https://bse.eu/research/working-papers/kill-zone-copying-acquisition-and-start-ups-direction-innovation; U.S. House of Representatives, Subcommittee on Antitrust, Commercial, and Administrative Law, supra note 5, 164; Stigler Committee for the Study of Digital Platforms, Market Structure and Antitrust Subcommittee (2019), 54, https://research.chicagobooth.edu/stigler/events/single-events/antitrust-competition-conference/digital-platforms-committee; contra, see Geoffrey A. Manne, Samuel Bowman, & Dirk Auer, Technology Mergers and the Market for Corporate Control, 86 Missouri Law Review 1047 (2022).

[69] See also Howard A. Shelanski, Information, Innovation, and Competition Policy for the Internet, 161 University of Pennsylvania Law Review 1663 (2013), 1999 (describing as “forced free riding” the situation occurring when a platform appropriates innovation by other firms that depend on the platform for access to consumers).

[70] See Feng Zhu & Qihong Liu, Competing with Complementors: An Empirical Look at Amazon.com, 39 Strategic Management Journal 2618 (2018).

[71] Andrei Hagiu, Tat-How Teh, & Julian Wright, Should Platforms Be Allowed to Sell on Their Own Marketplaces?, 53 RAND Journal of Economics 297 (2022) (the model assumes that there is a platform that can function as a seller and/or a marketplace, a fringe of small third-party sellers that all sell an identical product, and an innovative seller that has a better product in the same category as the fringe sellers and can invest more in making its product even better; further, the model allows the different channels (on-platform or direct) and the different sellers to offer different values to consumers; therefore, third-party sellers (including the innovative seller) can choose whether to participate on the platform’s marketplace, and whenever they do, can price discriminate between consumers that come to it through the marketplace and consumers that come to it through the direct channel).

[72] See Germán Gutiérrez, The Welfare Consequences of Regulating Amazon (2022), available at http://germangutierrezg.com/Gutierrez2021_AMZ_welfare.pdf (building an equilibrium model where consumers choose products on the Amazon platform, while third-party sellers and Amazon endogenously set prices of products and platform fees).

[73] See Federico Etro, Product Selection in Online Marketplaces, 30 Journal of Economics & Management Strategy 614 (2021) (relying on a model where a marketplace such as Amazon provides a variety of products and can decide, for each product, whether to monetize sales by third-party sellers through a commission or become a seller on its platform, either by commercializing a private label version or by purchasing from a vendor and reselling as a first-party retailer; as acknowledged by the author, a limitation of the model is that it assumes that the marketplace can set the profit-maximizing commission on each product; if this is not the case, third-party sales would be imperfectly monetized, which would increase the relative profitability of entry).

[74] Patrick Andreoli-Versbach & Joshua Gans, Interplay Between Amazon Store and Logistics, SSRN (2023), https://ssrn.com/abstract=4568024.

[75] Simon Anderson & Özlem Bedre-Defolie, Online Trade Platforms: Hosting, Selling, or Both?, 84 International Journal of Industrial Organization 102861 (2022).

[76] Chiara Farronato, Andrey Fradkin, & Alexander MacKay, Self-Preferencing at Amazon: Evidence From Search Rankings, NBER Working Paper No. 30894 (2023), http://www.nber.org/papers/w30894.

[77] See Erik Madsen & Nikhil Vellodi, Insider Imitation, SSRN (2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3832712 (introducing a two-stage model where the platform publicly commits to an imitation policy and the entrepreneur observes this policy and chooses whether to innovate: if she chooses not to, the game ends and both players earn profits normalized to zero; otherwise, the entrepreneur pays a fixed innovation cost to develop the product, which she then sells on a marketplace owned by the platform).

[78] Federico Etro, The Economics of Amazon, SSRN (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4307213.

[79] Jay Pil Choi, Kyungmin Kim, & Arijit Mukherjee, “Sherlocking” and Information Design by Hybrid Platforms, SSRN (2023), https://ssrn.com/abstract=4332558 (the model assumes that the platform chooses its referral fee at the beginning of the game and that the cost of entry is the same for both the seller and the platform).

[80] Radostina Shopova, Private Labels in Marketplaces, 89 International Journal of Industrial Organization 102949 (2023) (the model assumes that the market structure is given exogenously and that the quality of the seller’s product is also exogenous; therefore, the paper does not investigate how entry by a platform affects the innovation incentives of third-party sellers).

[81] Jean-Pierre Dubé, Amazon Private Brands: Self-Preferencing vs Traditional Retailing, SSRN (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4205988.

[82] Gregory S. Crawford, Matteo Courthoud, Regina Seibel, & Simon Zuzek, Amazon Entry on Amazon Marketplace, CEPR Discussion Paper No. 17531 (2022), https://cepr.org/publications/dp17531.

[83] Motta & Shelegia, supra note 68.

[84] Jingcun Cao, Avery Haviv, & Nan Li, The Spillover Effects of Copycat Apps and App Platform Governance, SSRN (2023), https://ssrn.com/abstract=4250292.

[85] Massimo Motta, Self-Preferencing and Foreclosure in Digital Markets: Theories of Harm for Abuse Cases, 90 International Journal of Industrial Organization 102974 (2023).

[86] Id.

[87] Id.

[88] See, e.g., Crawford, Courthoud, Seibel, & Zuzek, supra note 82; Etro, supra note 78; Shopova, supra note 80.

[89] Motta, supra note 85.

[90] Servizio Elettrico Nazionale, supra note 55, paras. 53-54; Post Danmark II, supra note 55, para. 65.

[91] Etro, supra note 78; see also Herbert Hovenkamp, The Looming Crisis in Antitrust Economics, 101 Boston University Law Review 489 (2021), 543 (arguing that: “Amazon’s practice of selling both its own products and those of rivals in close juxtaposition almost certainly benefits consumers by permitting close price comparisons. When Amazon introduces a product such as AmazonBasics AAA batteries in competition with Duracell, prices will go down. There is no evidence to suggest that the practice is so prone to abuse or so likely to harm consumers in other ways that it should be categorically condemned. Rather, it is an act of partial vertical integration similar to other practices that the antitrust laws have confronted and allowed in the past.”)

[92] On the more complex economic rationale of intellectual property, see, e.g., William M. Landes & Richard A. Posner, The Economic Structure of Intellectual Property Law, Cambridge, Harvard University Press (2003).

[93] See, e.g., Italian Competition Authority, 18 Jul. 2023 No. 30737, Case A538 – Sistemi di sigillatura multidiametro per cavi e tubi, (2023) Bulletin No. 31.

[94] See CJEU, 6 Apr. 1995, Joined Cases C-241/91 P and 242/91 P, RTE and ITP v. Commission, ECLI:EU:C:1995:98; 29 Apr. 2004, Case C-418/01, IMS Health GmbH & Co. OHG v. NDC Health GmbH & Co. KG, ECLI:EU:C:2004:257; General Court, 17 Sep. 2007, Case T-201/04, Microsoft v. Commission, ECLI:EU:T:2007:289; CJEU, 16 Jul. 2015, Case C-170/13, Huawei Technologies Co. Ltd v. ZTE Corp., ECLI:EU:C:2015:477.

[95] See, e.g., Dana Mattioli, How Amazon Wins: By Steamrolling Rivals and Partners, Wall Street Journal (2022), https://www.wsj.com/articles/amazon-competition-shopify-wayfair-allbirds-antitrust-11608235127; Aditya Kalra & Steve Stecklow, Amazon Copied Products and Rigged Search Results to Promote Its Own Brands, Documents Show, Reuters (2021), https://www.reuters.com/investigates/special-report/amazon-india-rigging.

[96] Williams-Sonoma, Inc. v. Amazon.Com, Inc., Case No. 18-cv-07548 (N.D. Cal., 2018). The suit was eventually dismissed, as the parties entered into a settlement agreement: Williams-Sonoma, Inc. v. Amazon.Com, Inc., Case No. 18-cv-07548-AGT (N.D. Cal., 2020).

[97] Amazon Best Sellers, https://www.amazon.com/Best-Sellers/zgbs.

[98] Hovenkamp, supra note 91, 2015-2016.

[99] Nicolas Petit, Big Tech and the Digital Economy, Oxford, Oxford University Press (2020), 224-225.

[100] For a recent analysis, see Zijun (June) Shi, Xiao Liu, Dokyun Lee, & Kannan Srinivasan, How Do Fast-Fashion Copycats Affect the Popularity of Premium Brands? Evidence from Social Media, 60 Journal of Marketing Research 1027 (2023).

[101] Lina M. Khan, Amazon’s Antitrust Paradox, 126 Yale Law Journal 710 (2017), 782.

[102] See Massimo Motta & Martin Peitz, Intervention Triggers and Underlying Theories of Harm, in Market Investigations. A New Competition Tool for Europe? (M. Motta, M. Peitz, & H. Schweitzer, eds.), Cambridge, Cambridge University Press (2022), 16, 59 (arguing that, while it is unclear to what extent products or ideas are worth protecting and/or can be protected from sherlocking and whether such cloning is really harmful to consumers, this is clearly an area where an antitrust investigation for abuse of dominant position would not help).

[103] Khan, supra note 101, 780 and 783 (arguing that Amazon’s conflicts of interest tarnish the neutrality of the competitive process and that the competitive implications are clear, as Amazon is exploiting the fact that some of its customers are also its rivals).

[104] Servizio Elettrico Nazionale, supra note 55, para. 85.

[105] Post Danmark I, supra note 55, para. 22.

[106] Ibáñez Colomo, supra note 55, 21-22.

[107] Id.

[108] See, e.g., DMA, supra note 4, Recital 5 (complaining that the scope of antitrust provisions is “limited to certain instances of market power, for example dominance on specific markets and of anti-competitive behaviour, and enforcement occurs ex post and requires an extensive investigation of often very complex facts on a case by case basis.”).

[109] U.S. Federal Trade Commission, et al. v. Amazon.com, Inc., supra note 23.

[110] Khan, supra note 101.

[111] Khan, supra note 22, 1003, referring to Amazon, Google, and Meta.

Antitrust & Consumer Protection

Gus Hurwitz on Sports and Cord-Cutting

Presentations & Interviews

ICLE Director of Law & Economics Programs Gus Hurwitz was a guest on The Cyberlaw Podcast, where he discussed big news for cord-cutting sports fans, Amazon’s ad-data deal with Reach, a novel Federal Trade Commission case brought against Blackbaud, the Federal Communications Commission’s ban on AI-generated voice cloning in robocalls, and South Korea’s pause on implementation of its anti-monopoly platform act. Audio of the full episode is embedded below.

Telecommunications & Regulated Utilities

Schrems III: Gauging the Validity of the GDPR Adequacy Decision for the United States

ICLE Issue Brief

Executive Summary

The EU Court of Justice’s (CJEU) July 2020 Schrems II decision generated significant uncertainty, as well as enforcement actions in various EU countries, as it questioned the lawfulness of transferring data to the United States under the General Data Protection Regulation (GDPR)[1] while relying on “standard contractual clauses.”

President Joe Biden signed an executive order in October 2022 establishing a new data-protection framework to address this uncertainty. The European Commission responded in July 2023 by adopting an “Adequacy Decision” under Article 45(3) of the GDPR, formally deeming U.S. data-protection commitments to be adequate.

A member of the French Parliament has already filed the first legal challenge to the Adequacy Decision, and another, from Austrian privacy activist Max Schrems, is expected soon.

This paper discusses key legal issues likely to be litigated:

  1. The legal standard of an “adequate level of protection” for personal data. Although we know that the “adequate level” and “essential equivalence” of protection do not necessarily mean identical protection, the precise degree of flexibility remains an open question that the CJEU may need to clarify.
  2. The issue of proportionality of “bulk” data collection by the U.S. government. This paper examines whether the objectives pursued can be considered legitimate under EU law and, if so, whether existing CJEU precedents preclude such collection from being considered proportionate under the GDPR.
  3. The problem of effective redress—a cornerstone of the Schrems II decision. This paper explores debates around Article 47 of the EU Charter of Fundamental Rights, whether the new U.S. framework offers redress through an impartial tribunal, and whether EU persons can effectively access the redress procedure.
  4. The issue of access to information about U.S. intelligence agencies’ data-processing activities.

I. Introduction

Since the EU Court of Justice’s (CJEU) Schrems II decision,[2] it has been uncertain whether transfers of personal data from the EU to the United States are lawful. It’s true that U.S. intelligence-collection rules and practices have changed since 2016, when the European Commission issued its assessment in the “Privacy Shield Decision,” the facts to which the CJEU limited its reasoning. There has, however, also been a vocal movement among NGOs, European politicians, and—recently—national data-protection authorities to treat Schrems II as if it conclusively decided that exports of personal data to the United States could not be justified through standard contractual clauses (“SCCs”) in most contexts (i.e., when data can be accessed in the United States). This interpretation has led to a series of enforcement actions by national authorities in Austria and France, with more likely in several other member states (notably the “Google Analytics” cases, as well as the French “Doctolib/Amazon Web Services” case).[3]

Aiming to address this precarious situation, the White House adopted a new data-protection framework for intelligence-collection activities. On Oct. 7, 2022, President Joe Biden signed an executive order codifying that framework,[4] which had been awaited since U.S. and EU officials reached an agreement in principle on a new data-privacy framework in March 2022.[5] The European Commission responded by preparing a draft “Adequacy Decision” for the United States under Article 45(3) of the General Data Protection Regulation (GDPR), which was released in December 2022.[6] In July 2023, the European Commission formally adopted the Adequacy Decision.[7]

The first legal challenge to the decision has already been filed by Philippe Latombe, a member of the French Parliament and a commissioner of the French Data Protection Authority (CNIL).[8] Latombe is acting in his personal capacity, not as a French MP or a member of CNIL. He chose a direct action for annulment under Article 263 of the Treaty on the Functioning of the European Union (TFEU), which means that his case faces strict admissibility conditions. Based on precedent, it would not be surprising if the EU courts refused to consider the merits.[9] Regarding the substance of Latombe’s action, he described it in very general terms in his press release (working translation from French):

The text resulting from these negotiations violates the Charter of Fundamental Rights of the Union, due to the insufficient guarantees of respect for private and family life with regard to the bulk collection of personal data, and the General Data Protection Regulation (GDPR), due to the absence of guarantees of a right to an effective remedy and access to an impartial tribunal, the absence of a framework for automated decisions or lack of guarantees relating to the security of the data processed: all violations of our law which I develop in the 33-page brief (+ 283 pages of annexes) filed with the TJUE yesterday.[10]

Latombe also complained about the Adequacy Decision being published only in English.[11] Irrespective of the legal merits of that complaint, however, it is already moot because the Adequacy Decision was subsequently published in the Official Journal of the European Union in all official EU languages.[12]

Reportedly, Max Schrems also plans to bring a legal challenge against the Adequacy Decision,[13] as he has successfully done with the two predecessors of the current EU-US framework.[14] This time, however, Schrems plans to begin the suit in the Austrian courts, hoping for a speedy preliminary reference to the CJEU.[15]

This paper aims to present and discuss the key legal issues surrounding the European Commission’s Adequacy Decision, which are likely to be the subject of litigation. In Section II, I begin by problematizing the applicable legal standard of an “adequate level of protection” of personal data in a third country, noting that this issue remains open for the CJEU to address. This makes it more challenging to assess the Adequacy Decision’s chances before the Court and suggests that the conclusive tone adopted by some commentators is premature.

I then turn, in Section III, to the question of proportionality of bulk data collection by the U.S. government. I consider whether the objectives for which U.S. intelligence agencies collect personal data may constitute “legitimate objectives” under EU law. Secondly, I discuss whether bulk collection of personal data may be done in a way that does not jeopardize adequacy under the GDPR.

The second part of Section III is devoted to the problem of effective redress, which was the critical issue on which the CJEU relied in making its Schrems II decision. I note some confusion among the commentators about the precise role of Article 47 of the EU Charter of Fundamental Rights for a third-country adequacy assessment under the GDPR. I then outline the disagreement between the Commission and some commentators on whether the new U.S. data-protection framework provides redress through an independent and impartial tribunal with binding powers.

Finally, I discuss the issue of access to information about U.S. intelligence agencies’ data-processing activities.

II.      The Applicable Legal Standard: What Does ‘Adequacy’ Mean?

The overarching legal question that the CJEU will likely need to answer is whether the United States “ensures an adequate level of protection for personal data essentially equivalent to that guaranteed in the European Union by the GDPR, read in the light of Articles 7 and 8 of the [EU Charter of Fundamental Rights].”[16]

The words “essentially equivalent” are not to be found in the GDPR’s provision on adequacy decisions—i.e., in its Article 45, which merely refers to an “adequate level of protection” of personal data in a third country. Instead, we find them in the GDPR’s recital 104: “[t]he third country should offer guarantees ensuring an adequate level of protection essentially equivalent to that ensured within the Union (…).” This phrasing goes back to the CJEU’s Schrems I decision,[17] where the Court interpreted the old Data Protection Directive (Directive 95/46).[18] In Schrems I, the Court stated:

The word ‘adequate’ in Article 25(6) of Directive 95/46 admittedly signifies that a third country cannot be required to ensure a level of protection identical to that guaranteed in the EU legal order. However, as the Advocate General has observed in point 141 of his Opinion, the term ‘adequate level of protection’ must be understood as requiring the third country in fact to ensure, by reason of its domestic law or its international commitments, a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union by virtue of Directive 95/46 read in the light of the Charter.[19]

As Christakis, Propp, & Swire have noted,[20] the critical point that “a third country cannot be required to ensure a level of protection identical to that guaranteed in the EU legal order” was also accepted by Advocate General Øe in Schrems II.[21]

In 2020, the European Data Protection Board (EDPB) issued recommendations “on the European Essential Guarantees for surveillance measures.”[22] The recommendations aim to “form part of the assessment to conduct in order to determine whether a third country provides a level of protection essentially equivalent to that guaranteed within the EU.”[23] The EDPB’s document is, of course, not a source of law binding the Court of Justice, but it attempts to interpret the law in light of the CJEU’s jurisprudence. The Court is free not to follow the EDPB’s legal interpretation, and thus the importance of the recommendations should not be overstated, either for or against the Adequacy Decision.

While we know that the “adequate level” and “essential equivalence” of protection do not necessarily mean identical protection, the precise degree of flexibility remains an open question—and one that the EU Court may need to clarify to a much greater extent.

III.    Arguments Likely to Be Made Against the Adequacy Decision

A.     Proportionality and Bulk Data Collection

Under Article 52(1) of the EU Charter of Fundamental Rights, restrictions on the right to privacy and the protection of personal data must meet several conditions. They must be “provided for by law” and “respect the essence” of the right. Moreover, “subject to the principle of proportionality, limitations may be made only if they are necessary” and meet one of the objectives recognized by EU law or “the need to protect the rights and freedoms of others.”

The October 2022 executive order supplemented the phrasing “as tailored as possible” present in 2014’s Presidential Policy Directive on Signals Intelligence Activities (PPD-28) with language explicitly drawn from EU law: mentions of the “necessity” and “proportionality” of signals-intelligence activities related to “validated intelligence priorities.”[24]

Doubts have been raised, however, as to whether this is sufficient. I consider two potential issues. First, whether the objectives for which U.S. intelligence agencies collect personal data may constitute “legitimate objectives” under EU law. Second, whether the bulk collection of personal data may be done in a way that does not jeopardize adequacy under the GDPR.

1.        Legitimate objectives

In his analysis of the adequacy under EU law of the new U.S. data-protection framework, Douwe Korff argues that:

The purposes for which the Presidential Executive Order allows the use of signal intelligence and bulk data collection capabilities are clearly not limited to what the EU Court of Justice regards as legitimate national security purposes.[25]

Korff’s concern is that the legitimate objectives listed in the executive order are too broad and could be interpreted to include, e.g., criminal or economic threats, which do not rise to the level of “national security” as defined by the CJEU.[26] Korff referred to the EDPB Recommendations, which reference CJEU decisions in La Quadrature du Net and Privacy International. Unlike Korff, however, the EDPB stresses that those CJEU decisions were “in relation to the law of a Member State and not to a third country law.”[27]

In contrast, in Schrems II, the Court did not consider legitimate objectives when assessing whether a third country provides adequate protection. In its recommendations, the EDPB discussed the legal material that was available, i.e., the CJEU decisions on intra-EU matters. Still, this approach can be taken too far without sufficient care. Just because some guidance is available (on intra-EU issues), it does not follow that it applies to data transfers outside the EU. It is instructive to consider, in this context, what Advocate General Øe said in Schrems II:

It also follows from that judgment [Schrems I – MB], in my view, that the law of the third State of destination may reflect its own scale of values according to which the respective weight of the various interests involved may diverge from that attributed to them in the EU legal order. Moreover, the protection of personal data that prevails within the European Union meets a particularly high standard by comparison with the level of protection in force in the rest of the world. The ‘essential equivalence’ test should therefore in my view be applied in such a way as to preserve a certain flexibility in order to take the various legal and cultural traditions into account. That test implies, however, if it is not to be deprived of its substance, that certain minimum safeguards and general requirements for the protection of fundamental rights that follow from the Charter and the ECHR have an equivalent in the legal order of the third country of destination.[28]

Hence, exclusive focus on what EU law requires within the EU—however convenient this method may be—may be misleading in assessing the adequacy of a third country under Article 45.

Aside from the lack of direct guidance on the question of legitimate objectives under Article 45 GDPR, there is a second reason not to be too quick to conclude that the U.S. framework fails on this point. As the Commission noted in the Adequacy Decision:

(…) the legitimate objectives laid down in EO 14086 cannot by themselves be relied upon by intelligence agencies to justify signals intelligence collection but must be further substantiated, for operational purposes, into more concrete priorities for which signals intelligence may be collected. In other words, actual collection can only take place to advance a more specific priority. Such priorities are established through a dedicated process aimed at ensuring compliance with the applicable legal requirements, including those relating to privacy and civil liberties.[29]

It may be a formalistic mistake to consider the list of “legitimate objectives” in isolation from such additional requirements and processes. The assessment of third-country adequacy cannot be constrained by the mere choice of words, even if they seem to correspond to an established concept in EU law. (Note that this also applies to “necessity” and “proportionality” as used in the executive order.)

2.        Can bulk collection be ‘adequate’?

As Max Schrems’ organization NOYB stated in response to the executive order’s publication:

(…) there is no indication that US mass surveillance will change in practice. So-called “bulk surveillance” will continue under the new Executive Order (see Section 2 (c)(ii)) and any data sent to US providers will still end up in programs like PRISM or Upstream, despite of the CJEU declaring US surveillance laws and practices as not “proportionate” (under the European understanding of the word) twice.[30]

Korff echoed this view, noting, e.g.:

(…) – the EO [Executive Order – MB] does not stand in the way of the indiscriminate bulk collection of e-communications content data that the EU Court held does not respect the “essence” of data protection and privacy and that therefore, under EU law, must always be prohibited, even in relation to national security issues (as narrowly defined);

– the EO allows for indiscriminate bulk collection of e-communications metadata outside of the extreme scenarios in which the EU Court only, exceptionally, allows it in Europe; and

– the EO allows for indiscriminate bulk collection of those and other data for broadly defined not national security-related purposes in relation to which such collection is regarded as clearly not “necessary” or “proportionate” under EU law.[31]

The Schrems II Court indeed held that U.S. law and practices do not “[correlate] to the minimum safeguards resulting, under EU law, from the principle of proportionality.”[32] As, however, the EDPB noted in its opinion on a draft of the Adequacy Decision:

… the CJEU did not exclude, by principle, bulk collection, but considered in its Schrems II decision that for such bulk collection to take place lawfully, sufficiently clear and precise limits must be in place to delimit the scope of such bulk collection. (…)

The EDPB also recognizes that while replacing the PPD-28, the EO 14086 provides for new safeguards and limits to the collection and use of data collected outside the U.S., as the limitations of FISA or other more specific U.S. laws do not apply.[33]

As Korff observed, the CJEU has considered the question of bulk collection of electronic communication data, in an intra-EU context, in cases like Digital Rights Ireland[34] and La Quadrature du Net.[35] In Schrems I, the Court referenced Digital Rights Ireland, while stating:

(…) legislation permitting the public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life, as guaranteed by Article 7 of the Charter (…)[36]

This is potentially important, because the Court concluded the discussion included in this paragraph by saying that “a level of protection of fundamental rights essentially equivalent to that guaranteed in the EU legal order” is “apparent in particular from the preceding paragraphs.”[37] This could suggest that, as under the Data Protection Directive in Schrems I, the Court may see the issue of bulk collection of the contents of electronic communications as a serious problem for adequacy under Article 45 GDPR.

The Commission addressed this in the Adequacy Decision as follows:

(…) collection of data within the United States, which is the most relevant for the present adequacy finding as it concerns data that has been transferred to organisations in the U.S., must always be targeted (…) ‘Bulk collection’ may only be carried out outside the United States, on the basis of EO 12333.[38]

The Commission relies on a distinction between data collection that the U.S. government does within the United States and outside of the United States. This likely refers to an argument—discussed by, e.g., Christakis[39]—that adequacy assessment should only concern the processing of personal data that takes place due to a data transfer to the country in question. In other words, it should only concern domestic surveillance, not international surveillance (if personal data transferred from the EU would fall under domestic surveillance in that third country).

The Commission also made a second relevant point:

(…) bulk collection under EO 12333 takes place only when necessary to advance specific validated intelligence priorities and is subject to a number of limitations and safeguards designed to ensure that data is not accessed on an indiscriminate basis. Bulk collection is therefore to be contrasted to collection taking place on a generalised and indiscriminate basis (‘mass surveillance’) without limitations and safeguards.[40]

In the Commission’s view, there is a categorical distinction between “bulk collection” as practiced by the United States and the “generalized and indiscriminate” mass surveillance that the CJEU scrutinized in Digital Rights Ireland and other cases. This may seem like an unnatural reading of “generalized and indiscriminate,” given that it is meant not to apply to “the collection of large quantities of signals intelligence that, due to technical or operational considerations, is acquired without the use of discriminants (for example, without the use of specific identifiers or selection terms).”[41] There may, however, be analogies in EU law that could lead the Court to agree with the Commission on this point.

Consider the Court’s interpretation of the prohibition on “general monitoring” obligations from Article 15(1) of the eCommerce Directive.[42] In Glawischnig-Piesczek, the Court interpreted this rule as not precluding member states from requiring hosting providers to monitor all the content they host in order to identify content identical to “the content of information which was previously declared to be unlawful.”[43] In other words, “general monitoring” was interpreted as not covering indiscriminate processing of all data stored by a hosting provider in order to find content identical to some other content.[44] The Court adopted an analogous approach with respect to Article 17 of the Copyright Directive.[45] This suggests that, in somewhat similar contexts, the Court is willing to see activities that may technically appear to be “general” as “not general,” if some procedural or substantive limitations are present.

B.     Effective Redress

The lack of effective redress available to EU citizens against potential restrictions of their right to privacy from U.S. intelligence activities was central to the Schrems II decision. Among the Court’s key findings were that “PPD-28 does not grant data subjects actionable rights before the courts against the US authorities”[46] and that, under Executive Order 12333, “access to data in transit to the United States [is possible] without that access being subject to any judicial review.”[47]

The new executive order introduced redress mechanisms that include creating a civil-liberties-protection officer in the Office of the Director of National Intelligence (DNI), as well as a new Data Protection Review Court (DPRC). The DPRC is proposed as an independent review body that will make decisions binding on U.S. intelligence agencies. The old framework had sparked concerns about the independence of the DNI’s ombudsperson and about what were seen as insufficient safeguards against external pressures, including the threat of removal. Under the new framework, the independence and binding powers of the DPRC are grounded in regulations issued by the U.S. attorney general.

In a recent public debate, Max Schrems argued that the CJEU would have a difficult time finding that this judicial procedure satisfies Article 47 of the EU Charter, while at the same time holding that some courts in Poland and Hungary do not satisfy it.[48]

1.        Article 47 of the Charter ‘contributes’ to the benchmark level of protection

Schrems’ comment raises two distinct issues. First, Schrems seems to suggest that an adequacy decision can only be granted if the available redress mechanism satisfies the requirements of Article 47 of the Charter of Fundamental Rights.[49] But this is a hasty conclusion. The CJEU’s phrasing in Schrems II is more cautious:

…Article 47 of the Charter, which also contributes to the required level of protection in the European Union, compliance with which must be determined by the Commission before it adopts an adequacy decision pursuant to Article 45(1) of the GDPR.[50]

In arguing that Article 47 “also contributes to the required level of protection,” the Court is not saying that it determines the required level of protection. This is potentially significant, given that the standard of adequacy is “essential equivalence,” not procedural and substantive identity. Moreover, the Court did not say that the Commission must determine compliance with Article 47 itself, but with the “required level of protection” (which, again, must be “essentially equivalent”). Hence, it is far from clear how the CJEU’s jurisprudence interpreting Article 47 of the Charter is to be applied in the context of an adequacy assessment under Article 45 GDPR.

2.        Is there an independent and impartial tribunal with binding powers?

Second, there is the related but distinct question of whether the redress mechanism is effective under the applicable standard of “required level of protection.” Christakis, Propp, & Swire offer helpful analysis suggesting that it is, considering the proposed DPRC’s independence, effective investigative powers, and authority to issue binding determinations.[51] Gorski & Korff argue that this is not the case, because the DPRC is not “wholly autonomous” and “free from hierarchical constraint.”[52]

The Commission stated in the Adequacy Decision that the available avenues of redress “allow individuals to have access to their personal data, to have the lawfulness of government access to their data reviewed and, if a violation is found, to have such violation remedied, including through the rectification or erasure of their personal data.”[53] Moreover:

(…) the executive branch (the Attorney General and intelligence agencies) are barred from interfering with or improperly influencing the DPRC’s review. The DPRC itself is required to impartially adjudicate cases and operates according to its own rules of procedure (adopted by majority vote) (…)[54]

Likely the most serious objection to this assessment (raised by Gorski) is that:

(…) the court’s decisions can be overruled by the President. Indeed, the President could presumably overrule these decisions in secret, since the court’s opinions are not issued publicly.[55]

Given that Christakis, Propp, & Swire appear to disagree,[56] this question of U.S. law may require further scrutiny. Even if the scenario sketched by Gorski is theoretically possible, however, the CJEU may take the view that it would not be appropriate to rule based on the assumption that the U.S. government would act to mislead the EU. And without that assumption, the possibility of future changes to U.S. law appears to be adequately addressed by the adequacy-monitoring process (Article 45(4) GDPR).

3.        Do EU persons have effective access to the redress mechanism?

In the already-cited public debate, Max Schrems argued that it may be practically impossible for EU persons to benefit from the new redress mechanism, due to the requirements imposed on “qualifying complaints” under the executive order.[57] Presumably, Schrems implicitly refers to the requirements that a complaint:

(i) “alleges a covered violation has occurred that pertains to personal information of or about the complainant, a natural person, reasonably believed to have been transferred to the United States from a qualifying state after” the official designation of that country by the Attorney General;

(ii) includes “information that forms the basis for alleging that a covered violation has occurred, which need not demonstrate that the complainant’s data has in fact been subject to United States signals intelligence activities; the nature of the relief sought; the specific means by which personal information of or about the complainant was believed to have been transmitted to the United States; the identities of the United States Government entities believed to be involved in the alleged violation (if known); and any other measures the complainant pursued to obtain the relief requested and the response received through those other measures;”

(iii) “is not frivolous, vexatious, or made in bad faith”[58]

Given the qualifications that a complaint need only “allege” a violation and “need not demonstrate that the complainant’s data has in fact been subject to United States signals intelligence activities,” the basis for Schrems’ suggestion that EU persons will be unable to benefit from this redress mechanism is unclear.

C.     Access to Information About Data Processing

Finally, Schrems’ NOYB raised a concern that “judgment by ‘Court’ [is] already spelled out in Executive Order.”[59] This concern seems to be based on the view that a decision of the DPRC (“the judgment”) and what the DPRC communicates to the complainant are the same thing. In other words, the legal effects of a DPRC decision are exhausted by providing the individual with the neither-confirm-nor-deny statement set out in Section 3 of the executive order. This is clearly incorrect. The DPRC has the power to issue binding directions to intelligence agencies. The actual binding determinations of the DPRC are not predetermined by the executive order; only the information to be provided to the complainant is.

Relatedly, Korff argues that:

(…) the meaningless “boilerplate” responses that are spelled out in the rules also violate the principle, enshrined in the ECHR and therefore also applicable under the Charter, that any judgment of a court must be “pronounced publicly”. The “boilerplate” responses, in my opinion, do not constitute the “judgment” reached (…)[60]

Here, as before, Korff appears to elide the question of the legal standard of “adequacy,” directly applying to a third country what he argues is required under the European Convention of Human Rights and thus under the EU Charter.

The issues of access to information and data may, however, call for closer consideration. For example, in La Quadrature du Net, the CJEU looked at the difficult problem of notifying persons whose data has been subject to state surveillance, requiring individual notification “only to the extent that and as soon as it is no longer liable to jeopardise” the law-enforcement tasks in question.[61] Nevertheless, given the “essential equivalence” standard applicable to third-country adequacy assessments, it does not automatically follow that individual notification is at all required in that context.

Moreover, it also does not necessarily follow that adequacy requires that EU citizens have a right to access the data processed by foreign government agencies. The fact that there are significant restrictions on rights to information and access in some EU member states,[62] though not definitive (after all, those countries may be violating EU law), may be instructive for the purposes of assessing the adequacy of data protection in a third country, where EU law requires only “essential equivalence.”

The Commission’s Adequacy Decision accepted that individuals would have access to their personal data processed by U.S. public authorities, but clarifies that this access may be legitimately limited—e.g., by national-security considerations.[63] The Commission did not take the simplistic view that access to personal data must be guaranteed by the same procedure that provides binding redress, including through the Data Protection Review Court. Instead, the Commission accepts that other avenues, such as requests under the Freedom of Information Act, may perform that function.

IV.    Conclusion

With the Adequacy Decision, the European Commission announced that it has favorably assessed the October 2022 executive order’s changes to the U.S. data-protection framework, which apply to foreigners from friendly jurisdictions (presumed to include the EU). The Adequacy Decision is certain to be challenged before the CJEU by privacy advocates. As discussed above, the key legal concerns will likely be the proportionality of data collection and the availability of effective redress.

Opponents of granting an adequacy decision tend to rely on the assumption that a finding of adequacy requires virtually identical substantive and procedural privacy safeguards as required within the EU. As noted by the European Commission in its decision, this position is not well-supported by CJEU case law, which clearly recognizes that only an “adequate level” and “essential equivalence” of protection are required from third countries under the GDPR. To date, the CJEU has not had to specify in greater detail precisely what, in its view, these provisions mean. Instead, the Court has been able to point to certain features of U.S. law and practice that were significantly below the GDPR standard (e.g., that the official responsible for providing individual redress was not guaranteed to be independent of political pressure). Future legal challenges to a new Adequacy Decision will most likely require the CJEU to provide more guidance on what “adequate” and “essentially equivalent” mean.

In the Adequacy Decision, the Commission carefully considered the features of U.S. law and practice that the Court previously found inadequate under the GDPR. Nearly half of the explanatory part of the decision is devoted to “access and use of personal data transferred from the [EU] by public authorities in the” United States, with the analysis grounded in the CJEU’s Schrems II decision.

Overall, the Commission presents a sophisticated, yet uncynical, picture of U.S. law and practice. The lack of cynicism about, e.g., the independence of the DPRC adjudicative process, will undoubtedly be seen by some as naïve and unrealistic, even if the “realism” in this case is based on speculations of what might happen (e.g., secret changes to U.S. policy), rather than evidence. Litigants will likely invite the CJEU to assume that the U.S. government cannot be trusted and that it will attempt to mislead the European Commission and thus undermine the adequacy-monitoring process (Article 45(3) GDPR). It is not clear, however, that the Court will be willing to go that way—not least due to respect for comity in international law.

[1] Regulation (EU) 2016/679 (General Data Protection Regulation).

[2] Case C-311/18, Data Protection Comm’r v. Facebook Ireland Ltd. & Maximillian Schrems, ECLI:EU:C:2019:1145 (CJ, Jul. 16, 2020), available at http://curia.europa.eu/juris/liste.jsf?num=C-311/18 [hereinafter “Schrems II”].

[3] See, e.g., Ariane Mole, Willy Mikalef, & Juliette Terrioux, Why This French Court Decision Has Far-Reaching Consequences for Many Businesses, IAPP.org (Mar. 15, 2021), https://iapp.org/news/a/why-this-french-court-decision-has-far-reaching-consequences-for-many-businesses; Gabriela Zanfir-Fortuna, Understanding Why the First Pieces Fell in the Transatlantic Transfers Domino, The Future of Privacy Forum (2022), https://fpf.org/blog/understanding-why-the-first-pieces-fell-in-the-transatlantic-transfers-domino; Caitlin Fennessy, The Austrian Google Analytics decision: The Race Is On, IAPP Privacy Perspectives (Feb. 7, 2022) https://iapp.org/news/a/the-austrian-google-analytics-decision-the-race-is-on; Italian SA Bans Use of Google Analytics: No Adequate Safeguards for Data Transfers to the USA (Jun. 23, 2022), https://www.gpdp.it/web/guest/home/docweb/-/docweb-display/docweb/9782874.

[4] Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities, The White House (2022), https://www.whitehouse.gov/briefing-room/presidential-actions/2022/10/07/executive-order-on-enhancing-safeguards-for-united-states-signals-intelligence-activities.

[5] European Commission and United States Joint Statement on Trans-Atlantic Data Privacy Framework, European Commission (Mar. 25, 2022), https://ec.europa.eu/commission/presscorner/detail/en/IP_22_2087.

[6] Draft Commission Implementing Decision Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Level of Protection of Personal Data Under the EU-US Data Privacy Framework, European Commission (2022), available at https://commission.europa.eu/system/files/2022-12/Draft%20adequacy%20decision%20on%20EU-US%20Data%20Privacy%20Framework_0.pdf.

[7]  Commission Implementing Decision EU 2023/1795 of 10 July 2023 pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the adequate level of protection of personal data under the EU-US Data Privacy Framework, OJ L 231, 20.9.2023, European Commission (2023), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32023D1795 (hereinafter “Adequacy Decision”).

[8] See Patrice Navarro & Julie Schwartz, Member of French Parliament Lodges First Request for Annulment of EU-US Data Privacy Framework, Hogan Lovells Engage (Sep. 8, 2023), https://www.engage.hoganlovells.com/knowledgeservices/news/member-of-french-parliament-lodges-first-request-for-annulment-of-eu-us-data-privacy-framework; Philippe Latombe, Communiqué de Presse (Sep. 7, 2023), available at https://www.politico.eu/wp-content/uploads/2023/09/07/4_6039685923346583457.pdf.

[9] See, e.g., Joe Jones, EU-US Data Adequacy Litigation Begins, IAPP.org (Sep. 8, 2023), https://iapp.org/news/a/eu-u-s-data-adequacy-litigation-begins.

[10] Latombe, supra note 8.

[11] Id.

[12] See supra note 8.

[13] Mark Scott, We Don’t Talk About Fixing Social Media, Digital Bridge from Politico (Aug. 3, 2023), https://www.politico.eu/newsletter/digital-bridge/we-dont-talk-about-fixing-social-media. See also New Trans-Atlantic Data Privacy Framework Largely a Copy of “Privacy Shield”. NOYB Will Challenge the Decision, noyb.eu (2023), https://noyb.eu/en/european-commission-gives-eu-us-data-transfers-third-round-cjeu.

[14] Case C-362/14, Maximillian Schrems v Data Protection Commissioner, ECLI:EU:C:2015:650, available at https://curia.europa.eu/juris/liste.jsf?num=C-362/14 [hereinafter “Schrems I”].

[15] Scott, supra note 13.

[16] Schrems II [178].

[17] Schrems I, supra note 14.

[18] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals With Regard to the Processing of Personal Data and on the Free Movement of Such Data (“Data Protection Directive”).

[19] Schrems I [73].

[20] Theodore Christakis, Kenneth Propp, & Peter Swire, EU/US Adequacy Negotiations and the Redress Challenge: Whether a New U.S. Statute is Necessary to Produce an “Essentially Equivalent” Solution, European Law Blog (2022), https://europeanlawblog.eu/2022/01/31/eu-us-adequacy-negotiations-and-the-redress-challenge-whether-a-new-u-s-statute-is-necessary-to-produce-an-essentially-equivalent-solution.

[21] Opinion of Advocate General Saugmandsgaard Øe delivered on 19 December 2019, Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems, ECLI:EU:C:2019:1145 [248].

[22] European Data Protection Board, Recommendations 02/2020 on the European Essential Guarantees for surveillance measures, available at https://edpb.europa.eu/sites/default/files/files/file1/edpb_recommendations_202002_europeanessentialguaranteessurveillance_en.pdf (hereinafter: “EDPB Recommendations on surveillance measures”).

[23] EDPB Recommendations on surveillance measures [8].

[24] Executive Order, supra note 4, Sec. 2(a)(ii)(B).

[25] Douwe Korff, The Inadequacy of the October 2022 New US Presidential Executive Order on Enhancing Safeguards For United States Signals Intelligence Activities, 13 (2022), https://www.ianbrown.tech/2022/11/11/the-inadequacy-of-the-us-executive-order-on-enhancing-safeguards-for-us-signals-intelligence-activities.

[26] Id. at 10–13.

[27] EDPB Recommendations on surveillance measures [34].

[28] Opinion of Advocate General Saugmandsgaard Øe in Schrems II [249].

[29] European Commission, supra note 7, Recital 135.

[30] New US Executive Order Unlikely to Satisfy EU Law, NOYB (Oct. 7, 2022), https://noyb.eu/en/new-us-executive-order-unlikely-satisfy-eu-law.

[31] Korff, supra note 25 at 19.

[32] Schrems II [184].

[33] European Data Protection Supervisor, Opinion 5/2023 on the European Commission Draft Implementing Decision on the Adequate Protection of Personal Data Under the EU-US Data Privacy Framework, [134]-[135] (2023), https://edpb.europa.eu/our-work-tools/our-documents/opinion-art-70/opinion-52023-european-commission-draft-implementing_en. See also Alex Joel, Necessity, Proportionality, and Executive Order 14086, Joint PIJIP/TLS Research Paper Series (2023), https://digitalcommons.wcl.american.edu/research/99.

[34] Digital Rights Ireland and Others, Cases C-293/12 and C-594/12, EU:C:2014:238.

[35] La Quadrature du Net and Others v Premier Ministre and Others, Case C-511/18, ECLI:EU:C:2020:791.

[36] Schrems I [94].

[37] Schrems I [96].

[38] European Commission, supra note 7, Recitals 140-141 (footnotes omitted).

[39] Theodore Christakis, Squaring the Circle? International Surveillance, Underwater Cables and EU-US Adequacy Negotiations (Part 1), European Law Blog (2021), https://europeanlawblog.eu/2021/04/12/squaring-the-circle-international-surveillance-underwater-cables-and-eu-us-adequacy-negotiations-part1; Theodore Christakis, Squaring the Circle? International Surveillance, Underwater Cables and EU-US Adequacy Negotiations (Part 2), European Law Blog (2021), https://europeanlawblog.eu/2021/04/13/squaring-the-circle-international-surveillance-underwater-cables-and-eu-us-adequacy-negotiations-part2.

[40] European Commission, supra note 7, Recital 141, footnote 250 (emphasis added).

[41] Id., Recital 141, footnote 250.

[42] Directive (EU) 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on Certain Legal Aspects of Information Society Services, in Particular Electronic Commerce, in the Internal Market (‘Directive on Electronic Commerce’) [2000] OJ L178/1.

[43] Case C-18/18, Eva Glawischnig-Piesczek v Facebook [2019] ECLI:EU:C:2019:821. See also Daphne Keller, Facebook Filters, Fundamental Rights, and the CJEU’s Glawischnig-Piesczek Ruling, 69 GRUR International 616 (2020).

[44] As Keller puts it: “Instead of defining prohibited ‘general’ monitoring as monitoring that affects every user, the Court effectively defines it as monitoring for content that was not specified in advance by a court.” Id. at 620.

[45] Case C-401/19, Poland v Parliament and Council [2022] ECLI:EU:C:2022:297; Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on Copyright and Related Rights in the Digital Single Market and Amending Directives 96/9/EC and 2001/29/EC (OJ 2019 L 130, p. 92). For background, see Christophe Geiger & Bernd Justin Jütte, Platform Liability Under Art. 17 of the Copyright in the Digital Single Market Directive, Automated Filtering and Fundamental Rights: An Impossible Match, 70 GRUR International 517 (2021).

[46] Schrems II [181].

[47] Schrems II [183].

[48] @MBarczentewicz, Twitter (Aug. 24, 2023, 9:43 AM), https://twitter.com/MBarczentewicz/status/1694707035659813023. See also Max Schrems, Open Letter on the Future of EU-US Data Transfers (May 23, 2022), https://noyb.eu/en/open-letter-future-eu-us-data-transfers.

[49] Similar phrasing can be found in Ashley Gorski, The Biden Administration’s SIGINT Executive Order, Part II: Redress for Unlawful Surveillance, Just Security (2022), https://www.justsecurity.org/83927/the-biden-administrations-sigint-executive-order-part-ii. Gorski’s text shows well how easy it is to elide, even unintentionally, the distinction between Article 47 being a standard that must be satisfied by a third country and its merely contributing to the level of protection that constitutes a benchmark for an adequacy assessment. At one point, she notes that “the CJEU held that U.S. law failed to provide an avenue of redress ‘essentially equivalent’ to that required by Article 47.” In other places, however, she adopts the phrasing of “satisfying” Article 47.

[50] Schrems II [186].

[51] Theodore Christakis, Kenneth Propp & Peter Swire, The Redress Mechanism in the Privacy Shield Successor: On the Independence and Effective Powers of the DPRC, IAPP.org (2022), https://iapp.org/news/a/the-redress-mechanism-in-the-privacy-shield-successor-on-the-independence-and-effective-powers-of-the-dprc.

[52] Gorski, supra note 49; Korff, supra note 25 at 21.

[53] European Commission, supra note 7, Recital 175.

[54] Id., Recital 187 (footnotes omitted).

[55] Gorski, supra note 49.

[56] According to them: “(…) key U.S. Supreme Court decisions have affirmed the binding force of a DOJ regulation and the legal conclusion that all of the executive branch, including the president and the attorney general, are bound by it.” Christakis, Propp, & Swire, supra note 51.

[57] @MBarczentewicz, supra note 48.

[58] Executive Order, supra note 4, Sec. 5(k)(i)-(iv).

[59] NOYB, supra note 30. See also NOYB, supra note 13.

[60] Korff, supra note 25 at 25.

[61] Joined cases C-511/18, C-512/18 and C-520/18, La Quadrature du Net and others, ECLI:EU:C:2020:791 [191].

[62] European Union Agency for Fundamental Rights, Surveillance by Intelligence Services: Fundamental Rights Safeguards and Remedies in the EU – Volume II: Field Perspectives and Legal Update (2017) https://fra.europa.eu/en/publication/2017/surveillance-intelligence-services-fundamental-rights-safeguards-and-remedies-eu.

[63] European Commission, supra note 7, Recitals 199-200.

Data Security & Privacy

Even Meta Deserves the Rule of Law

Popular Media

In Robert Bolt’s play “A Man for All Seasons,” the character of Sir Thomas More argues at one point that he would “give the Devil benefit of law, for my own safety’s sake!” Defending the right to due process for a broadly disliked company is similarly not the most popular position, but nonetheless, even Meta deserves the rule of law.

Read the full piece here.

Data Security & Privacy

Mikołaj Barczentewicz on Ireland’s Meta Fine

Presentations & Interviews

ICLE Senior Scholar Mikołaj Barczentewicz joined the Mobile Dev Memo podcast to discuss the Irish Data Protection Commission’s recent $1.3 billion fine levied against Meta over its transmission of EU resident data to the United States, and what the case means for the future of U.S.-EU data flows. The full episode is embedded below.

Data Security & Privacy

Gus Hurwitz on Children’s Online Privacy

Presentations & Interviews

ICLE Director of Law & Economics Programs Gus Hurwitz was a guest on The Cyberlaw Podcast to discuss the Federal Trade Commission’s (FTC) recent settlement with Amazon over claims regarding children’s privacy, as well as separate FTC efforts to rewrite its 2019 consent decree with Meta over children’s advertising and services.

Other topics included Amazon settling another FTC complaint over security failings at its Ring doorbell operation; Microsoft losing a data-protection case in Ireland; and whether automated tip suggestions should be condemned as “dark patterns.”

The full episode is embedded below.

Data Security & Privacy

Ireland’s Massive Fine Against Meta Could Erode Trust In EU Law

Popular Media

The €1.2 billion fine that the Irish Data Protection Commission (DPC) levied against Meta marks a new record for violation of the EU’s General Data Protection Regulation (GDPR), but it is the DPC’s order that the company shut off its transatlantic flow of user data that will have the most far-reaching consequences for international trade, privacy policy, and the rule of law.

Read the full piece here.

Data Security & Privacy

Keeping Data Flowing Is in India’s Interest

Popular Media

Mandates to restrict the flow of data across national boundaries have taken hold in a growing number of jurisdictions, including India. Spearheaded by nations like China, Iran, and Russia, the idea has vocal proponents among those who claim it will forward the goal of “digital sovereignty.”

Read the full piece here.

Data Security & Privacy