Showing Latest Publications

Jonah Gelbach on Free PACER

Presentations & Interviews

ICLE Academic Affiliate Jonah Gelbach was a guest on Berkeley Law’s Voices Carry podcast to discuss how aggregating federal court data can help researchers tease out critical trends, as well as efforts to push the federal judiciary to drop the paywall on the Public Access to Court Electronic Records database. The full episode is embedded below.

Intellectual Property & Licensing

David Teece on Diversity in Corporate Governance

Presentations & Interviews

ICLE Academic Affiliate David Teece was a guest on the Insights from the Top podcast to discuss the importance of gender and racial diversity in corporate governance, the state of securitization in emerging markets, what ownership means for rising attorneys, and how the firm has remained strong for more than a century. The full episode is embedded below.

Financial Regulation & Corporate Governance

The Broken Promises of Europe’s Digital Regulation

TOTM

If you live in Europe, you may have noticed issues with some familiar online services. From consent forms to reduced functionality and new fees, there is a sense that platforms like Amazon, Google, Meta, and Apple are changing the way they do business. 

Many of these changes are the result of a new European regulation called the Digital Markets Act (DMA), which seeks to increase competition in online markets. Under the DMA, so-called “gatekeepers” must allow rivals to access their platforms. The DMA took effect March 7, and firms must now comply with it, which explains why these changes are unfolding today.

Read the full piece here.

Antitrust & Consumer Protection

SLC Test (Mergers)

Popular Media

DEFINITION

The substantial lessening of competition or “SLC” test is a standard that regulatory authorities use to assess the legality of proposed mergers and acquisitions. The SLC test examines whether a prospective merger is likely to substantially lessen competition in a given market. Its purpose is to prevent mergers that, by reducing competition, would increase prices, reduce output, limit consumer choice, or stifle innovation. Mergers that substantially lessen competition are prohibited under the laws of the jurisdictions that use this test, including the United States, the European Union, Canada, the United Kingdom, Australia, and Nigeria, among others.

Read the full piece here.

Antitrust & Consumer Protection

ICLE Comments to FTC on Children’s Online Privacy Protection Rule NPRM

Regulatory Comments

Introduction

We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online Privacy Protection Rule (“COPPA Rule”).

The International Center for Law and Economics (ICLE) is a nonprofit, nonpartisan research center whose work promotes the use of law & economics methodologies to inform public-policy debates. We believe that intellectually rigorous, data-driven analysis will lead to efficient policy solutions that promote consumer welfare and global economic growth.[1]

ICLE’s scholars have written extensively on privacy and data-security issues, including those related to children’s online safety and privacy. We also previously filed comments as part of the COPPA Rule Review and will make some of the same points below.[2]

The Children’s Online Privacy Protection Act (COPPA) sought to strike a balance: protecting children online without harming the internet’s utility for them. As Sen. Richard Bryan (D-Nev.) put it when he laid out the purpose of COPPA:

The goals of this legislation are: (1) to enhance parental involvement in a child’s online activities in order to protect the privacy of children in the online environment; (2) to enhance parental involvement to help protect the safety of children in online fora such as chatrooms, home pages, and pen-pal services in which children may make public postings of identifying information; (3) to maintain the security of personally identifiable information of children collected online; and (4) to protect children’s privacy by limiting the collection of personal information from children without parental consent. The legislation accomplishes these goals in a manner that preserves the interactivity of children’s experience on the Internet and preserves children’s access to information in this rich and valuable medium.[3]

In other words, COPPA was designed to protect children from online threats by promoting parental involvement in a way that also preserves a rich and vibrant marketplace for children’s content online. Consequently, the pre-2013 COPPA Rule did not define personal information to include persistent identifiers standing alone. It is these persistent identifiers that are critical for the targeted advertising that funds the interactive online platforms and the creation of children’s content the legislation was designed to preserve.

COPPA applies to the “operator of any website or online service” that is either “directed to children that collects personal information from children” or that has “actual knowledge that it is collecting personal information from a child.”[4] These operators must “obtain verifiable parental consent for the collection, use, or disclosure of personal information.” The NPRM, following the mistaken 2013 amendments to the COPPA Rule, continues to define “personal information” to include persistent identifiers that are necessary for the targeted advertising undergirding the internet ecosystem.

Below, we argue that, before the FTC moves further toward restricting platform operators and content creators’ ability to monetize their work through targeted advertising, it must consider the economics of multisided platforms. The current path will lead to less available free content for children and more restrictions on their access to online platforms that depend on targeted advertising. Moreover, the proposed rules are inconsistent with the statutory text of COPPA, as persistent identifiers do not by themselves enable contacting specific individuals. Including them in the definition of “personal information” is also contrary to the statute’s purpose, as it will lead to a less vibrant internet ecosystem for children.

Finally, there are better ways to protect children online, including by promoting the use of available technological and practical solutions to avoid privacy harms. To comply with existing First Amendment jurisprudence regarding online speech, it is necessary to rely on these less-restrictive means to serve the goal of protecting children without unduly impinging on their speech interests online.

I. The Economics of Online Multisided Platforms

Most of the “operators of websites and online services” subject to the COPPA Rule are what economists call multisided markets, or platforms.[5] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multisided platforms generate “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[6]

Online platforms provide content to one side and access to potential consumers on the other side. In order to keep demand high, online platforms often offer free access to users, whose participation is subsidized by those participants on the other side of the platform (such as advertisers) that wish to reach them.[7] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.

This dynamic is also true of platforms with content “directed to children.” Revenue is collected not from those users, but primarily from the other side of the platform—i.e., advertisers who pay for access to the platform’s users. To be successful, online platforms must keep enough—and the right type of—users engaged to maintain demand for advertising.

Moreover, many “operators” under COPPA are platforms that rely on user-generated content. Thus, they must also consider how to attract and maintain high-demand content creators, often accomplished by sharing advertising revenue. If platforms fail to serve the interests of high-demand content creators, those creators may leave the platform, thus reducing its value.

Online platforms acting within the market process are usually the parties best-positioned to make decisions on behalf of platform users. Operators with content directed to children may even compete on privacy policies and protections for children by providing tools to help users avoid what they (or, in this context, their parents and guardians) perceive to be harms, while keeping users on the platform and maintaining value for advertisers.[8]

There may, however, be examples where negative externalities[9] stemming from internet use are harmful to society more broadly. A market failure could result, for instance, if platforms’ incentives lead them to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful for children or keeps them hooked to using the platform.

In situations where there are negative externalities from internet use, there may be a case to regulate online platforms in various ways. Any case for regulation must, however, acknowledge potential transaction costs, as well as how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[10] and elaborated on in the subsequent literature,[11] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his candy-making machine is a potential cost to the doctor next door, who consequently cannot use his office to conduct certain testing. Simultaneously, the doctor moving his office next door to the confectioner is a potential cost to the confectioner’s ability to use his equipment.

In a world of well-defined property rights and low transaction costs, the initial allocation of rights would not matter, because the parties could bargain to overcome the harm in a mutually beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or conversely, the doctor could pay the confectioner to reduce the sound of his machines.[12] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[13]
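The least-cost-avoider logic in the confectioner/doctor example can be sketched numerically. This is a stylized illustration only; the dollar figures below are hypothetical, chosen solely to show how the efficient allocation is identified.

```python
# Stylized least-cost-avoider illustration of Coase's confectioner/doctor
# example. All numbers are hypothetical, used only for illustration.

def efficient_outcome(harm, avoidance_costs):
    """Pick the cheapest response to the externality: either tolerate
    the harm, or pay the lowest-cost means of avoiding it."""
    options = {"tolerate harm": harm, **avoidance_costs}
    choice = min(options, key=options.get)
    return choice, options[choice]

harm_to_doctor = 60  # hypothetical lost income from being unable to run tests
avoidance_costs = {
    "confectioner soundproofs": 20,  # cheapest option: the least-cost avoider
    "doctor relocates office": 40,
}

choice, cost = efficient_outcome(harm_to_doctor, avoidance_costs)
print(choice, cost)  # with these numbers, the burden falls on the confectioner
```

With low transaction costs, bargaining reaches the soundproofing outcome regardless of who initially holds the right; with high transaction costs, the law maximizes welfare by assigning the burden to the confectioner directly.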

In the context of the COPPA Rule, website operators and online services create incredible value for their users, but they also can, at times, impose negative externalities relevant to children who use their services. In the absence of transaction costs, it would not matter whether operators must obtain verifiable parental consent before collecting, using, or disclosing personal information, or whether the initial burden is placed on parents and children to avoid the harms associated with such collection, use, or disclosure.

But given that there are transaction costs involved in obtaining (and giving) verifiable parental consent,[14] it matters how the law defines personal information (which serves as a proxy for a property right, in Coase’s framing). If personal information is defined too broadly and the transaction costs for providers to gain verifiable parental consent are too high, the result may be that the societal benefits of children’s internet use will be lost, as platform operators restrict access beyond the optimum level.

The threat of liability for platform operators under COPPA also risks excessive collateral censorship.[15] This arguably has already occurred, as operators like YouTube have restricted content creators’ ability to monetize their work through targeted advertising, leading on balance to less children’s content. By wrongly placing the burden on operators to avoid harms associated with targeted advertising, societal welfare is reduced, including the welfare of children who no longer get the benefits of that content.

On the other hand, there are situations where website operators and online services are the least-cost avoiders. For example, they may be the parties best-placed to monitor and control harms associated with internet use in cases where it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[16] In other words, operators should still be held liable under COPPA when they facilitate adults’ ability to message children, or to identify a child’s location without parental consent, in ways that could endanger children.[17] Placing the burden on children or their parents to avoid such harms could allow operators to impose uncompensated or undercompensated harms on society.

Thus, in order to get the COPPA Rule’s balance right, it is important to determine whether it is the operators or their users who are the least-cost avoiders. Placing the burden on the wrong parties would harm societal welfare, either by reducing the value that online platforms confer on their users or by imposing more uncompensated negative externalities on society.

II. Persistent Identifiers and ‘Personal Information’

As mentioned above, under COPPA, a website operator or online service that is either directed to children or that has actual knowledge that it collects personal information from a child must obtain “verifiable parental consent” for the “collection, use or disclosure” of that information.[18] But the NPRM continues to apply the expanded definition of “personal information” to include persistent identifiers from the 2013 amendments.

COPPA defines personal information as “individually identifiable information” collected online.[19] The statute lists examples such as first and last name; home or other physical address; and email address, telephone number, or Social Security number.[20] These are all identifiers obviously connected to a person’s real identity. COPPA does empower the FTC to determine whether other identifiers should be included, but only insofar as they permit “the physical or online contacting of a specific individual”[21] or constitute “information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.”[22]

In 2013, the FTC amended the definition of personal information to include:

A persistent identifier that can be used to recognize a user over time and across different Web sites or online services. Such persistent identifier includes, but is not limited to, a customer number held in a cookie, an Internet Protocol (IP) address, a processor or device serial number, or unique device identifier.[23]

The NPRM here continues this error.

Neither IP addresses nor device identifiers alone “permit the physical or online contacting of a specific individual,” as required by 15 U.S.C. § 6501(8)(F). A website or app could not, from these pieces of information alone, identify a specific individual or determine whether a user is an adult or a child. For persistent identifiers, like those relied upon for targeted advertising, to count as personal information under 15 U.S.C. § 6501(8)(G), they must be combined with other identifiers listed in the definition. In other words, it is only when a persistent identifier is combined with a first and last name, an address, an email, a phone number, or a Social Security number that it should be considered personal information protected by the statute.

While administrative agencies receive Chevron deference in court challenges when definitions are ambiguous, this text, when illuminated by canons of statutory construction,[24] is clear. The canon of ejusdem generis applies when general words follow an enumeration of two or more things.[25] The general words are taken to apply only to persons or things of the same general kind or class as those mentioned specifically. Persistent identifiers, such as cookies, bear little resemblance to the other examples of “personally identifiable information” listed in the statute, such as first and last name, address, phone, email, or Social Security number. Only when combined with such information could a persistent identifier become personal information.

The NPRM states that the Commission is “not persuaded” by this line of argumentation, pointing back to the same reasoning offered in the 2013 amendments. The NPRM states that it is “the reality that at any given moment a specific individual is using that device,” which “underlies the very premise behind behavioral advertising.”[26] Moreover, the NPRM reasons that “while multiple people in a single home often use the same phone number, home address, and email address, Congress nevertheless defined these identifiers as ‘individually identifiable information’ in the COPPA statute.”[27] But this reasoning is flawed.

While multiple people regularly share an address, and sometimes even a phone number or email, each of these identifiers allows for contacting an individual person in a way that a persistent identifier simply does not. In each of those cases, bad actors can use such information to send direct messages (phone numbers and emails), find a person’s physical location (address), and potentially cause them harm.

A persistent identifier, on its own, is not the same. Without subpoenaing an internet service provider (ISP) or virtual private network (VPN) provider, a bad actor could neither locate the person to whom a persistent identifier is assigned nor message them directly. Persistent identifiers are useful primarily to online platforms in supporting their internal operations (which the NPRM continues to allow) and serving users targeted advertising.

Moreover, the fact that bills seeking to update COPPA—proposed but never passed by Congress—would have expanded the definition of personal information to include persistent identifiers suggests that the FTC has asserted authority it does not have under the current statute.[28] Under Supreme Court precedent,[29] when considering whether an agency has the authority it claims to issue rules, courts must consider whether Congress has rejected proposals to expand the agency’s jurisdiction in similar ways.

The NPRM also ignores the practical realities of the relationship between parents and children when it comes to devices and internet use. Parental oversight is already built into any type of advertisement (including targeted ads) that children see. Few children can view those advertisements without their parents providing them a device and the internet access to do so. Even fewer children can realistically make their own purchases. Consequently, the NPRM misunderstands targeted advertising in the context of children’s content, which is not based on any knowledge about the users as individuals, but on the browsing and search history of the device they happen to be using.

Children under age 13, in particular, are extremely unlikely to have purchased the devices they use; to have paid for the internet access to use those devices; or to have any disposable income or means to pay for goods and services online. Thus, contrary to the NPRM’s assumptions, the actual “targets” of this advertising—even on websites or online services that host children’s content—are the children’s parents.

This NPRM continues the 2013 amendments’ mistake and will continue to greatly reduce the ability of children’s content to generate revenue through the use of relatively anonymous persistent identifiers. As we describe in the next section, the damage done by the 2013 amendments is readily apparent, and the Commission should take this opportunity to rectify the problem.

III. More Parental Consent, Less Children’s Content

As outlined above, in a world without transaction costs—or, at least, one in which such costs are sufficiently low—verifiable parental consent would not matter, because it would be extremely easy for a bargain to be struck between operators and parents. In the real world, however, transaction costs exist. In fact, despite the FTC’s best efforts under the COPPA Rule, the transaction costs associated with obtaining verifiable parental consent continue to be sufficiently high as to prevent most operators from seeking that consent for persistent identifiers. As we stated in our previous comments, the economics are simple: if content creators lose access to revenue from targeted advertising, there will be less content created from which children can benefit.

FIGURE 1: Supply Curve for Children’s Online Content

The supply curve for children’s online content shifts left as the marginal cost of monetizing it increases. The marginal cost of monetizing such content is driven upward by the higher compliance costs of obtaining verifiable parental consent before serving targeted advertising. This supply shift means that less online content will be created for children.
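The direction of the shift described above can be sketched with a stylized linear supply-and-demand model. The curve parameters and the per-unit compliance cost below are hypothetical numbers, used only to show that a higher marginal cost of monetization reduces the equilibrium quantity of content.

```python
# Stylized linear model of the supply shift in Figure 1.
# All parameter values are hypothetical, chosen only for illustration.

def equilibrium_quantity(a, b, c, d, t=0.0):
    """Solve a - b*Q = c + t + d*Q for Q, where demand is P = a - b*Q,
    supply is P = c + d*Q, and t is a per-unit compliance cost
    (e.g., obtaining verifiable parental consent) that shifts supply."""
    return (a - c - t) / (b + d)

a, b = 10.0, 1.0   # demand curve: P = a - b*Q
c, d = 2.0, 1.0    # supply curve: P = c + d*Q

q_before = equilibrium_quantity(a, b, c, d)         # no compliance cost
q_after = equilibrium_quantity(a, b, c, d, t=3.0)   # with compliance cost

print(q_before, q_after)  # equilibrium quantity of children's content falls
```

Any positive compliance cost t reduces the equilibrium quantity in this model; the magnitude depends on the slopes of the two curves.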

These results are not speculative at this point. Scholars who have studied the issue have found the YouTube settlement, made pursuant to the 2013 amendments, has resulted in less child-directed online content, due to creators’ inability to monetize that content through targeted advertising. In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[30] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[31]

By requiring verifiable parental consent, the rule change and settlement increased the transaction costs imposed on online platforms that host content created by others. YouTube’s economically rational response was to restrict content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The result was less content created for children, including by driving out less-profitable content creators:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[32]

This is not the only finding regarding COPPA’s role in reducing the production of content for children. Morgan Reed—president of the App Association, a global trade association for small and medium-sized technology companies—presented extensively at the FTC’s 2019 COPPA Workshop.[33] Reed’s testimony detailed that the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children.

It is worth highlighting, in particular, Reed’s repeated use of the words “friction,” “restriction,” and “cost” to describe how COPPA’s institutional features affect the behavior of social-media platforms, parents, and children. While noting that general audience content is “unfettered, meaning that you do not feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” Reed said that COPPA-regulated apps and content are, by contrast, all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[34]

Reed’s use of the word “friction” is particularly enlightening. The economist Mike Munger of Duke University has often described transaction costs as frictions—explaining that, to consumers, all costs are transaction costs.[35] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents are left with fewer and lower-quality children’s apps and less content.

Thus, when the NPRM states that “the Commission [doesn’t] find compelling the argument that the 2013 persistent identifier modification has caused harm by hindering the ability of operators to monetize online content through targeted advertising,”[36] in part because “the 2013 Amendments permit monetization… through providing notice and seeking parental consent for the use of personal information for targeted advertising,”[37] it misses how transaction costs prevent this outcome. The FTC should not ignore the data provided by scholars who have researched the question, nor the direct testimony of app developers.

IV. Lower-Cost Ways to Avoid Harms to Children

Widely available practical and technological means are a lower-cost way to avoid the negative externalities associated with internet use, relative to verifiable-parental-consent laws. As NetChoice put it in the complaint the group filed against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[38]

NetChoice’s complaint recognized the subjective nature of negative externalities, stating:

Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[39]

They proceeded to list all the ways that parents can take control and help their children avoid online harms, including with respect to the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[40] Parents can also choose to use tools offered by cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[41]

NetChoice also pointed to wireless routers that allow parents to filter and monitor online content;[42] parental controls at the device level;[43] third-party filtering applications;[44] and numerous tools offered by NetChoice members that offer relatively low-cost monitoring and control by parents, or even by teen users acting on their own behalf.[45] Finally, they noted that, in response to market demand,[46] NetChoice members expend significant resources curating content to ensure that it is appropriate.[47]

Similarly, parents can protect their children’s privacy simply by taking control of the devices they allow their children to use. Tech-savvy parents can, if they so choose, install software or use ad-blockers to prevent collection of persistent identifiers.[48] Even less tech-savvy parents can make sure that their children are not subject to ads and tracking simply by monitoring their device usage and ensuring they only use YouTube Kids or other platforms created explicitly for children. In fact, most devices and operating systems now have built-in, easy-to-use controls that enable both monitoring and blocking of children’s access to specific apps and websites.[49]

This litany of less-restrictive means to accomplish the goal of protecting children online bears repeating, because even children have some First Amendment interests in receiving online speech.[50] If a court were to examine the COPPA Rule as a speech regulation that forecloses children's access to online content, it would be subject to strict scrutiny, meaning the rules would need to employ the least-restrictive means available to fulfill the statute's purpose. Educating parents and children about the available practical and technological means to avoid harms associated with internet use, including the collection of data for targeted advertising, would clearly be a less-restrictive alternative to a de facto ban on targeted advertising.

A less-restrictive COPPA rule could still enhance parental involvement and protect children from predators without significantly impairing the marketplace for children's online content. Parents already have the ability to review their children's content-viewing habits on the devices they buy for them. A COPPA rule that enhances parental control by requiring verifiable parental consent when children share personal information—such as first and last name, address, phone number, email address, or Social Security number—obviously makes sense, as do additions like geolocation data. But it is equally obvious that the relatively anonymized collection of persistent identifiers used to support targeted ads can be addressed at lower cost through practical and technological means, without requiring costly verifiable parental consent.

V. Perils of Bringing More Entities Under the COPPA Rule

The costs of the COPPA Rule would be further exacerbated by the NPRM’s proposal to modify the criteria for determining whether a site or service is directed toward children.[51] These proposed changes, particularly the reliance on third-party services and comparisons with “similar websites or online services,” raise significant concerns about both their practical implementation and potential unintended consequences. The latter could include further losses of online content for both children and adults, as content creators drawn into COPPA’s orbit lose access to revenue from targeted advertising.

The FTC’s current practice employs a multi-factor test to ascertain whether a site or service is directed at children under 13. This comprehensive approach considers various elements, including subject matter, visual and audio content, and empirical evidence regarding audience composition.[52] The proposed amendments aim to expand this test by introducing such factors as marketing materials, representations to third parties and, notably, reviews by users or third parties and comparisons with similar websites or services.[53]

The inclusion of third-party reviews and comparisons with similar services as factors in determining a site's target audience introduces a level of ambiguity and unreliability that would be counterproductive to COPPA's goals. Without clear standards by which to evaluate their competence or authority, reliance on third-party reviews would leave operators without a solid foundation upon which to assess compliance. This ambiguity could lead to overcompliance. In particular, online platforms that carry third-party content may err on the side of caution in order to align with the spirit of the rule. This threatens to stifle innovation and free expression by restricting creators' ability to monetize content that has any chance of being considered "directed to children." Moreover, to avoid this loss of revenue, content creators could shift their focus exclusively to content clearly aimed only at adults, rather than content that could be interesting to adults and children alike.

Similarly, the proposal to compare operators with “similar websites or online services” is fraught with challenges. The lack of guidance on how to evaluate similarity or to determine which service sets the standard for compliance would increase burdens on operators, with little evidence of tangible realized benefits. It’s also unclear who would make these determinations and how disputes would be resolved, leading to further compliance costs and potential litigation. Moreover, operators may be left in a position where it is impractical to accurately assess the audience of similar services, thereby further complicating compliance efforts.

Given these considerations, the FTC should not include reliance on third-party services or comparisons with similar websites or online services in its criteria for determining whether content is directed at children under 13. These approaches introduce a level of uncertainty and unreliability that could lead to overcompliance, increased costs, and unintended negative impacts on online content and services, including further restrictions on content creators whose content appeals to both adults and children. Instead, the FTC should focus on providing clear, direct guidelines that allow operators to assess their compliance with COPPA confidently, without the need to rely on potentially biased or manipulable third-party assessments. This approach will better serve the FTC's goal of protecting children's online privacy, while ensuring a healthy, innovative online ecosystem.

Conclusion

The FTC should reconsider the inclusion of standalone persistent identifiers in the definition of “personal information.” The NPRM continues to enshrine the primary mistake of the 2013 amendments. This change was inconsistent with the purposes and text of the COPPA statute. It already has reduced, and will continue to reduce, the availability of children’s online content.

[1] ICLE has received financial support from numerous companies, organizations, and individuals, including firms with interests both supportive of and in opposition to the ideas expressed in this and other ICLE-supported works. Unless otherwise noted, all ICLE support is in the form of unrestricted, general support. The ideas expressed here are the authors’ own and do not necessarily reflect the views of ICLE’s advisors, affiliates, or supporters.

[2] Much of these comments are adapted from ICLE’s 2019 COPPA Rule Review Comments, available at https://laweconcenter.org/wp-content/uploads/2019/12/COPPA-Comments-2019.pdf; Ben Sperry, A Law & Economics Approach to Social-Media Regulation, CPI TechREG Chronicle (Feb. 29, 2022), https://laweconcenter.org/resources/a-law-economics-approach-to-social-media-regulation; Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes (ICLE Issue Brief, Nov. 9, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[3] 144 Cong. Rec. 11657 (1998) (Statement of Sen. Richard Bryan), available at https://www.congress.gov/crec/1998/10/07/CREC-1998-10-07.pdf#page=303.

[4] 15 U.S.C. § 6502(b)(1)(A).

[5] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[6] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[7] For instance, many nightclubs hold “ladies’ night” events in which female patrons receive free admission or discounted drinks in order to attract more men, who pay full fare for both.

[8] See, e.g., Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 16, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.

[9] An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action.

[10] See Ronald H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[11] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[12] See Coase, supra note 10, at 8-10.

[13] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[14] See Part III below.

[15] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L.J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[16] See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[17] See Statement of Commissioner Alvaro M. Bedoya On the Issuance of the Notice of Proposed Rulemaking to Update the Children’s Online Privacy Protection Rule (COPPA Rule), at 3-4 (Dec. 20, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/BedoyaStatementonCOPPARuleNPRMFINAL12.20.23.pdf (listing examples of these types of enforcement actions).

[18] 15 U.S.C. § 6502(b)(1)(A)(ii).

[19] 15 U.S.C. § 6501(8).

[20] 15 U.S.C. § 6501(8)(A)-(E).

[21] 15 U.S.C. § 6501(8)(F).

[22] 15 U.S.C. § 6501(8)(G).

[23] 16 CFR § 312.2 (Personal information)(7).

[24] See Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U. S. 837, 843 n.9 (1984) (“If a court, employing traditional tools of statutory construction, ascertains that Congress had an intention on the precise question at issue, that intention is the law and must be given effect.”).

[25] What is EJUSDEM GENERIS?, The Law Dictionary: Featuring Black’s Law Dictionary Free Online Legal Dictionary 2nd Ed. (last accessed Dec. 9, 2019), https://thelawdictionary.org/ejusdem-generis.

[26] NPRM at 2043.

[27] Id.

[28] See, e.g., Children and Teens’ Online Privacy Protection Act, S. 1418, § 2(a)(3), 118th Cong. (2024).

[29] See FDA v. Brown & Williamson, 529 U.S. 120, 148-50 (2000).

[30] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[31] Id. at 6-7 (emphasis added).

[32] Id. at 1.

[33] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.

[34] Id. at 6 (emphasis added).

[35] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.

[36] NPRM at 2043.

[37] Id. at 2034, n. 121.

[38] See NetChoice Complaint, NetChoice LLC v. Griffin, NO. 5:23-CV-05105, 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), available at https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.

[39] Id. at para. 13.

[40] See id. at para. 14.

[41] See id.

[42] See id. at para. 15.

[43] See id. at para. 16.

[44] See id.

[45] See id. at paras. 17, 19-21.

[46] Sperry, supra note 8.

[47] See NetChoice Complaint, supra note 38, at para. 18.

[48] See, e.g., Mary James & Catherine McNally, The Best Ad Blockers 2024, all about cookies (last updated Feb. 29, 2024), https://allaboutcookies.org/best-ad-blockers.

[49] See, e.g., Parental Controls for Apple, Android, and Other Devices, internet matters (last accessed Mar. 7, 2024), https://www.internetmatters.org/parental-controls/smartphones-and-other-devices.

[50] See, e.g., Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794-95 (2011); NetChoice, LLC v. Griffin, 2023 WL 5660155, at *17 (W.D. Ark. Aug. 31, 2023) (finding Arkansas’s Act 689 “obviously burdens minors’ First Amendment rights” by “bar[ring] minors from opening accounts on a variety of social media platforms.”).

[51] See NPRM at 2047.

[52] See id. at 2046-47.

[53] Id. at 2047 (“Additionally, the Commission believes that other factors can help elucidate the intended or actual audience of a site or service, including user or third-party reviews and the age of users on similar websites or services.”).


A Competition Perspective on Physician Non-Compete Agreements


Abstract

Physician non-compete agreements may have significant competitive implications, and effects on both providers and patients, but they are treated variously under the law on a state-by-state basis. A review of the relevant law and the economic literature cannot identify with confidence, or with any generality, the net effects of such agreements on either physicians or health care delivery. In addition to identifying future research projects to inform policy, it is argued that the antitrust “rule of reason” provides a useful and established framework with which to evaluate such agreements in specific health care markets and, potentially, to address those agreements most likely to do significant damage to health care competition and consumers.


ICLE Comments to European Commission on Competition in Virtual Worlds


Executive Summary

We welcome the opportunity to comment on the European Commission’s call for contributions on competition in “Virtual Worlds”.[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

The metaverse is an exciting and rapidly evolving set of virtual worlds. As with any new technology, concerns about the potential risks and negative consequences that the metaverse may bring have moved policymakers to explore how best to regulate this new space.

From the outset, it is important to recognize that simply because the metaverse is new does not mean that competition in this space is unregulated or somehow ineffective. Existing regulations may not explicitly or exclusively target metaverse ecosystems, but a vast regulatory apparatus already covers most aspects of business in virtual worlds. This includes European competition law, the Digital Markets Act (“DMA”), the General Data Protection Regulation (“GDPR”), the Digital Services Act (“DSA”), and many more. Before it intervenes in this space, the commission should carefully consider whether there are any metaverse-specific problems not already addressed by these legal provisions.

This sense that competition intervention would be premature is reinforced by three important factors.

The first is that competition appears particularly intense in this space (Section I). There are currently multiple firms vying to offer compelling virtual worlds. At the time of writing, however, none appears close to dominating the market. In turn, this intense competition will encourage platforms to design services that meet consumers’ demands, notably in terms of safety and privacy. Nor does the market appear likely to fall into the hands of one of the big tech firms that command a sizeable share of more traditional internet services. Meta notoriously has poured more than $3.99 billion into its metaverse offerings during the first quarter of 2023, in addition to $13.72 billion the previous calendar year.[2] Despite these vast investments and a strategic focus on metaverse services, the company has, thus far, struggled to achieve meaningful traction in the space.[3]

Second, the commission’s primary concern appears to be that metaverses will become insufficiently “open and interoperable”.[4] But to the extent that these ecosystems do, indeed, become closed and proprietary, there is no reason to believe this to be a problem. Closed and proprietary ecosystems have several features that may be attractive to consumers and developers (Section II). These include improved product safety, performance, and ease of development. This is certainly not to say that closed ecosystems are always better than more open ones, but rather that it would be wrong to assume that one model or the other is optimal. Instead, the proper balance depends on tradeoffs that markets are better placed to decide.

Finally, timing is of the essence (Section III). Intervening so early in a fledgling industry’s life cycle is like shooting a moving target from a mile away. New rules or competition interventions might end up being irrelevant. Worse, by signaling that metaverses will be subject to heightened regulatory scrutiny for the foreseeable future, the commission may chill investment from the very firms it purports to support. In short, the commission should resist the urge to intervene so long as the industry is not fully mature.

I. Competing for Consumer Trust

The Commission is right to assume, in its call for contributions, that the extent to which metaverse services compete with each other (and continue to do so in the future) will largely determine whether they fulfill consumers’ expectations and meet the safety and trustworthiness requirements to which the commission aspires. As even the left-leaning Lessig put it:

Markets regulate behavior in cyberspace too. Prices structures often constrain access, and if they do not, then busy signals do. (America Online (AOL) learned this lesson when it shifted from an hourly to a flat-rate pricing plan.) Some sites on the web charge for access, as on-line services like AOL have for some time. Advertisers reward popular sites; online services drop unpopular forums. These behaviors are all a function of market constraints and market opportunity, and they all reflect the regulatory role of the market.[5]

Indeed, in a previous call for contributions, the Commission implicitly recognized the important role that competition plays, although it frames the subject primarily in terms of the problems that would arise if competition ceased to operate:

There is a risk of having a small number of big players becoming future gatekeepers of virtual worlds, creating market entry barriers and shutting out EU start-ups and SMEs from this emerging market. Such a closed ecosystem with the prevalence of proprietary systems can negatively affect the protection of personal information and data, the cybersecurity and the freedom and openness of virtual worlds at the same time.[6]

It is thus necessary to ask whether there is robust competition in the market for metaverse services. The short answer is a resounding yes.

A. Competition Without Tipping

While there is no precise definition of what constitutes a metaverse—much less a precise definition of the relevant market—available data suggests the space is highly competitive. This is evident in the fact that even a major global firm like Meta—having invested billions of dollars in its metaverse branch (and having rebranded the company accordingly)—has struggled to gain traction.[7]

Other major players in the space include the likes of Roblox, Fortnite, and Minecraft, which all have somewhere between 70 and 200 million active users.[8] This likely explains why Meta’s much-anticipated virtual world struggled to gain meaningful traction with consumers, stalling at around 300,000 active users.[9] Alongside these traditional players, there are also several decentralized platforms that are underpinned by blockchain technology. While these platforms have attracted massive investments, they are largely peripheral in terms of active users, with numbers often only in the low thousands.[10]

There are several inferences that can be drawn from these limited datasets. For one, it is clear that the metaverse industry is not yet fully mature. There are still multiple paradigms competing for consumer attention: game-based platforms versus social-network platforms; traditional platforms versus blockchain platforms, etc. In the terminology developed by David Teece, the metaverse industry has not yet reached a “paradigmatic” stage. It is fair to assume there is still significant scope for the entry of differentiated firms.[11]

It is also worth noting that metaverse competition does not appear to exhibit the same sort of network effects and tipping that is sometimes associated with more traditional social networks.[12] Despite competing for nearly a decade, no single metaverse project appears to be running away with the market.[13] This lack of tipping might be because these projects are highly differentiated.[14] It may also be due to the ease of multi-homing among them.[15]

More broadly, it is far from clear that competition will lead to a single metaverse for all uses. Different types of metaverse services may benefit from different user interfaces, graphics, and physics engines. This cuts in favor of multiple metaverses coexisting, rather than all services coordinating within a single ecosystem. Competition therefore appears likely to lead to the emergence of multiple differentiated metaverses, rather than a single winner.

Ultimately, competition in the metaverse industry is strong, and there is little sense that these markets are about to tip toward a single firm in the near future.

B. Competing for Consumer Trust

As alluded to in the previous subsection, the world’s largest and most successful metaverse entrants to date are traditional videogaming platforms that have various marketplaces and currencies attached.[16] In other words, decentralized virtual worlds built upon blockchain technology remain marginal.

This has important policy implications. The primary legal issues raised by metaverses are the same as those encountered on other digital marketplaces. This includes issues like minor fraud, scams, and children buying content without their parents’ authorization.[17] To the extent these harms are not adequately deterred by existing laws, metaverse platforms themselves have important incentives to police them. In turn, these incentives may be compounded by strong competition among platforms.

Metaverses are generally multi-sided platforms that bring together distinct groups of users, including consumers and content creators. In order to maximize the value of their ecosystems, platforms have an incentive to balance the interests of these distinct groups.[18] In practice, this will often mean offering consumers various forms of protection against fraud and scams and actively policing platforms’ marketplaces. As David Evans puts it:

But as with any community, there are numerous opportunities for people and businesses to create negative externalities, or engage in other bad behavior, that can reduce economic efficiency and, in the extreme, lead to the tragedy of the commons. Multi-sided platforms, acting selfishly to maximize their own profits, often develop governance mechanisms to reduce harmful behavior. They also develop rules to manage many of the same kinds of problems that beset communities subject to public laws and regulations. They enforce these rules through the exercise of property rights and, most importantly, through the “Bouncer’s Right” to exclude agents from some quantum of the platform, including prohibiting some agents from the platform entirely…[19]

While there is little economic research to suggest that competition directly increases hosts’ incentives to police their platforms, it stands to reason that doing so effectively can help platforms expand the appeal of their ecosystems. This is particularly important for metaverse services, whose userbases remain just a fraction of the size they could ultimately reach. While 100 or 200 million users already constitutes a vast ecosystem, it pales in comparison to the billions of users that “traditional” online platforms sometimes attract.

The bottom line is that the market for metaverses is growing. This likely compounds platforms’ incentives to weed out undesirable behavior, thereby complementing government efforts to achieve the same goal.

II. Opening Platforms or Opening Pandora’s Box?

In its call for contributions, the commission seems concerned that metaverse competition may lead to closed ecosystems that would be less beneficial to consumers than more open ones. But if this is indeed the commission’s fear, it is largely unfounded.

There are many benefits to closed ecosystems. Choosing the optimal degree of openness entails tradeoffs. At the very least, this suggests that policymakers should be careful not to assume that opening platforms up will systematically provide net benefits to consumers.

A. Antitrust Enforcement and Regulatory Initiatives

To understand why open (and weakly propertized) platforms are not always better for consumers, it is worth looking at past competition enforcement in the online space. Recent interventions by competition authorities have generally attempted (or are attempting) to move platforms toward more openness and less propertization. For their part, these platforms are already tremendously open (as the “platform” terminology implies) and attempt to achieve a delicate balance between centralization and decentralization.

Figure I: Directional Movement of Antitrust Intervention

The Microsoft cases and the Apple investigation both sought or seek to bring more openness and less propertization to those respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and to open its platform to rival media players and web browsers (more openness).[20] The same applies to Apple. Plaintiffs in private antitrust litigation brought in the United States[21] and government enforcement actions in Europe[22] are seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as to ensure that it cannot exclude rival mobile-payments solutions from its platform (more openness).

The various cases that were brought by EU and U.S. authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property.[23] The European Union’s Amazon investigation centers on the ways in which the company uses data from third-party sellers (and, ultimately, the distribution of revenue between those sellers and Amazon).[24] In both cases, authorities are ultimately trying to limit the extent to which firms can propertize their assets.

Finally, both of the EU’s Google cases sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals.[25] The separate Android decision sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing litigation brought by state attorneys general in the United States.[26]

Much of the same can be said of the numerous regulatory initiatives pertaining to digital markets. Indeed, draft regulations being contemplated around the globe mimic the features of the antitrust/competition interventions discussed above. For instance, it is widely accepted that Europe’s DMA effectively transposes and streamlines the enforcement of the theories of harm described above.[27] Similarly, several scholars have argued that the proposed American Innovation and Choice Online Act (“AICOA”) in the United States largely mimics European competition policy.[28] The legislation would ultimately require firms to open up their platforms, most notably by forcing them to treat rival services as they would their own and to make their services more interoperable with those rivals.[29]

What is striking about these decisions and investigations is the extent to which authorities are pushing back against the very features that distinguish the platforms they are investigating. Closed (or relatively closed) platforms are forced to open up, and firms with highly propertized assets are made to share them (or, at the very least, monetize them less aggressively).

B. The Empty Quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be vanishingly few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems in both the mobile and desktop segments. Most have ended in failure. Ubuntu and other flavors of the Linux operating system remain fringe products. There have been attempts to create open-source search engines, but they have not met with success.[30] The picture is similar in the online retail space. Amazon appears to have beaten eBay, despite the latter being more open and less propertized. Indeed, Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the ways in which they may sell their goods.[31]

This theme is repeated in the standardization space. There have been innumerable attempts to impose open, royalty-free standards. At least in the mobile-internet industry, few (if any) of these have taken off. Instead, proprietary standards such as 5G and WiFi have been far more successful. That pattern is repeated in other highly standardized industries, like digital-video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.[32]

Figure II: Open and Shared Platforms

This is not to say that there haven’t been any successful examples of open, royalty-free standards. Internet protocols, blockchain, and Wikipedia all come to mind. Nor does it mean that we will not see more decentralized goods in the future. But by and large, firms and consumers have not yet taken to the idea of fully open and shared platforms. Or, at least, those platforms have not yet achieved widespread success in the marketplace (potentially due to supply-side considerations, such as the difficulty of managing open platforms or the potentially lower returns to innovation in weakly propertized ones).[33] And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase in the blockchain space, or Android’s use of Linux).

C. Potential Explanations

The preceding section documented a recurring pattern: the digital platforms that competition authorities wish to bring into existence are fundamentally different from those that emerge organically. But why have authorities’ preferred platforms so far failed to achieve truly meaningful success?

Three potential explanations come to mind. First, “closed” and “propertized” platforms might systematically—and perhaps anticompetitively—thwart their “open” and “shared” rivals. Second, shared platforms might fail to persist (or grow pervasive) because they are much harder to monetize, and there is thus less incentive to invest in them. This is essentially a supply-side explanation. Finally, consumers might opt for relatively closed systems precisely because they prefer these platforms to marginally more open ones—i.e., a demand-side explanation.

In evaluating the first conjecture, the key question is whether successful “closed” and “propertized” platforms overcame their rivals before or after they achieved some measure of market dominance. If success preceded dominance, then anticompetitive foreclosure alone cannot explain the proliferation of the “closed” and “propertized” model.[34]

Many of today’s dominant platforms, however, overcame open/shared rivals well before they achieved their current size. It is thus difficult to make the case that the early success of their business models was due to anticompetitive behavior. This is not to say these business models cannot raise antitrust issues, but rather that anticompetitive behavior is not a good explanation for their emergence.

Both the second and third conjectures essentially ask whether “closed” and “propertized” platforms might be better adapted to their environment than their “open” and “shared” rivals.

In that respect, it is not unreasonable to surmise that highly propertized platforms are generally easier to monetize than shared ones. Monetizing open-source platforms, for example, often requires relying on complementarities, which tend to be vulnerable to outside competition and free riding.[35] Firms thus have a natural incentive to invest and innovate in more propertized environments. In turn, competition enforcement that limits a platform’s ability to propertize its assets may harm innovation.

Similarly, authorities should reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design. The European Commission, for example, has a long track record of seeking to open up digital platforms, notably by requiring that platform owners not preinstall their own web browsers (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business model the commission reprimanded, rather than the “pro-consumer” model it sought to impose on the industry. Apple tied the Safari browser to its iPhones; Google went to some lengths to ensure that Chrome was preloaded on devices; and Samsung phones ship with Samsung Internet as the default browser.[36] None of this has ostensibly steered consumers away from those platforms.

Along similar lines, a sizable share of consumers opt for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s MacOS). In other words, it is hard to claim that opening platforms is inherently good for consumers when those same consumers routinely opt for platforms with the very features that policymakers are trying to eliminate.

Finally, it is worth noting that the remedies imposed by competition authorities have been anything but successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unmitigated flop, selling a paltry 1,787 copies.[37] Likewise, the internet-browser “ballot box” imposed by the commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the commission’s decision.[38]

One potential inference is that consumers do not value competition interventions that make dominant ecosystems marginally more open and less propertized. There are also many reasons why consumers might prefer “closed” systems (at least, relative to the model favored by many policymakers), even when they must pay a premium for them.

Take the example of app stores. Maintaining some control over the apps that can access the store enables platforms to easily weed out bad actors. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. Indeed, it may be that a measure of control facilitates the very innovations that consumers demand. Therefore, “authorities and courts should not underestimate the indispensable role control plays in achieving coordination and coherence in the context of systemic efficiencies. Without it, the attempted novelties and strategies might collapse under their own complexity.”[39]

Relatively centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers.[40] This is especially true because consumers tend to attribute dips in performance to the overall platform, rather than to a particular app.[41] At the same time, centralized platforms can take advantage of positive externalities to improve the quality of the overall platform.

And it is surely the case that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple and an Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome browser and Google Search. Absent false information at the time of the initial platform decision, this decision will effectively incorporate expectations about subsequent constraints.[42]

Furthermore, forcing users to make too many “within-platform” choices may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different.[43] In short, contrary to what antitrust authorities appear to believe, closed platforms might give most users exactly what they desire.

All of this suggests that consumers and firms often gravitate spontaneously toward both closed and highly propertized platforms, the opposite of what the commission and other competition authorities tend to favor. The reasons for this pattern remain poorly understood and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. Instead, what some regard as “market failures” may in fact be features that explain the rapid emergence of the digital economy.

When considering potential policy reforms targeting the metaverse, policymakers would be wrong to assume openness (notably, in the form of interoperability) and weak propertization are always objectively superior. Instead, these platform designs entail important tradeoffs. Closed metaverse ecosystems may lead to higher consumer safety and better performance, while interoperable systems may reduce the frictions consumers face when moving from one service to another. There is little reason to believe policymakers are in a better position to weigh these tradeoffs than consumers, who vote with their virtual feet.

III. Conclusion: Competition Intervention Would Be Premature

A final important argument against intervening today is that the metaverse industry is nowhere near mature. Tomorrow’s competition-related challenges and market failures might not be the same as today’s. This makes it exceedingly difficult for policymakers to design appropriate remedies and increases the risk that intervention might harm innovation.

As of 2023, the entire metaverse industry (both hardware and software) is estimated to be worth somewhere in the vicinity of $80 billion, and projections suggest this could grow by a factor of 10 by 2030.[44] Growth projections of this sort are notoriously unreliable. But in this case, they do suggest there is some consensus that the industry is not fully fledged.

Along similar lines, it remains unclear what types of metaverse services will gain the most traction with consumers, what sorts of hardware consumers will use to access these services, and what technologies will underpin the most successful metaverse platforms. In fact, it is still an open question whether the metaverse industry will foster any services that achieve widespread consumer adoption in the foreseeable future.[45] In other words, it is not exactly clear what metaverse products and services the commission should focus on in the first place.

Given these uncertainties, competition intervention in the metaverse appears premature. Intervening so early in the industry’s life cycle is like aiming at a moving target: ensuing remedies might become irrelevant before they have any influence on the products that firms develop. More worryingly, acting now signals that the metaverse industry will be subject to heightened regulatory scrutiny for the foreseeable future. This may deter large platforms from investing in the European market, and it may funnel venture-capital investment away from the European continent.

Competition intervention in burgeoning industries is no free lunch. The best evidence concerning these potential costs comes from the GDPR. While privacy regulation is obviously not the same as competition law, the evidence concerning the GDPR suggests that heavy-handed intervention may, at least in some instances, slow down innovation and reduce competition.

The most-cited empirical evidence on the GDPR’s effects comes from a paper by Garrett Johnson and co-authors, who link the GDPR to increased market concentration, particularly in the short term:

We show that websites’ vendor use falls after the European Union’s (EU’s) General Data Protection Regulation (GDPR), but that market concentration also increases among technology vendors that provide support services to websites…. The week after the GDPR’s enforcement, website use of web technology vendors falls by 15% for EU residents. Websites are relatively more likely to retain top vendors, which increases the concentration of the vendor market by 17%. Increased concentration predominantly arises among vendors that use personal data, such as cookies, and from the increased relative shares of Facebook and Google-owned vendors, but not from website consent requests. Although the aggregate changes in vendor use and vendor concentration dissipate by the end of 2018, we find that the GDPR impact persists in the advertising vendor category most scrutinized by regulators.[46]

Along similar lines, an NBER working paper by Jian Jia and co-authors finds that enactment of the GDPR markedly reduced venture-capital investments in Europe:

Our findings indicate a negative differential effect on EU ventures after the rollout of GDPR relative to their US counterparts. These negative effects manifest in the overall number of financing rounds, the overall dollar amount raised across rounds, and in the dollar amount raised per individual round. Specifically, our findings suggest a $3.38 million decrease in the aggregate dollars raised by EU ventures per state per crude industry category per week, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised in an average deal following the rollout of GDPR.[47]

In another paper, Samuel Goldberg and co-authors find that the GDPR led to a roughly 12% reduction in website pageviews and e-commerce revenue in Europe.[48] Finally, Rebecca Janssen and her co-authors show that the GDPR decreased the number of apps offered on Google’s Play Store between 2016 and 2019:

Using data on 4.1 million apps at the Google Play Store from 2016 to 2019, we document that GDPR induced the exit of about a third of available apps; and in the quarters following implementation, entry of new apps fell by half.[49]

Of course, the body of evidence concerning the GDPR’s effects is not entirely unambiguous. For example, Rajkumar Venkatesan and co-authors find that the GDPR had mixed effects on the returns of different types of firms.[50] Other papers find similarly mixed effects.[51]

Ultimately, the empirical literature concerning the effects of the GDPR shows that regulation—in this case, privacy protection—is no free lunch. Of course, this does not mean that competition intervention targeting the metaverse would necessarily have these same effects. But in the absence of a clear market failure to solve, it is unclear why policymakers should run such a risk in the first place.

In the end, competition intervention in the metaverse is unlikely to be costless. The metaverse is still in its infancy, regulation could deter essential innovation, and the commission has thus far failed to identify any serious market failures that warrant public intervention. The result is that the commission’s call for contributions appears premature or, in other words, that the commission is putting the meta-cart before the meta-horse.

 

[1] Competition in Virtual Worlds and Generative AI – Calls for contributions, European Commission (Jan. 9, 2024) https://competition-policy.ec.europa.eu/document/download/e727c66a-af77-4014-962a-7c9a36800e2f_en?filename=20240109_call-for-contributions_virtual-worlds_and_generative-AI.pdf (hereafter, “Call for Contributions”).

[2] Jonathan Vanian, Meta’s Reality Labs Records $3.99 Billion Quarterly Loss as Zuckerberg Pumps More Cash into Metaverse, CNBC (Apr. 26, 2023), https://www.cnbc.com/2023/04/26/metas-reality-labs-unit-records-3point99-billion-first-quarter-loss-.html.

[3] Alan Truly, Horizon Worlds Leak: Only 1 in 10 Users Return & Web Launch Is Coming, Mixed News (Mar. 3, 2023), https://mixed-news.com/en/horizon-worlds-leak-only-1-in-10-users-return-web-launch-coming; Kevin Hurler, Hey Fellow Kids: Meta Is Revamping Horizon Worlds to Attract More Teen Users, Gizmodo (Feb. 7, 2023), https://gizmodo.com/meta-metaverse-facebook-horizon-worlds-vr-1850082068; Emma Roth, Meta’s Horizon Worlds VR Platform Is Reportedly Struggling to Keep Users, The Verge (Oct. 15, 2022),
https://www.theverge.com/2022/10/15/23405811/meta-horizon-worlds-losing-users-report; Paul Tassi, Meta’s ‘Horizon Worlds’ Has Somehow Lost 100,000 Players in Eight Months, Forbes, (Oct. 17, 2022), https://www.forbes.com/sites/paultassi/2022/10/17/metas-horizon-worlds-has-somehow-lost-100000-players-in-eight-months/?sh=57242b862a1b.

[4] Call for Contributions, supra note 1. (“6) Do you expect the technology incorporated into Virtual World platforms, enabling technologies of Virtual Worlds and services based on Virtual Worlds to be based mostly on open standards and/or protocols agreed through standard-setting organisations, industry associations or groups of companies, or rather the use of proprietary technology?”).

[5] Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999).

[6] Virtual Worlds (Metaverses) – A Vision for Openness, Safety and Respect, European Commission, https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13757-Virtual-worlds-metaverses-a-vision-for-openness-safety-and-respect/feedback_en?p_id=31962299H.

[7] Catherine Thorbecke, What Metaverse? Meta Says Its Single Largest Investment Is Now in ‘Advancing AI’, CNN Business (Mar. 15, 2023), https://www.cnn.com/2023/03/15/tech/meta-ai-investment-priority/index.html; Ben Marlow, Mark Zuckerberg’s Metaverse Is Shattering into a Million Pieces, The Telegraph (Apr. 23, 2023), https://www.telegraph.co.uk/business/2023/04/21/mark-zuckerbergs-metaverse-shattering-million-pieces; Will Gendron, Meta Has Reportedly Stopped Pitching Advertisers on the Metaverse, BusinessInsider (Apr. 18, 2023), https://www.businessinsider.com/meta-zuckerberg-stopped-pitching-advertisers-metaverse-focus-reels-ai-report-2023-4.

[8] Mansoor Iqbal, Fortnite Usage and Revenue Statistics, Business of Apps (Jan. 9, 2023), https://www.businessofapps.com/data/fortnite-statistics; Matija Ferjan, 76 Little-Known Metaverse Statistics & Facts (2023 Data), Headphones Addict (Feb. 13, 2023), https://headphonesaddict.com/metaverse-statistics.

[9] James Batchelor, Meta’s Flagship Metaverse Horizon Worlds Struggling to Attract and Retain Users, Games Industry (Oct. 17, 2022), https://www.gamesindustry.biz/metas-flagship-metaverse-horizon-worlds-struggling-to-attract-and-retain-users; Ferjan, id.

[10] Richard Lawler, Decentraland’s Billion-Dollar ‘Metaverse’ Reportedly Had 38 Active Users in One Day, The Verge (Oct. 13, 2022), https://www.theverge.com/2022/10/13/23402418/decentraland-metaverse-empty-38-users-dappradar-wallet-data; The Sandbox, DappRadar, https://dappradar.com/multichain/games/the-sandbox (last visited May 3, 2023); Decentraland, DappRadar, https://dappradar.com/multichain/social/decentraland (last visited May 3, 2023).

[11] David J. Teece, Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy, 15 Research Policy 285-305 (1986), https://www.sciencedirect.com/science/article/abs/pii/0048733386900272.

[12] Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1279 (2021).

[13] Roblox, Wikipedia, https://en.wikipedia.org/wiki/Roblox (last visited May 3, 2023); Minecraft, Wikipedia, https://en.wikipedia.org/wiki/Minecraft (last visited May 3, 2023); Fortnite, Wikipedia, https://en.wikipedia.org/wiki/Fortnite (last visited May 3, 2023); see Fiza Chowdhury, Minecraft vs Roblox vs Fortnite: Which Is Better?, Metagreats (Feb. 20, 2023), https://www.metagreats.com/minecraft-vs-roblox-vs-fortnite.

[14] Marc Rysman, The Economics of Two-Sided Markets, 23 J. Econ. Perspectives 134 (2009) (“First, if standards can differentiate from each other, they may be able to successfully coexist (Chou and Shy, 1990; Church and Gandal, 1992). Arguably, Apple and Microsoft operating systems have both survived by specializing in different markets: Microsoft in business and Apple in graphics and education. Magazines are an obvious example of platforms that differentiate in many dimensions and hence coexist.”).

[15] Id. at 134 (“Second, tipping is less likely if agents can easily use multiple standards. Corts and Lederman (forthcoming) show that the fixed cost of producing a video game for one more standard have reduced over time relative to the overall fixed costs of producing a game, which has led to increased distribution of games across multiple game systems (for example, PlayStation, Nintendo, and Xbox) and a less-concentrated game system market.”).

[16] What Are Fortnite, Roblox, Minecraft and Among Us? A Parent’s Guide to the Most Popular Online Games Kids Are Playing, FTC Business (Oct. 5, 2021), https://www.ftc.net/blog/what-are-fortnite-roblox-minecraft-and-among-us-a-parents-guide-to-the-most-popular-online-games-kids-are-playing; Jay Peters, Epic Is Merging Its Digital Asset Stores into One Huge Marketplace, The Verge (Mar. 22, 2023), https://www.theverge.com/2023/3/22/23645601/epic-games-fab-asset-marketplace-state-of-unreal-2023-gdc.

[17] Luke Winkie, Inside Roblox’s Criminal Underworld, Where Kids Are Scamming Kids, IGN (Jan. 2, 2023), https://www.ign.com/articles/inside-robloxs-criminal-underworld-where-kids-are-scamming-kids; Fake Minecraft Updates Pose Threat to Users, Tribune (Sept. 11, 2022), https://tribune.com.pk/story/2376087/fake-minecraft-updates-pose-threat-to-users; Ana Diaz, Roblox and the Wild West of Teenage Scammers, Polygon (Aug. 24, 2019) https://www.polygon.com/2019/8/24/20812218/roblox-teenage-developers-controversy-scammers-prison-roleplay; Rebecca Alter, Fortnite Tries Not to Scam Children and Face $520 Million in FTC Fines Challenge, Vulture (Dec. 19, 2022), https://www.vulture.com/2022/12/fortnite-epic-games-ftc-fines-privacy.html; Leonid Grustniy, Swindle Royale: Fortnite Scammers Get Busy, Kaspersky Daily (Dec. 3, 2020), https://www.kaspersky.com/blog/top-four-fortnite-scams/37896.

[18] See, generally, David Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms (Harvard Business Review Press, 2016).

[19] David S. Evans, Governing Bad Behaviour By Users of Multi-Sided Platforms, 27 Berkeley Technology Law Journal 1201 (2012).

[20] See Case COMP/C-3/37.792, Microsoft, OJ L 32 (May 24, 2004). See also, Case COMP/39.530, Microsoft (Tying), OJ C 120 (Apr. 26, 2013).

[21] See Complaint, Epic Games, Inc. v. Apple Inc., 493 F. Supp. 3d 817 (N.D. Cal. 2020) (4:20-cv-05640-YGR).

[22] See European Commission Press Release IP/20/1073, Antitrust: Commission Opens Investigations into Apple’s App Store Rules (Jun. 16, 2020); European Commission Press Release IP/20/1075, Antitrust: Commission Opens Investigation into Apple Practices Regarding Apple Pay (Jun. 16, 2020).

[23] See European Commission Press Release IP/18/421, Antitrust: Commission Fines Qualcomm €997 Million for Abuse of Dominant Market Position (Jan. 24, 2018); Federal Trade Commission v. Qualcomm Inc., 969 F.3d 974 (9th Cir. 2020).

[24] See European Commission Press Release IP/19/4291, Antitrust: Commission Opens Investigation into Possible Anti-Competitive Conduct of Amazon (Jul. 17, 2019).

[25] See Case AT.39740, Google Search (Shopping), 2017 E.R.C. I-379. See also, Case AT.40099 (Google Android), 2018 E.R.C.

[26] See Complaint, United States v. Google, LLC, (2020), https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws; see also, Complaint, Colorado et al. v. Google, LLC, (2020), available at https://coag.gov/app/uploads/2020/12/Colorado-et-al.-v.-Google-PUBLIC-REDACTED-Complaint.pdf.

[27] See, e.g., Giorgio Monti, The Digital Markets Act: Institutional Design and Suggestions for Improvement, Tillburg L. & Econ. Ctr., Discussion Paper No. 2021-04 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3797730 (“In sum, the DMA is more than an enhanced and simplified application of Article 102 TFEU: while the obligations may be criticised as being based on existing competition concerns, they are forward-looking in trying to create a regulatory environment where gatekeeper power is contained and perhaps even reduced.”) (Emphasis added).

[28] See, e.g., Aurelien Portuese, “Please, Help Yourself”: Toward a Taxonomy of Self-Preferencing, Information Technology & Innovation Foundation (Oct. 25, 2021), available at https://itif.org/sites/default/files/2021-self-preferencing-taxonomy.pdf. (“The latest example of such weaponization of self-preferencing by antitrust populists is provided by Sens. Amy Klobuchar (D-MN) and Chuck Grassley (R-IA). They introduced legislation in October 2021 aimed at prohibiting the practice.2 However, the legislation would ban self-preferencing only for a handful of designated companies—the so-called “covered platforms,” not the thousands of brick-and-mortar sellers that daily self-preference for the benefit of consumers. Mimicking the European Commission’s Digital Markets Act prohibiting self-preferencing, Senate and the House bills would degrade consumers’ experience and undermine competition, since self-preferencing often benefits consumers and constitutes an integral part, rather than an abnormality, of the process of competition.”).

[29] Efforts to saddle platforms with “non-discrimination” constraints are tantamount to mandating openness. See Geoffrey A. Manne, Against the Vertical Discrimination Presumption, Foreword, Concurrences No. 2-2020 (2020) at 2 (“The notion that platforms should be forced to allow complementors to compete on their own terms, free of constraints or competition from platforms is a species of the idea that platforms are most socially valuable when they are most ‘open.’ But mandating openness is not without costs, most importantly in terms of the effective operation of the platform and its own incentives for innovation.”).

[30] See, e.g., Klint Finley, Your Own Private Google: The Quest for an Open Source Search Engine, Wired (Jul. 12, 2021), https://www.wired.com/2012/12/solar-elasticsearch-google.

[31] See Brian Connolly, Selling on Amazon vs. eBay in 2021: Which Is Better?, JungleScout (Jan. 12, 2021), https://www.junglescout.com/blog/amazon-vs-ebay; Crucial Differences Between Amazon and eBay, SaleHOO, https://www.salehoo.com/educate/selling-on-amazon/crucial-differences-between-amazon-and-ebay (last visited Feb. 8, 2021).

[32] See, e.g., Dolby Vision Is Winning the War Against HDR10 +, It Requires a Single Standard, Tech Smart, https://voonze.com/dolby-vision-is-winning-the-war-against-hdr10-it-requires-a-single-standard (last visited June 6, 2022).

[33] On the importance of managers, see, e.g., Nicolai J Foss & Peter G Klein, Why Managers Still Matter, 56 MIT Sloan Mgmt. Rev., 73 (2014) (“In today’s knowledge-based economy, managerial authority is supposedly in decline. But there is still a strong need for someone to define and implement the organizational rules of the game.”).

[34] It is generally agreed upon that anticompetitive foreclosure is possible only when a firm enjoys some degree of market power. Frank H. Easterbrook, Limits of Antitrust, 63 Tex. L. Rev. 1, 20 (1984) (“Firms that lack power cannot injure competition no matter how hard they try. They may injure a few consumers, or a few rivals, or themselves (see (2) below) by selecting ‘anticompetitive’ tactics. When the firms lack market power, though, they cannot persist in deleterious practices. Rival firms will offer the consumers better deals. Rivals’ better offers will stamp out bad practices faster than the judicial process can. For these and other reasons many lower courts have held that proof of market power is an indispensable first step in any case under the Rule of Reason. The Supreme Court has established a market power hurdle in tying cases, despite the nominally per se character of the tying offense, on the same ground offered here: if the defendant lacks market power, other firms can offer the customer a better deal, and there is no need for judicial intervention.”).

[35] See, e.g., Josh Lerner & Jean Tirole, Some Simple Economics of Open Source, 50 J. Indus. Econ. 197 (2002).

[36] See Matthew Miller, Thanks, Samsung: Android’s Best Mobile Browser Now Available to All, ZDNet (Aug. 11, 2017), https://www.zdnet.com/article/thanks-samsung-androids-best-mobile-browser-now-available-to-all.

[37] FACT SHEET: Windows XP N Sales, RegMedia (Jun. 12, 2009), available at https://regmedia.co.uk/2009/06/12/microsoft_windows_xp_n_fact_sheet.pdf.

[38] See Case COMP/39.530, Microsoft (Tying), OJ C 120 (Apr. 26, 2013).

[39] Konstantinos Stylianou, Systemic Efficiencies in Competition Law: Evidence from the ICT Industry, 12 J. Competition L. & Econ. 557 (2016).

[40] See, e.g., Steven Sinofsky, The App Store Debate: A Story of Ecosystems, Medium (Jun. 21, 2020), https://medium.learningbyshipping.com/the-app-store-debate-a-story-of-ecosystems-938424eeef74.

[41] Id.

[42] See, e.g., Benjamin Klein, Market Power in Aftermarkets, 17 Managerial & Decision Econ. 143 (1996).

[43] See, e.g., Simon Hill, What Is Android Fragmentation, and Can Google Ever Fix It?, DigitalTrends (Oct. 31, 2018), https://www.digitaltrends.com/mobile/what-is-android-fragmentation-and-can-google-ever-fix-it.

[44] Metaverse Market Revenue Worldwide from 2022 to 2030, Statista, https://www.statista.com/statistics/1295784/metaverse-market-size (last visited May 3, 2023); Metaverse Market by Component (Hardware, Software (Extended Reality Software, Gaming Engine, 3D Mapping, Modeling & Reconstruction, Metaverse Platform, Financial Platform), and Professional Services), Vertical and Region – Global Forecast to 2027, Markets and Markets (Apr. 27, 2023), https://www.marketsandmarkets.com/Market-Reports/metaverse-market-166893905.html; see also, Press Release, Metaverse Market Size Worth $ 824.53 Billion, Globally, by 2030 at 39.1% CAGR, Verified Market Research (Jul. 13, 2022), https://www.prnewswire.com/news-releases/metaverse-market-size-worth–824-53-billion-globally-by-2030-at-39-1-cagr-verified-market-research-301585725.html.

[45] See, e.g., Megan Farokhmanesh, Will the Metaverse Live Up to the Hype? Game Developers Aren’t Impressed, Wired (Jan. 19, 2023), https://www.wired.com/story/metaverse-video-games-fortnite-zuckerberg; see also Mitch Wagner, The Metaverse Hype Bubble Has Popped. What Now?, Fierce Electronics (Feb. 24, 2023), https://www.fierceelectronics.com/embedded/metaverse-hype-bubble-has-popped-what-now.

[46] Garrett A. Johnson, et al., Privacy and Market Concentration: Intended and Unintended Consequences of the GDPR, Forthcoming Management Science 1 (2023).

[47] Jian Jia, et al., The Short-Run Effects of GDPR on Technology Venture Investment, NBER Working Paper 25248, 4 (2018), available at https://www.nber.org/system/files/working_papers/w25248/w25248.pdf.

[48] Samuel G. Goldberg, Garrett A. Johnson, & Scott K. Shriver, Regulating Privacy Online: An Economic Evaluation of GDPR (2021), available at https://www.ftc.gov/system/files/documents/public_events/1588356/johnsongoldbergshriver.pdf.

[49] Rebecca Janßen, Reinhold Kesler, Michael Kummer, & Joel Waldfogel, GDPR and the Lost Generation of Innovative Apps, NBER Working Paper 30028, 2 (2022), available at https://www.nber.org/system/files/working_papers/w30028/w30028.pdf.

[50] Rajkumar Venkatesan, S. Arunachalam & Kiran Pedada, Short Run Effects of Generalized Data Protection Act on Returns from AI Acquisitions, University of Virginia Working Paper 6 (2022), available at: https://conference.nber.org/conf_papers/f161612.pdf. (“On average, GDPR exposure reduces the ROA of firms. We also find that GDPR exposure increases the ROA of firms that make AI acquisitions for improving customer experience, and cybersecurity. Returns on AI investments in innovation and operational efficiencies are unaffected by GDPR.”)

[51] For a detailed discussion of the empirical literature concerning the GDPR, see Garrett Johnson, Economic Research on Privacy Regulation: Lessons From the GDPR And Beyond, NBER Working Paper 30705 (2022), available at https://www.nber.org/system/files/working_papers/w30705/w30705.pdf.


ICLE Comments to European Commission on AI Competition


Executive Summary

We thank the European Commission for launching this consultation on competition in generative AI. The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

In our comments, we express concern that policymakers may equate the rapid rise of generative AI services with a need to intervene in these markets when, in fact, the opposite is true. As we explain, the rapid growth of AI markets, as well as the fact that new market players are thriving, suggests competition is intense. If incumbent firms could easily leverage their dominance into burgeoning generative AI markets, we would not have seen the growth of generative AI unicorns such as OpenAI, Midjourney, and Anthropic, to name but a few.

Of course, this is not to say that generative AI markets are not important—quite the opposite. Generative AI is already changing the ways that many firms do business and improving employee productivity in many industries.[1] The technology is also increasingly useful in the field of scientific research, where it has enabled the creation of complex models that expand scientists’ reach.[2] Against this backdrop, Commissioner Margrethe Vestager was right to point out that it “is fundamental that these new markets stay competitive, and that nothing stands in the way of businesses growing and providing the best and most innovative products to consumers.”[3]

But while sensible enforcement is of vital importance to maintain competition and consumer welfare, knee-jerk reactions may yield the opposite outcomes. As our comments explain, overenforcement in the field of generative AI could cause the very harms that policymakers seek to avert. For instance, preventing so-called “big tech” firms from competing in these markets (for example, by threatening competition intervention as soon as they embed generative AI services in their ecosystems or seek to build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading generative-AI firms in check. In short, competition in AI markets is important, but trying naïvely to hold incumbent tech firms back out of misguided fears they will come to dominate this space is likely to do more harm than good.

Our comment proceeds as follows. Section I summarizes recent calls for competition intervention in generative AI markets. Section II argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”). Section III explains why these effects are unlikely to play a meaningful role in generative-AI markets. Section IV concludes by offering five key takeaways to help policymakers (including the Commission) better weigh the tradeoffs inherent to competition intervention in generative-AI markets.

I. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position.”[4] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[5]

While some of these claims continue even today (for example, “big data” is a key component of the U.S. Justice Department’s (DOJ) Google Search and adtech antitrust suits),[6] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022, as well as the advent of AI image-generation services like Midjourney and Dall-E, have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain in the early stages of mainstream adoption and remain in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that were purportedly made during the formative years of Web 2.0.[7] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[8] As Lina Khan, chair of the U.S. Federal Trade Commission (FTC), put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI”.[9]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Instead of reassessing their prior assumptions in light of the current technological moment, enforcers’ top priority appears to be figuring out how to deploy existing competition tools, rapidly and almost reflexively, against the presumed competitive failures presented by generative AI.[10]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[11]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in an ongoing consultation, the European Commission asks: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[12] Unsurprisingly, the FTC has likewise been bullish about the risks posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[13]

Certainly, it stands to reason that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for generative-AI services. After all, it is widely recognized that data is an essential input for generative AI.[14] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind and AlphaGo, as well as Meta’s AI research efforts, have routinely made headlines.[15] Apple and Amazon also have vast experience with AI assistants, and all of these firms use AI technology throughout their platforms.[16]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast data troves to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[17] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

In these comments, we suggest that there are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have largely focused on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

II. Data-Network Effects Theory and Enforcement

Proponents of tougher interventions by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[18] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[19] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, for example, that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[20]

Right off the bat, it is important to note the conceptual problem these claims face. Because data can be used to improve the quality of products and/or to subsidize their use, the idea of data as an entry barrier suggests that any product improvement or price reduction made by an incumbent could be a problematic entry barrier to any new entrant. This is tantamount to an argument that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if competition were treated as a problem, as it would imply that firms should under-compete—i.e., should forego consumer-welfare enhancements—simply to foster a greater number of firms in a given market for its own sake.[21]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[22] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[23] The authors ultimately conclude that data-network effects can be of different magnitudes and have varying effects on firms’ incumbency advantage.[24] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[25]

This is echoed by other economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[26] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[27]

This possibility is also implicit in Hagiu and Wright’s paper.[28] Indeed, the authors’ theoretical model rests on an important distinction between within-user data advantages (that is, having access to more data about a given user) and across-user data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have, however, been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[29]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents,” the “Furman Report” created for the UK government cited only two empirical economic studies, and they offer directly contradictory conclusions with respect to the question of the strength of data advantages.[30] Nevertheless, the Furman Report concludes that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[31] and adopts without reservation “convincing” evidence from non-economists that has no apparent empirical basis.[32]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[33]

As a result, the Commission cleared the merger on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[34] The Commission will likely focus on similar issues during its ongoing investigation of Microsoft’s investment into OpenAI.[35]

Along similar lines, the FTC’s complaint to enjoin Meta’s purchase of a virtual-reality (VR) fitness app called “Within” relied, among other things, on the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[36]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s adtech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[37] Similarly, in its search complaint, the agency argues that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[38]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s newly published guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[39] Likewise, the UK Competition and Markets Authority (CMA) warns against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[40]

In short, competition authorities around the globe have been taking an increasingly aggressive stance on data-network effects. Among the ways this has manifested is in basing enforcement decisions on fears that data collected by one platform might confer a decisive competitive advantage in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

III. Data-Incumbency Advantages in Generative-AI Markets

Given the assertions canvassed in the previous section, it would be reasonable to assume that firms such as Google, Meta, and Amazon should be in pole position to dominate the burgeoning market for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data, the likes of which their rivals could only have dreamed of when they launched their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[41]

To date, however, this is not how things have unfolded—although it bears noting these markets remain in flux and the competitive landscape is susceptible to change. The first significantly successful generative-AI service was arguably not from either Meta—which had been working on chatbots for years and had access to, arguably, the world’s largest database of actual chats—or Google. Instead, the breakthrough came from a previously unknown firm called OpenAI.

OpenAI’s ChatGPT service currently holds an estimated 60% of the market (though reliable numbers are somewhat elusive).[42] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than the previous record holder, TikTok.[43] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[44] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[45] In short, at the time we are writing, ChatGPT appears to be the most popular chatbot. The entry of large players such as Google Bard or Meta AI appears to have had little effect thus far on its market position.[46]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[47] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[48]

This raises several crucial questions: how have these AI upstarts managed to be so successful, and is their success just a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either of these questions dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[49]

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[50]

In other words, being the firm with the most data appears to be far less important than having enough data. This lower bar may be accessible to far more firms than one might initially think possible. And obtaining enough data could become even easier—that is, the volume of required data could become even smaller—with technological progress. For instance, synthetic data may provide an adequate substitute for real-world data,[51] or may even outperform real-world data.[52] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[53]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, where other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[54]

Consider, for instance, a user who wants to generate an image of a basketball. Using a model trained on an indiscriminate range and number of public photos in which a basketball appears surrounded by copious other image data, the user may end up with an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images, could readily yield far superior results.[55] In one important example:

[t]he model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[56]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than maximizing training datasets.[57] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[58] Second, the real challenge to create cutting-edge AI is not so much in collecting data, but rather in creating innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[59]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be those data held by large incumbent platforms in other markets, but rather data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[60]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data may often confer marginal benefits, there is little sense these are ultimately decisive.[61] As a result, incumbent platforms’ access to vast numbers of users and data in their primary markets might only marginally affect their AI competitiveness.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[62] Examples of this abound in digital markets. Google overthrew Yahoo, despite initially having access to far fewer users and far less data; Google and Apple overcame Microsoft in the smartphone OS market despite having comparatively tiny ecosystems (at the time) to leverage; and TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger user bases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[63] and TikTok’s clever algorithm) appear to have played a far more significant role than initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than what data it did (or did not) own. Going forward, OpenAI and its rivals’ ability to offer and monetize compelling marketplaces for custom versions of their generative-AI technology will arguably play a much larger role than (and contribute to) their ownership of data.[64] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, but not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data).

For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[65] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to extract and organize the right information is more important than simply owning vast troves of data.[66] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform.

Indeed, if data ownership consistently conferred a significant competitive advantage, these new firms would not be where they are today. This does not mean that data is worthless, of course. Rather, it means that competition authorities should not assume that the mere possession of data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm are premature.

IV. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data (and the network effects it allegedly generates) is not the source of barriers to entry it is sometimes made out to be. The picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[67]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five main lessons emerge:

  1. Data can be (very) valuable, but beyond a certain threshold, those benefits tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps effects stemming from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for competition-policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, Dall-E, Midjourney, etc.

This failure suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma,”[68] something about the incumbent platforms’ existing services and capabilities was holding them back in those markets. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI market starts addressing issues of monetization and scale.[69] But it does mean that assumptions about a firm’s market power based on its possession of data are off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that the most significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms—although nothing is certain at this stage. Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), it is far from certain that the remedies being contemplated would do more good than harm. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative-AI markets.[70] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[71]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that leads competition authorities to believe that data-incumbency advantages are likely to harm competition in generative AI markets—or even in the data-intensive Web 2.0 markets that preceded them. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

 

[1] See, e.g., Michael Chui, et al., The Economic Potential of Generative AI: The Next Productivity Frontier, McKinsey (Jun. 14, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-AI-the-next-productivity-frontier.

[2] See, e.g., Zhuoran Qiao, Weili Nie, Arash Vahdat, Thomas F. Miller III, & Animashree Anandkumar, State-Specific Protein–Ligand Complex Structure Prediction with a Multiscale Deep Generative Model, 6 Nature Machine Intelligence, 195-208 (2024); see also, Jaemin Seo, Sang Kyeun Kim, Azarakhsh Jalalvand, Rory Conlin, Andrew Rothstein, Joseph Abbate, Keith Erickson, Josiah Wai, Ricardo Shousha, & Egemen Kolemen, Avoiding Fusion Plasma Tearing Instability with Deep Reinforcement Learning, 626 Nature, 746-751 (2024).

[3] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (Jan. 9, 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85.

[4] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[5] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23 (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors.”).

[6] See, generally, Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010 (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[7] See, e.g., Press Release, European Commission, supra note 3; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (Nov. 3, 2023), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[8] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”); Bertin Martins, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[9] See Rana Foroohar, The Great US-Europe Antitrust Divide, FT (Feb. 5, 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14.

[10] See, e.g., Press Release, European Commission, supra note 3.

[11] See infra, Section II. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has LLaMa. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[12] Press Release, European Commission, supra note 3.

[13] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023), at 4, https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[14] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (Sep. 15, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[15] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; see also, Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (Dec. 14, 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set; see also, A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research; see also, 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation (last visited Jan. 18, 2024).

[16] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (Oct. 4, 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (Nov. 20, 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; see also, Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (July 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon.

[17] See infra Section III.

[18] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[19] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233, https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, http://wrap.warwick.ac.uk/134220 (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[20] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[21] See also Yun, supra note 19, at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[22] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects) see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1281, 1344 (2021).

[23] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[24] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms…”.

[25] Id.

[26] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 10080, 101031 (2021).

[27] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Econ. 143, 167 (2020).

[28] See Hagiu & Wright, supra note 23.

[29] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf; see also Manne & Auer, supra note 22, at 1330.

[30] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[31] Id. at 34.

[32] Id. at 35. To its credit, it should be noted, the Furman Report does counsel caution before mandating access to data as a remedy to promote competition. See id. at 75. That said, the Furman Report does maintain that such a remedy should certainly be on the table because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. In fact, the evidence does not show this.

[33] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf at 455.

[34] Id. at 896.

[35] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules.

[36] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[37] Amended Complaint (D.D.C), supra note 6 at ¶37.

[38] Amended Complaint (E.D. Va), supra note 6 at ¶8.

[39] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[40] Merger Assessment Guidelines, Competition and Mkts. Auth. (2021) at ¶7.19(e), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[41] Furman Report, supra note 30, at ¶4.

[42] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c.

[43] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[44] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[45] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (Jun. 5, 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard.

[46] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (Sep. 27, 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023), https://blog.google/technology/ai/bard-google-ai-search-updates.

[47] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023), https://yon.fun/midjourney-statistics; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023), https://approachableai.com/midjourney-statistics.

[48] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023), https://blog.google/products/search/google-search-generative-ai-october-update; Imagine with Meta AI, Meta (last visited Jan. 12, 2024), https://imagine.meta.com.

[49] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[50] Manne & Auer, supra note 22, at 1345.

[51] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[52] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[53] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[54] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan. 15, 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[55] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023), https://arxiv.org/abs/2309.16671.

[56] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[57] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project.

[58] Id.; see also GSM8K, Papers with Code (last visited Jan. 18, 2024), available at https://paperswithcode.com/dataset/gsm8k; MATH Dataset, GitHub (last visited Jan. 18, 2024), available at https://github.com/hendrycks/math.

[59] Lee, supra note 57.

[60] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780).

[61] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[62] Or, as John Yun puts it, data is only a small component of digital firms’ production function. See Yun, supra note 19, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[63] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History–Computer (Aug. 8, 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly.

[64] Introducing the GPT Store, OpenAI (Jan. 10, 2024), https://openai.com/blog/introducing-the-gpt-store.

[65] See Michael Schade, How ChatGPT and Our Language Models are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (Jan. 1, 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021; Danny Hernandez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020), https://arxiv.org/abs/2005.04305.

[66] See Yun, supra note 19 at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[67] Lerner, supra note 60, at 4-5 (emphasis added).

[68] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[69] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[70] See Hagiu & Wright, supra note 23, at 23 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”); see also Lerner, supra note 60.

[71] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).

Antitrust & Consumer Protection

ICLE Amicus in RE: Gilead Tenofovir Cases

Amicus Brief

Dear Justice Guerrero and Associate Justices,

In accordance with California Rule of Court 8.500(g), we are writing to urge the Court to grant the Petition for Review filed by Petitioner Gilead Sciences, Inc. (“Petitioner” or “Gilead”) on February 21, 2024, in the above-captioned matter.

We agree with Petitioner that the Court of Appeal’s finding of a duty of reasonable care in this case “is such a seismic change in the law and so fundamentally wrong, with such grave consequences, that this Court’s review is imperative.” (Pet. 6.) The unprecedented duty of care put forward by the Court of Appeal—requiring prescription drug manufacturers to exercise reasonable care toward users of a current drug when deciding when to bring a new drug to market (Op. 11)—would have far-reaching, harmful implications for innovation that the Court of Appeal failed properly to weigh.

If upheld, this new duty of care would significantly disincentivize pharmaceutical innovation by allowing juries to second-guess complex scientific and business decisions about which potential drugs to prioritize and when to bring them to market. The threat of massive liability simply for not developing a drug sooner would make companies reluctant to invest the immense resources needed to bring new treatments to patients. Perversely, this would deprive the public of lifesaving and less costly new medicines. And the prospective harm from the Court of Appeal’s decision is not limited only to the pharmaceutical industry.

We urge the Court to grant the Petition for Review and to hold that innovative firms do not owe the users of current products a “duty to innovate” or a “duty to market”—that is, that firms cannot be held liable to users of a current product for development or commercialization decisions on the basis that those decisions could have facilitated the introduction of a less harmful, alternative product.

Interest of Amicus Curiae

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates. It also has longstanding expertise in evaluating law and policy relating to innovation and the legal environment facing commercial activity. In this letter, we wish to briefly highlight some of the crucial considerations concerning the effect on innovation incentives that we believe would arise from the Court of Appeal’s ruling in this case.[1]

The Court of Appeal’s Duty of Care Standard Would Impose Liability Without Requiring Actual “Harm”

The Court of Appeal’s ruling marks an unwarranted departure from decades of products-liability law requiring plaintiffs to prove that the product that injured them was defective. Expanding liability to products never even sold is an unprecedented, unprincipled, and dangerous approach to product liability. Plaintiffs’ lawyers may seek to apply this new theory to many other beneficial products, arguing manufacturers should have sold a superior alternative sooner. This would wreak havoc on innovation across industries.

California Civil Code § 1714 does not impose liability for “fail[ing] to take positive steps to benefit others,” (Brown v. USA Taekwondo (2021) 11 Cal.5th 204, 215), and Plaintiffs did not press a theory that the medicine they received was defective. Moreover, the product included all the warnings required by federal and state law. Thus, Plaintiffs’ case—as accepted by the Court of Appeal—is that they consumed a product authorized by the FDA, that they were fully aware of its potential side effects, but that they might have had fewer side effects had Gilead made the decision to accelerate (against some indefinite baseline) the development of an alternative medicine. To call this a speculative harm is an understatement, and to dismiss Gilead’s conduct as unreasonable because motivated by a crass profit motive, (Op. at 32), elides many complicated facts that belie such a facile assertion.

A focus on the narrow question of profits for a particular drug misunderstands the inordinate complexity of pharmaceutical development and risks seriously impeding the rate of drug development overall. Doing so

[over-emphasizes] the recapture of “excess” profits on the relatively few highly profitable products without taking into account failures or limping successes experienced on the much larger number of other entries. If profits were held to “reasonable” levels on blockbuster drugs, aggregate profits would almost surely be insufficient to sustain a high rate of technological progress. . . . If in addition developing a blockbuster is riskier than augmenting the assortment of already known molecules, the rate at which important new drugs appear could be retarded significantly. Assuming that important new drugs yield substantial consumers’ surplus untapped by their developers, consumers would lose along with the drug companies. Should a tradeoff be required between modestly excessive prices and profits versus retarded technical progress, it would be better to err on the side of excessive profits. (F. M. Scherer, Pricing, Profits, and Technological Progress in the Pharmaceutical Industry, 7 J. Econ. Persp. 97, 113 (1993)).

Indeed, Plaintiffs’ claim on this ground is essentially self-refuting. If the “superior” product they claim was withheld for “profit” reasons was indeed superior, then Petitioner could have expected to make a superior return on that product. Thus, Plaintiffs claim they were “harmed” by not having access to a product that Petitioner was not yet ready to market, even though Petitioner had every incentive to release a potentially successful alternative as soon as possible, subject to a complex host of scientific and business considerations affecting the timing of that decision.

Relatedly, the Court of Appeal’s decision rests on the unfounded assumption that Petitioner “knew” TAF was safer than TDF after completing Phase I trials. This ignores the realities of the drug development process and the inherent uncertainty of obtaining FDA approval, even after promising early results. Passing Phase I trials, which typically involve a small number of healthy volunteers, is a far cry from having a marketable drug. According to the Biotechnology Innovation Organization, only 7.9% of drugs that enter Phase I trials ultimately obtain FDA approval.[2] (Biotechnology Innovation Organization, Clinical Development Success Rates and Contributing Factors 2011-2020, Fig. 8b (2021), available at https://perma.cc/D7EY-P22Q.) Even after Phase II trials, which assess efficacy and side effects in a larger patient population, the success rate is only about 15.1%. (Id.) Thus, at the time Gilead decided to pause TAF development, it faced significant uncertainty about whether TAF would ever reach the market, let alone ultimately prove safer than TDF.

Moreover, the clock on Petitioner’s patent exclusivity for TAF was ticking throughout the development process. Had Petitioner “known” that TAF was a safer and more effective drug, it would have had every incentive to bring it to market as soon as possible to maximize the period of patent protection and the potential to recoup its investment. The fact that Petitioner instead chose to focus on TDF strongly suggests that it did not have the level of certainty the Court of Appeal attributed to it.

Although conventional wisdom has often held otherwise, economists generally dispute the notion that companies have an incentive to unilaterally suppress innovation for economic gain.

While rumors long have circulated about the suppression of a new technology capable of enabling automobiles to average 100 miles per gallon or some new device capable of generating electric power at a fraction of its current cost, it is rare to uncover cases where a worthwhile technology has been suppressed altogether. (John J. Flynn, Antitrust Policy, Innovation Efficiencies, and the Suppression of Technology, 66 Antitrust L.J. 487, 490 (1998)).

Calling such claims “folklore,” the economists Armen Alchian and William Allen note that, “if such a [technology] did exist, it could be made and sold at a price reflecting the value of [the new technology], a net profit to the owner.” (Armen A. Alchian & William R. Allen, Exchange & Production: Competition, Coordination, & Control (1983), at 292). Indeed, “even a monopolist typically will have an incentive to adopt an unambiguously superior technology.” (Joel M. Cohen and Arthur J. Burke, An Overview of the Antitrust Analysis of Suppression of Technology, 66 Antitrust L.J. 421, 429 n. 28 (1998)). While nominal suppression of technology can occur for a multitude of commercial and technological reasons, there is scant evidence that such suppression harms consumers, except where it affirmatively interferes with market competition under the antitrust laws—a claim not advanced here.

One reason the tort system is inapt for second-guessing commercial development and marketing decisions is that those decisions may be made for myriad reasons that do not map onto the specific safety concern of a products-liability action. For example, in the 1930s, AT&T abandoned the commercial development of magnetic recording “for ideological reasons. . . . Management feared that availability of recording devices would make customers less willing to use the telephone system and so undermine the concept of universal service.” (Mark Clark, Suppressing Innovation: Bell Laboratories and Magnetic Recording, 34 Tech. & Culture 516, 520-24 (1993)). One could easily imagine arguments that coupling telephones and recording devices would promote safety. But the determination of whether safety or universal service (and the avoidance of privacy invasion) was a “better” basis for deciding whether to pursue the innovation is not within the ambit of tort law (nor the capability of a products-liability jury). And yet, it would necessarily become so if the Court of Appeal’s decision were to stand.

A Proper Assessment of Public Policy Would Cut Strongly Against Adoption of the Court of Appeal’s Holding

The Court of Appeal notes that “a duty that placed manufacturers ‘under an endless obligation to pursue ever-better new products or improvements to existing products’ would be unworkable and unwarranted,” (Op. 10), yet avers that “plaintiffs are not asking us to recognize such a duty” because “their negligence claim is premised on Gilead’s possession of such an alternative in TAF; they complain of Gilead’s knowing and intentionally withholding such a treatment….” (Id.)

From an economic standpoint, this is a distinction without a difference.

Both a “duty to invent” and a “duty to market” what is already invented would increase the cost of bringing any innovative product to market by saddling the developer with an expected additional (and unavoidable) obligation as a function of introducing the initial product, differing perhaps only in degree. Indeed, a “duty to invent” could conceivably be more socially desirable, because in that case a firm could at least avoid liability by undertaking the process of discovering new products (a socially beneficial activity), whereas the “duty to market” espoused by the Court of Appeal would create only the opposite incentive—the incentive never to gain knowledge of a superior product on the basis of which liability might attach.[3]

And public policy is relevant. This Court in Brown v. Superior Court (44 Cal. 3d 1049 (1988)) worried explicitly about the “[p]ublic policy” implications of excessive liability rules for the provision of lifesaving drugs. (Id. at 1063-65). As the Court in Brown explained, drug manufacturers “might be reluctant to undertake research programs to develop some pharmaceuticals that would prove beneficial or to distribute others that are available to be marketed, because of the fear of large adverse monetary judgments.” (Id. at 1063). The Court of Appeal agreed, noting that “the court’s decision [in Brown] was grounded in public policy concerns. Subjecting prescription drug manufacturers to strict liability for design defects, the court worried, might discourage drug development or inflate the cost of otherwise affordable drugs.” (Op. 29).

In rejecting the relevance of the argument here, however, the Court of Appeal (very briefly) argued a) that Brown espoused only a policy against burdening pharmaceutical companies with a duty stemming from unforeseeable harms, (Op. 49-50), and b) that the relevant cost here might be “some failed or wasted efforts,” but not a reduction in safety. (Op. 51).[4] Both of these claims are erroneous.

On the first, the legalistic distinction between foreseeable and unforeseeable harm was not, in fact, the determinative distinction in Brown. Rather, that distinction was relevant only because it maps onto the issue of incentives. In the face of unforeseeable, and thus unavoidable, harm, pharmaceutical companies would have severely diminished incentives to innovate. While foreseeable harms might also deter innovation by imposing some additional cost, these costs would be smaller, and avoidable or insurable, so that innovation could continue. To be sure, the Court wanted to ensure that the beneficial, risk-reduction effects of the tort system were not entirely removed from pharmaceutical companies. But that meant a policy decision that necessarily reduced the extent of tort-based risk optimization in favor of the manifest, countervailing benefit of relatively higher innovation incentives. That same calculus applies here, and it is this consideration, not the superficial question of foreseeability, that animated this Court in Brown.

On the second, the Court of Appeal inexplicably fails to acknowledge that the true cost of the imposition of excessive liability risk from a “duty to market” (or “duty to innovate”) is not limited to the expenditure of wasted resources, but extends to the non-expenditure of any resources at all. The court’s contention appears to contemplate that such a duty would not remove a firm’s incentive to innovate entirely, although it might deter it slightly by increasing its expected cost. But economic incentives operate at the margin. Even if there remains some profit incentive to continue to innovate, the imposition of liability risk simply for the act of doing so would necessarily reduce the amount of innovation (in some cases, and especially for some smaller companies less able to bear the additional cost, to the point of deterring innovation entirely). Even this reduction in incentive is a harm. The fact that some innovation may still occur despite the imposition of considerable liability risk is not a defense of imposing that risk; rather, the innovation forgone is a reason to question its desirability, exactly as this Court concluded in Brown.

The Court of Appeal’s Decision Would Undermine Development of Lifesaving and Safer New Medicines

Innovation is a long-term, iterative process fraught with uncertainty. At the outset of research and development, it is impossible to know whether a potential new drug will ultimately prove superior to existing drugs. Most attempts at innovation fail to yield a marketable product, let alone one that is significantly safer or more effective than its predecessors. Deciding whether to pursue a particular line of research depends on weighing myriad factors, including the anticipated benefits of the new drug, the time and expense required to develop it, and its financial viability relative to existing products. Sometimes, potentially promising drug candidates are not pursued fully, even if theoretically “better” than existing drugs to some degree, because the expected benefits are not sufficient to justify the substantial costs and risks of development and commercialization.

If left to stand, the Court of Appeal’s decision would mean that whenever this stage of development is reached for a drug that may offer any safety improvement, the manufacturer will face potential liability for failing to bring that drug to market, regardless of the costs and risks involved in its development or the extent of the potential benefit. Such a rule would have severe unintended consequences that would stifle innovation.

First, by exposing manufacturers to liability on the basis of early-stage research that has not yet established a drug candidate’s safety and efficacy, the Court of Appeal’s rule would deter manufacturers from pursuing innovations in the first place. Drug research involves constant iteration, with most efforts failing and the potential benefits of success highly uncertain until late in the process. If any improvement, no matter how small or tentative, could trigger liability for failing to develop the new drug, manufacturers will be deterred from trying to innovate at all.

Second, such a rule would force manufacturers to direct scarce resources to developing and commercializing drugs that offer only small or incremental benefits because failing to do so would invite litigation. This would necessarily divert funds away from research into other potential drugs that could yield greater advancements. Further, as each small improvement is made, it reduces the relative potential benefit from, and therefore the incentive to undertake, further improvements. Rather than promoting innovation, the Court of Appeal’s decision would create incentives that favor small, incremental changes over larger, riskier leaps with the greatest potential to significantly advance patient welfare.

Third, and conversely, the Court of Appeal’s decision would set an unrealistic and dangerous standard of perfection for drug development. Pharmaceutical companies should not be expected to bring only the “safest” version of a drug to market, as this would drastically increase the time and cost of drug development and deprive patients of access to beneficial treatments in the meantime.

Fourth, the threat of liability would lead to inefficient and costly distortions in how businesses organize their research and development efforts. To minimize the risk of liability, manufacturers may avoid integrating ongoing research into existing product lines, instead keeping the processes separate unless and until a potential new technology is developed that offers benefits so substantial as to clearly warrant the costs and liability exposure of its development in the context of an existing drug line. Such an incentive would prevent potentially beneficial innovations from being pursued and would increase the costs of drug development.

Finally, the ruling would create perverse incentives that could actually discourage drug companies from developing and introducing safer alternative drugs. If bringing a safer drug to market later could be used as evidence that the first-generation drug was not safe enough, companies may choose not to invest in developing improved versions at all in order to avoid exposing themselves to liability. This would, of course, directly undermine the goal of increasing drug safety overall.

The Court of Appeal gave insufficient consideration to these severe policy consequences of the duty it recognized. A manufacturer’s decision when to bring a potentially safer drug to market involves complex trade-offs that courts are ill-equipped to second-guess—particularly in the limited context of a products-liability determination.

Conclusion

The Court of Appeal’s novel “duty to market” any known, less-harmful alternative to an existing product would deter innovation to the detriment of consumers. The Court of Appeal failed to consider how its decision would distort incentives in a way that harms the very patients the tort system is meant to protect. This Court should grant review to address these important legal and policy issues and to prevent this unprecedented expansion of tort liability from distorting manufacturers’ incentives to develop new and better products.

[1] No party or counsel for a party authored or paid for this amicus letter in whole or in part.

[2] It is important to note that this number varies with the kind of medicine involved, but across all categories of medicines there is a high likelihood of failure subsequent to Phase I trials.

[3] To the extent the concern is with disclosure of information regarding a potentially better product, that is properly a function of the patent system, which requires public disclosure of new ideas in exchange for the receipt of a patent. (See Brenner v. Manson, 383 U.S. 519, 533 (1966) (“one of the purposes of the patent system is to encourage dissemination of information concerning discoveries and inventions.”)). Of course, the patent system preserves innovation incentives despite the mandatory disclosure of information by conferring an exclusive right to the inventor to use the new knowledge. By contrast, using the tort system as an information-forcing device in this context would impose risks and costs on innovation without commensurate benefit, ensuring less, rather than more, innovation.

[4] The Court of Appeal makes a related argument when it claims that “the duty does not require manufacturers to perfect their drugs, but simply to act with reasonable care for the users of the existing drug when the manufacturer has developed an alternative that it knows is safer and at least equally efficacious. Manufacturers already engage in this type of innovation in the ordinary course of their business, and most plaintiffs would likely face a difficult road in establishing a breach of the duty of reasonable care.” (Op. at 52-3).
