Spotlight

April 2024

HIGHLIGHTS

A Choice-of-Law Alternative to Federal Preemption of State Privacy Law

Executive Summary

A prominent theme in debates about US national privacy legislation is whether federal law should preempt state law. A federal statute could create one standard for markets that are obviously national in scope. Another approach is to allow states to be “laboratories of democracy” that adopt different laws so they can discover the best ones.

We propose a federal statute requiring states to recognize contractual choice-of-law provisions, so companies and consumers can choose what state privacy law to adopt. Privacy would continue to be regulated at the state level. However, the federal government would provide for jurisdictional competition among states, such that companies operating nationally could comply with the privacy laws of any one state.

Our proposed approach would foster a double competition aimed at discerning and delivering on consumers’ true privacy interests: market competition to deliver privacy policies that consumers prefer and competition among states to develop the best privacy laws.

Unlike a single federal privacy law, this approach would provide 50 competing privacy regimes for national firms. The choice-of-law approach can trigger competition and innovation in privacy practices while preserving a role for meaningful state privacy regulation.

Introduction

The question of preemption of state law by the federal government has bedeviled debates about privacy regulation in the United States. A prominent theme is to propose a national privacy policy that largely preempts state policies to create one standard for markets that are obviously national. Another approach is to allow states to be “laboratories of democracy” that adopt different laws, with the hope that they will adopt the best rules over time. Both approaches have substantial costs and weaknesses.

The alternative approach we propose would foster a double competition aimed at discerning and delivering on consumers’ true privacy interests: market competition to deliver privacy policies that consumers prefer and competition among states to develop the best privacy laws. Indeed, our proposal aims to obtain the best features—and avoid the worst features—of both a federal regime and a multistate privacy law regime by allowing firms and consumers to agree on compliance with the single regime of their choosing.

Thus, we propose a federal statute requiring states to recognize contractual choice-of-law provisions, so companies and consumers can choose what state privacy law to adopt. Privacy would continue to be regulated at the state level. However, the federal government would provide for jurisdictional competition among states, and companies operating nationally could comply with the privacy laws of any one state.

Unlike a single federal privacy law, this approach would provide 50 competing privacy regimes for national firms. Protecting choice of law can trigger competition and innovation in privacy practices while preserving a role for meaningful state privacy regulation.

The Emerging Patchwork of State Privacy Statutes Is a Problem for National Businesses

A strong impetus for federal privacy legislation is the opportunity national and multinational businesses see to alleviate the expense and liability of having a patchwork of privacy statutes with which they must comply in the United States. Absent preemptive legislation, they could conceivably operate under 50 different state regimes, which would increase costs and balkanize their services and policies without commensurate gains for consumers. Along with whether a federal statute should have a private cause of action, preempting state law is a top issue when policymakers roll up their sleeves and discuss federal privacy legislation.

But while the patchwork argument is real, it may be overstated. There are unlikely ever to be 50 distinct state regimes; rather, a small number of state legislation types is likely, as jurisdictions follow each other’s leads and group together, including by promulgating model state statutes.[1] States don’t follow the worst examples from their brethren, as the lack of biometric statutes modeled on Illinois’s legislation illustrates.[2]

Along with fewer “patches,” the patchwork’s costs will tend to diminish over time as states land on relatively stable policies, allowing compliance to be somewhat routinized.

Nonetheless, the patchwork is far from ideal. It is costly to firms doing business nationally. It costs small firms more per unit of revenue, raising the bar to new entry and competition. And it may confuse consumers about what their protections are (though consumers don’t generally assess privacy policies carefully anyway).

But a Federal Privacy Statute Is Far from Ideal as Well

Federal preemption has many weaknesses and costs as well. Foremost, it may not deliver meaningful privacy to consumers. This is partially because “privacy” is a congeries of interests and values that defy capture.[3] Different people prioritize different privacy issues differently. In particular, the elites driving and influencing legislation may prioritize certain privacy values differently from consumers, so legislation may not serve most consumers’ actual interests.[4]

Those in the privacy-regulation community sometimes assume that passing privacy legislation ipso facto protects privacy, but that is not a foregone conclusion. The privacy regulations issued under the Gramm-Leach-Bliley Act (concerning financial services)[5] and the Health Insurance Portability and Accountability Act (concerning health care)[6] did not usher in eras of consumer confidence about privacy in their respective fields.

The short-term benefits of preempting state law may come with greater long-term costs. One cost is the likely drop in competition among firms around privacy. Today, as some have noted, “Privacy is actually a commercial advantage. . . . It can be a competitive advantage for you and build trust for your users.”[7] But federal privacy regulation seems almost certain to induce firms to treat compliance as the full measure of privacy to offer consumers. Efforts to outperform or ace out one another will likely diminish.[8]

Another long-term cost of preempting state law is the drop in competition among states to provide well-tuned privacy and consumer-protection legislation. Our federal system’s practical genius, which Justice Louis Brandeis articulated 90 years ago in New State Ice v. Liebmann, is that state variation allows natural experiments in what best serves society—business and consumer interests alike.[9] Because variations are allowed, states can amend their laws individually, learn from one another, adapt, and converge on good policy.

The economic theory of federalism draws heavily from the Tiebout model.[10] Charles Tiebout argued that competing local governments could, under certain conditions, produce public goods more efficiently than the national government could. Local governments act as firms in a marketplace for taxes and public goods, and consumer-citizens match their preferences to the providers. Efficient allocation requires mobile people and resources, enough jurisdictions with the freedom to set their own laws, and limited spillovers among jurisdictions (effects of one jurisdiction’s policies on others).

A related body of literature on “market-preserving federalism” argues that strong and self-reinforcing limits on national and local power can preserve markets and incentivize economic growth and development.[11] The upshot of this literature is that when local jurisdictions can compete on law, not only do they better match citizens’ policy preferences, but the rules tend toward greater economic efficiency.

In contrast to the economic gains from decentralization, moving authority over privacy from states to the federal government may have large political costs. It may deepen Americans’ growing dissatisfaction with their democracy. Experience belies the ideal of responsive national government when consumers, acting as citizens, want to learn about or influence the legislation and regulation that governs more and more areas of their lives. The “rejectionist” strain in American politics that Donald Trump’s insurgency and presidency epitomized may illustrate deep dissatisfaction with American democracy that has been growing for decades. Managing a highly personal and cultural issue like privacy through negotiation between large businesses and anonymous federal regulators would deepen trends that probably undermine the government’s legitimacy.

To put a constitutional point on it, preempting states on privacy contradicts the original design of our system, which assigned limited powers to the federal government.[12] The federal government’s enumerated powers generally consist of national public goods—particularly defense. The interstate commerce clause, inspired by state parochialism under the Articles of Confederation, exists to make commerce among states (and with tribes) regular; it is not rightly a font of power to regulate the terms and conditions of commerce generally.[13]

Preempting state law does not necessarily lead to regulatory certainty, as is often imagined. Section 230 of the Communications Decency Act may defeat once and for all the idea that federal legislation creates certainty.[14] More than a quarter century after its passage, it is hotly debated in Congress and threatened in the courts.[15]

The Fair Credit Reporting Act (FCRA) provides a similar example.[16] Passed in 1970, it comprehensively regulated credit reporting. Since then, Congress has amended it dozens of times, and regulators have made countless alterations through interpretation and enforcement.[17] The Consumer Financial Protection Bureau recently announced a new inquiry into data brokering under the FCRA.[18] That is fine, but it illustrates that the FCRA did not solve problems and stabilize the law. It just moved the jurisdiction to Washington, DC.

Meanwhile, as regulatory theory predicts, credit reporting has become a three-horse race.[19] A few slow-to-innovate firms have captured and maintained dominance thanks partially to the costs and barriers to entry that uniform regulation creates.

Legal certainty may be a chimera while business practices and social values are in flux. Certainty develops over time as industries settle into familiar behaviors and roles.

An Alternative to Preemption: Business and Consumer Choice

One way to deal with this highly complex issue is to promote competition for laws. The late, great Larry Ribstein, with several coauthors over the years, proposed one such legal mechanism: a law market empowered by choice-of-law statutes.[20] Drawing on the notion of market competition as a discovery process,[21] Ribstein and Henry Butler explained:

In order to solve the knowledge problem and to create efficient legal technologies, the legal system can use the same competitive process that encourages innovation in the private sector—that is, competition among suppliers of law. As we will see, this entails enforcing contracts among the parties regarding the applicable law. The greater the knowledge problem the more necessary it is to unleash markets for law to solve the problem.[22]

The proposal set forth below promotes just such competition and solves the privacy-law patchwork problem without the costs of federal preemption. It does this through a simple procedural regulation requiring states to enforce choice-of-law terms in privacy contracts, rather than through a heavy-handed, substantive federal law. Inspired by Butler and Ribstein’s proposal for pluralist insurance regulation,[23] the idea is to make the choice of legal regime a locus of privacy competition.

Modeled on the US system of state incorporation law, our proposed legislation would leave firms generally free to select the state privacy law under which they do business nationally. Firms would inform consumers, as they must to form a contract, that a given state’s laws govern their policies. Federal law would ensure that states respect those choice-of-law provisions, which would be enforced like any other contract term.

This would strengthen and deepen competition around privacy. If firms believed privacy was a consumer interest, they could select highly protective state laws and advertise that choice, currying consumer favor. If their competitors chose relatively lax state law, they could advertise to the public the privacy threats behind that choice. The process would help hunt out consumers’ true interests through an ongoing argument waged before the public. Businesses’ and consumers’ ongoing choices—rather than a single choice by Congress followed by blunt, episodic amendments—would shape the privacy landscape.

The way consumers choose in the modern marketplace is a broad and important topic that deserves further study and elucidation. It nevertheless seems clear—and it is rather pat to observe—that consumers do not carefully read privacy policies and balance their implications. Rather, a hive mind of actors, including competitors, advocates, journalists, regulators, and politicians, pores over company policies and practices. Consumers take in branding and advertising, reputation, news, personal recommendations, rumors, and trends to decide on the services they use and how they use them.

That detail should not be overlooked: Consumers may use services differently based on the trust they place in them to protect privacy and related values. Using an information-intensive service is not a proposition to share everything or nothing. Consumers can and do shade their use and withhold information from platforms and services depending on their perceptions of whether the privacy protections offered meet their needs.

There is reason to be dissatisfied with the modern marketplace, in which terms of service and privacy policies are offered to the individual consumer on a “take it or leave it” basis. There is a different kind of negotiation, described above, between the hive mind and large businesses. But when the hive mind and business have settled on terms, individuals cannot negotiate bespoke policies reflecting their particular wants and needs. This collective decision-making may be why some advocates regard market processes as coercive. They do not offer custom choices to all but force individual consumers into channels cut by all.

The solution that orthodox privacy advocates offer does not respond well to this problem, because they would replace “take it or leave it” policies crafted in the crucible of the marketplace with “take it or leave it” policies crafted in a political and regulatory crucible. Their prescriptions are sometimes to require artificial notice and “choice,” such as whether to accept cookies when one visits websites. This, as experience shows, does not reach consumers when they are interested in choosing.

Choice of law in privacy competition is meant to preserve manifold choices when and where consumers make their choices, such as at the decision to transact, and then let consumers choose how they use the services they have decided to adopt. Let new entrants choose variegated privacy-law regimes, and consumers will choose among them. That does not fix the whole problem, but at least it doesn’t replace consumer choice with an “expert” one-size-fits-all choice.

In parallel to business competition around privacy choice of law, states would compete with one another to provide the most felicitous environment for consumers and businesses. Some states would choose more protection, seeking the rules businesses would choose to please privacy-conscious consumers. Others might choose less protection, betting that consumers prefer goods other than information control, such as free, convenient, highly interactive, and custom services.

Importantly, this mechanism would allow companies to opt in to various privacy regimes based on the type of service they offer, enabling a degree of fine-tuning appropriate for different industries and different activities that no alternative would likely offer. This would not only result in the experimentation and competition of federalism but also enable multiple overlapping privacy-regulation regimes, avoiding the “one-size-doesn’t-fit-all” problem.

While experimentation continued, state policies would probably rationalize and converge over time. There are institutions dedicated to this, such as the Uniform Law Commission, which is at its best when it harmonizes existing laws based on states’ experience.[24]

It is well within the federal commerce power to regulate state enforcement of choice-of-law provisions, because states may bar their enforcement to limit interjurisdictional competition. Controlling that is precisely what the commerce power is for. Utah’s recent Social Media Regulation Act[25] barred enforcement of choice-of-law provisions, an effort to regulate nationally from a state capital. Federally backing contractual choice-of-law selections would curtail this growing problem.

At the same time, our proposed protection for choice-of-law provisions does little more than what contracts already routinely do, and what courts already enforce, in many industries. Contracting parties often specify the governing state’s law and negotiate for the law that best suits their collective needs.

Indeed, sophisticated business contracts increasingly include choice-of-law clauses that state the law that the parties wish to govern their relationship. In addition to settling uncertainty, these clauses might enable the contracting parties to circumvent those states’ laws they deem to be undesirable.[26]

This practice is not only business-to-business. Consumers regularly enter into contracts that include choice-of-law clauses—including regarding privacy law. Credit card agreements, stock and mutual fund investment terms, consumer-product warranties, and insurance contracts, among many other legal agreements, routinely specify the relevant state law that will govern.

In these situations, the insurance company, manufacturer, or mutual fund has effectively chosen the law. The consumer participates in this choice only to the same extent that she participates in any choices related to mass-produced products and services, that is, by deciding whether to buy the product or service.[27]

Allowing contracting parties to create their own legal certainty by contract would likely rankle states. Indeed, “we might expect governments to respond with hostility to the enforcement of choice-of-law clauses. In fact, however, the courts usually do enforce choice-of-law clauses.”[28] With some states trying to regulate nationally and some effectively doing so, the choice the states collectively face is having a role in privacy regulation or no role at all. Competition is better for them than exclusion from the field or minimization of their role through federal preemption of state privacy law. This proposal thus advocates simple federal legislation that preserves firms’ ability to make binding choice-of-law decisions and states’ ability to retain a say in the country’s privacy-governance regime.

Avoiding a Race to the Bottom

Some privacy advocates may object that state laws will not sufficiently protect consumers.[29] Indeed, there is literature arguing that federalism will produce a race to the bottom (i.e., competition leading every state to effectively adopt the weakest law possible), for example, when states offer incorporation laws that are the least burdensome to business interests in a way that arguably diverges from public or consumer interests.[30]

The race-to-the-bottom framing slants the issues and obscures ever-present trade-offs, however. Rules that give consumers high levels of privacy come at a cost in social interaction, price, and the quality of the goods they buy and services they receive. It is not inherently “down” or bad to prefer cheap or free goods and plentiful, social, commercial interaction. It is not inherently “up” or good to opt for greater privacy.

The question is what consumers want. The answers to that question—yes, plural—are the subject of constant research through market mechanisms when markets are free to experiment and are functioning well. Consumers’ demands can change over time through various mechanisms, including experience with new technologies and business models. We argue for privacy on the terms consumers want. The goal is maximizing consumer welfare, which sometimes means privacy and sometimes means sharing personal information in the interest of other goods. There is no race to the bottom in trading one good for another.

Yet the notion of a race to the bottom persists—although not without controversy. In the case of Delaware’s incorporation statutes, the issue is highly contested. Many scholars argue that the state’s rules are the most efficient—that “far from exploiting shareholders, . . . these rules actually benefit shareholders by increasing the wealth of corporations chartered in states with these rules.”[31]

As always, there are trade-offs, and the race-to-the-bottom hypothesis requires some unlikely assumptions. Principal among them, as Jonathan Macey and Geoffrey Miller discuss, is the assumption that state legislators are beholden to the interests of corporations over other constituencies vying for influence. As Macey and Miller explain, the presence of a powerful lobby of specialized and well-positioned corporate lawyers (whose interests are not the same as those of corporate managers) transforms the analysis and explains the persistence and quality of Delaware corporate law.[32]

In much the same vein, there are several reasons to think competition for privacy rules would not succumb to a race to the bottom.

First, if privacy advocates are correct, consumers put substantial pressure on companies to adopt stricter privacy policies. Simply opting in to the weakest state regime would not, as with corporate law, be a matter of substantial indifference to consumers but would (according to advocates) run contrary to their interests. If advocates are correct, firms avoiding stronger privacy laws would pay substantial costs. As a result, the impetus for states to offer weaker laws would be diminished. And, consistent with Macey and Miller’s “interest-group theory” of corporate law,[33] advocates themselves would be important constituencies vying to influence state privacy laws. Satisfying these advocates may benefit state legislators more than satisfying corporate constituencies does.

Second, “weaker” and “stronger” would not be the only dimensions on which states would compete for firms to adopt their privacy regimes. Rather, as mentioned above, privacy law is not one-size-fits-all. Different industries and services entail different implications for consumer interests. States could compete to specialize in offering privacy regimes attractive to distinct industries based on interest groups with particular importance to their economies. Minnesota (home of the Mayo Clinic) and Ohio (home of the Cleveland Clinic), for example, may specialize in health care and medical privacy, while California specializes in social media privacy.

Third, insurance companies are unlikely to be indifferent to the law that the companies they cover choose. Indeed, to the extent that insurers require covered firms to adopt specific privacy practices to control risk, those insurers would likely relish the prospect of outsourcing the oversight of these activities to state law enforcers. States could thus compete to mimic large insurers’ privacy preferences—which would by no means map onto “weaker” policies—to induce insurers to require covered firms to adopt their laws.

If a race to the bottom is truly a concern, the federal government could offer a 51st privacy alternative (that is, an optional federal regime as an alternative to the states’ various privacy laws). Assuming federal privacy regulation would be stricter (an assumption inherent in the race-to-the-bottom objection to state competition), such an approach would ensure that at least one sufficiently strong opt-in privacy regime would always be available. Among other things, this would preclude firms from claiming that no option offers a privacy regime stronger than those of the states trapped in the (alleged) race to the bottom.

Choice of law exists to a degree in the European Union, a trading bloc commonly regarded as uniformly regulated (and commonly regarded as superior on privacy because of a bias toward privacy over other goods). The General Data Protection Regulation (GDPR) gives EU member states broad authority to derogate from its provisions and create state-level exemptions. Article 23 of the GDPR allows states to exempt themselves from EU-wide law to safeguard nine listed broad governmental and public interests.[34] And Articles 85 through 91 provide for derogations, exemptions, and powers to impose additional requirements relative to the GDPR for a number of “specific data processing situations.”[35]

Finally, Article 56 establishes a “lead supervisory authority” for each business.[36] In the political, negotiated processes under the GDPR, this effectively allows companies to shade their regulatory obligations and enforcement outlook through their choices of location. For the United States’ sharper rule-of-law environment, we argue that the choice of law should be explicit and clear.

Refining the Privacy Choice-of-Law Proposal

The precise contours of a federal statute protecting choice-of-law terms in contracts will determine whether it successfully promotes interfirm and interstate competition. Language will also determine its political salability.

Questions include: What kind of notice, if any, should be required to make consumers aware that they are dealing with a firm under a law regime not their own? Consumers are notoriously unwilling to investigate privacy terms—or any other contract terms—in advance, and when considering the choice of law, they would probably not articulate it to themselves. But the competitive dynamics described earlier would probably communicate relevant information to consumers even without any required notice. As always, competitors will have an incentive to ensure consumers are appropriately well-informed when they can diminish their rivals or elevate themselves in comparison by doing so.[37]

Would there be limits on which state’s laws a firm could choose? For example, could a company choose the law of a state where neither the company nor the consumer is domiciled? States would certainly argue that a company should not be able to opt out of the law of the state where it is domiciled. The federal legislation we propose would allow unlimited choice. Such a choice is important if the true benefits of jurisdictional competition are to be realized.

A federal statute requiring states to enforce choice-of-law terms should not override state law denying enforcement of choice-of-law terms that are oppressive, unfair, or improperly bargained for. In cases such as Carnival Cruise Lines v. Shute[38] and The Bremen v. Zapata Off-Shore Co.,[39] the Supreme Court has considered whether forum-selection clauses in contracts might be invalid. The Court has generally upheld such clauses, but they can be oppressive if they require plaintiffs in Maine to litigate in Hawaii, for example, without a substantial reason why Hawaii courts are the appropriate forum. Choice-of-law terms do not impose the cost of travel to remote locations, but they could be used not to establish the law governing the parties but rather to create a strategic advantage unrelated to the law in litigation. Deception built into a contract’s choice-of-law terms should remain grounds for invalidating the contract under state law, even if the state is precluded from barring choice-of-law terms by statute.

The race-to-the-bottom argument raises the question of whether impeding states from overriding contractual choice-of-law provisions would be harmful to state interests, especially since privacy law concerns consumer rights. However, there are reasons to believe race-to-the-bottom incentives would be tempered by greater legal specialization and certainty and by state courts’ ability to refuse to enforce choice-of-law clauses in certain limited circumstances. As Erin O’Hara and Ribstein put it:

Choice-of-law clauses reduce uncertainty about the parties’ legal rights and obligations and enable firms to operate in many places without being subject to multiple states’ laws. These reduced costs may increase the number of profitable transactions and thereby increase social wealth. Also, the clauses may not change the results of many cases because courts in states that prohibit a contract term might apply the more lenient law of a state that has close connections with the parties even without a choice-of-law clause.[40]

Determining when, exactly, a state court can refuse to enforce a firm’s choice of privacy law because of excessive leniency is tricky, but the federal statute could set out a framework for when a court could apply its own state’s law. Much like the independent federal alternative discussed above, specific minimum requirements in the federal law could ensure that any race to the bottom that does occur can go only so far. Of course, it would be essential that any such substantive federal requirements be strictly limited, or else the benefits of jurisdictional competition would be lost.

The converse to the problem of a race to the bottom resulting from state competition is the “California effect”—the prospect of states adopting onerous laws from which no company (or consumer) can opt out. States can regulate nationally through one small tendril of authority: the power to prevent businesses and consumers from agreeing on the law that governs their relationships. If a state regulates in a way that it thinks will be disfavored, it will bar choice-of-law provisions in contracts so consumers and businesses cannot exercise their preference.

Utah’s Social Media Regulation Act, for example, effectively mandates age verification for all social media users,[41] because companies must collect proof that consumers are either of age or not in Utah. To prevent consumers and businesses from avoiding this onerous requirement, Utah bars waivers of the law’s requirements “notwithstanding any contract or choice-of-law provision in a contract.”[42] If parties could choose their law, that would render Utah’s law irrelevant, so Utah cuts off that avenue. This demonstrates the value of a proposal like the one contemplated here.

Proposed Legislation

Creating a federal policy that stops national regulation emanating from state capitals, while still preserving competition among states and firms, is an unusual undertaking. Congress usually creates its own policy and preempts states in that area to varying degrees. There is a well-developed body of law around this type of preemption, which is sometimes implied and sometimes expressed in statute.[43] Our proposal does not operate that way. It merely withdraws state authority to prevent parties from freely contracting about the law that applies to them.

A second, minor challenge is defining the subject matter as to which states may not regulate choice of law. Barring states from regulating choice of law entirely is an option, but if the focus is on privacy alone, the preemption must be drafted to allow regulation of choice of law in other areas. Thus, the statutory language must define the scope of “privacy.”

Finally, the withdrawal of state authority should probably be limited to positive enactments, such as statutes and regulations, leaving intact common-law practice related to choice-of-law provisions.[44] “Statute,” “enactment,” and “provision” are preferable in preemptive language to “law,” which is ambiguous.

These challenges, and possibly more, are tentatively addressed in the following first crack at statutory language, inspired by several preemptive federal statutes, including the Employee Retirement Income Security Act of 1974,[45] the Airline Deregulation Act,[46] the Federal Aviation Administration Authorization Act of 1994,[47] and the Federal Railroad Safety Act.[48]

A state, political subdivision of a state, or political authority of at least two states may not enact or enforce any statute, regulation, or other provision barring the adoption or application of any contractual choice-of-law provision to the extent it affects contract terms governing commercial collection, processing, security, or use of personal information.

Conclusion

This report introduces a statutory privacy framework centered on individual states and consistent with the United States’ constitutional design. But it safeguards companies from a challenge created by the intersection of that design and modern commerce and communication: the complexity and inefficiency of navigating multiple regulators. It fosters an environment conducive to jurisdictional competition and experimentation.

We believe giving states the chance to compete under this approach should be explored in lieu of consolidating privacy law in the hands of one central federal regulator. Competition among states to provide optimal legislation and among businesses to provide optimal privacy policies will help discover and deliver on consumers’ interests, including privacy, of course, but also interactivity, convenience, low costs, and more.

Consumers’ diverse interests are not known now, and they cannot be predicted reliably for the undoubtedly interesting technological future. Thus, it is important to have a system for discovering consumers’ interests in privacy and the regulatory environments that best help businesses serve consumers. It is unlikely that a federal regulatory regime can do these things. The federal government could offer a 51st option in such a system, of course, so advocates for federal involvement could see their approach tested alongside the states’ approaches.

[1] See Uniform Law Commission, “What Is a Model Act?,” https://www.uniformlaws.org/acts/overview/modelacts.

[2] 740 Ill. Comp. Stat. 14/15 (2008).

[3] See Jim Harper, Privacy and the Four Categories of Information Technology, American Enterprise Institute, May 26, 2020, https://www.aei.org/research-products/report/privacy-and-the-four-categories-of-information-technology.

[4] See Jim Harper, “What Do People Mean by ‘Privacy,’ and How Do They Prioritize Among Privacy Values? Preliminary Results,” American Enterprise Institute, March 18, 2022, https://www.aei.org/research-products/report/what-do-people-mean-by-privacy-and-how-do-they-prioritize-among-privacy-values-preliminary-results.

[5] Gramm-Leach-Bliley Act § 501 et seq., 15 U.S.C. § 6801 et seq.

[6] Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, § 264.

[7] Estelle Masse, quoted in Ashleigh Hollowell, “Is Privacy Only for the Elite? Why Apple’s Approach Is a Marketing Advantage,” VentureBeat, October 18, 2022, https://venturebeat.com/security/is-privacy-only-for-the-elite-why-apples-approach-is-a-marketing-advantage.

[8] Competition among firms regarding privacy is common, particularly in digital markets. Notably, Apple has implemented stronger privacy protections than most of its competitors have, particularly with its App Tracking Transparency framework in 2021. See, for example, Brian X. Chen, “To Be Tracked or Not? Apple Is Now Giving Us the Choice,” New York Times, April 26, 2021, https://www.nytimes.com/2021/04/26/technology/personaltech/apple-app-tracking-transparency.html. For Apple, this approach is built into the design of its products and offers what it considers a competitive advantage: “Because Apple designs both the iPhone and processors that offer heavy-duty processing power at low energy usage, it’s best poised to offer an alternative vision to Android developer Google which has essentially built its business around internet services.” Kif Leswing, “Apple Is Turning Privacy into a Business Advantage, Not Just a Marketing Slogan,” CNBC, June 8, 2021, https://www.cnbc.com/2021/06/07/apple-is-turning-privacy-into-a-business-advantage.html. Apple has built a substantial marketing campaign around these privacy differentiators, including its ubiquitous “Privacy. That’s Apple.” slogan. See Apple, “Privacy,” https://www.apple.com/privacy. Similarly, “Some of the world’s biggest brands (including Unilever, AB InBev, Diageo, Ferrero, Ikea, L’Oréal, Mars, Mastercard, P&G, Shell, Unilever and Visa) are focusing on taking an ethical and privacy-centered approach to data, particularly in the digital marketing and advertising context.” Rachel Dulberg, “Why the World’s Biggest Brands Care About Privacy,” Medium, September 14, 2021, https://uxdesign.cc/who-cares-about-privacy-ed6d832156dd.

[9] New State Ice Co. v. Liebmann, 285 US 262, 311 (1932) (Brandeis, J., dissenting) (“To stay experimentation in things social and economic is a grave responsibility. Denial of the right to experiment may be fraught with serious consequences to the Nation. It is one of the happy incidents of the federal system that a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.”).

[10] See Charles M. Tiebout, “A Pure Theory of Local Expenditures,” Journal of Political Economy 64, no. 5 (1956): 416–24, https://www.jstor.org/stable/1826343.

[11] See, for example, Barry R. Weingast, “The Economic Role of Political Institutions: Market-Preserving Federalism and Economic Development,” Journal of Law, Economics, & Organization 11, no. 1 (April 1995): 1–31, https://www.jstor.org/stable/765068; Yingyi Qian and Barry R. Weingast, “Federalism as a Commitment to Preserving Market Incentives,” Journal of Economic Perspectives 11, no. 4 (Fall 1997): 83–92, https://www.jstor.org/stable/2138464; and Rui J. P. de Figueiredo Jr. and Barry R. Weingast, “Self-Enforcing Federalism,” Journal of Law, Economics, & Organization 21, no. 1 (April 2005): 103–35, https://www.jstor.org/stable/3554986.

[12] See US Const. art. I, § 8 (enumerating the powers of the federal Congress).

[13] See generally Randy E. Barnett, Restoring the Lost Constitution: The Presumption of Liberty (Princeton, NJ: Princeton University Press, 2014), 274–318.

[14] Protection for Private Blocking and Screening of Offensive Material, 47 U.S.C. § 230.

[15] See Geoffrey A. Manne, Ben Sperry, and Kristian Stout, “Who Moderates the Moderators? A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet,” Rutgers Computer & Technology Law Journal 49, no. 1 (2022): 39–53, https://laweconcenter.org/wp-content/uploads/2021/11/Stout-Article-Final.pdf (detailing some of the history of how Section 230 immunity expanded and differs from First Amendment protections); Meghan Anand et al., “All the Ways Congress Wants to Change Section 230,” Slate, August 30, 2023, https://slate.com/technology/2021/03/section-230-reform-legislative-tracker.html (tracking every proposal to amend or repeal Section 230); and Technology & Marketing Law Blog, website, https://blog.ericgoldman.org (tracking all Section 230 cases with commentary).

[16] Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq.

[17] See US Federal Trade Commission, Fair Credit Reporting Act: 15 U.S.C. § 1681, May 2023, https://www.ftc.gov/system/files/ftc_gov/pdf/fcra-may2023-508.pdf (detailing changes to the Fair Credit Reporting Act and its regulations over time).

[18] US Federal Reserve System, Consumer Financial Protection Bureau, “CFPB Launches Inquiry into the Business Practices of Data Brokers,” press release, May 15, 2023, https://www.consumerfinance.gov/about-us/newsroom/cfpb-launches-inquiry-into-the-business-practices-of-data-brokers.

[19] US Federal Reserve System, Consumer Financial Protection Bureau, List of Consumer Reporting Companies, 2021, 8, https://files.consumerfinance.gov/f/documents/cfpb_consumer-reporting-companies-list_03-2021.pdf (noting there are “three big nationwide providers of consumer reports”).

[20] See, for example, Erin A. O’Hara and Larry E. Ribstein, The Law Market (Oxford, UK: Oxford University Press, 2009); Erin A. O’Hara O’Connor and Larry E. Ribstein, “Conflict of Laws and Choice of Law,” in Procedural Law and Economics, ed. Chris William Sanchirico (Northampton, MA: Edward Elgar Publishing, 2012), in Encyclopedia of Law and Economics, 2nd ed., ed. Gerrit De Geest (Northampton, MA: Edward Elgar Publishing, 2009); and Bruce H. Kobayashi and Larry E. Ribstein, eds., Economics of Federalism (Northampton, MA: Edward Elgar Publishing, 2007).

[21] See F. A. Hayek, “The Use of Knowledge in Society,” American Economic Review 35, no. 4 (September 1945): 519–30, https://www.jstor.org/stable/1809376?seq=12.

[22] Henry N. Butler and Larry E. Ribstein, “Legal Process for Fostering Innovation” (working paper, George Mason University, Antonin Scalia Law School, Fairfax, VA), 2, https://masonlec.org/site/rte_uploads/files/Butler-Ribstein-Entrepreneurship-LER.pdf.

[23] See Henry N. Butler and Larry E. Ribstein, “The Single-License Solution,” Regulation 31, no. 4 (Winter 2008–09): 36–42, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1345900.

[24] See Uniform Law Commission, “Acts Overview,” https://www.uniformlaws.org/acts/overview.

[25] Utah Code Ann. § 13-63-101 et seq. (2023).

[26] O’Hara and Ribstein, The Law Market, 5.

[27] O’Hara and Ribstein, The Law Market, 5.

[28] O’Hara and Ribstein, The Law Market, 5.

[29] See Cristiano Lima-Strong, “The U.S.’s Sixth State Privacy Law Is Too ‘Weak,’ Advocates Say,” Washington Post, March 30, 2023, https://www.washingtonpost.com/politics/2023/03/30/uss-sixth-state-privacy-law-is-too-weak-advocates-say.

[30] See, for example, William L. Cary, “Federalism and Corporate Law: Reflections upon Delaware,” Yale Law Journal 83, no. 4 (March 1974): 663–705, https://openyls.law.yale.edu/bitstream/handle/20.500.13051/15589/33_83YaleLJ663_1973_1974_.pdf (arguing Delaware could export the costs of inefficiently lax regulation through the dominance of its incorporation statute).

[31] Jonathan R. Macey and Geoffrey P. Miller, “Toward an Interest-Group Theory of Delaware Corporate Law,” Texas Law Review 65, no. 3 (February 1987): 470, https://openyls.law.yale.edu/bitstream/handle/20.500.13051/1029/Toward_An_Interest_Group_Theory_of_Delaware_Corporate_Law.pdf. See also Daniel R. Fischel, “The ‘Race to the Bottom’ Revisited: Reflections on Recent Developments in Delaware’s Corporation Law,” Northwestern University Law Review 76, no. 6 (1982): 913–45, https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=2409&context=journal_articles.

[32] Macey and Miller, “Toward an Interest-Group Theory of Delaware Corporate Law.”

[33] Macey and Miller, “Toward an Interest-Group Theory of Delaware Corporate Law.”

[34] Regulation (EU) 2016/679, General Data Protection Regulation, art. 23.

[35] Regulation (EU) 2016/679, General Data Protection Regulation, arts. 85–91.

[36] Regulation (EU) 2016/679, General Data Protection Regulation, art. 56.

[37] See the discussion in endnote 8.

[38] Carnival Cruise Lines v. Shute, 499 US 585 (1991).

[39] The Bremen v. Zapata, 407 US 1 (1972).

[40] O’Hara and Ribstein, The Law Market, 8.

[41] See Jim Harper, “Perspective: Utah’s Social Media Legislation May Fail, but It’s Still Good for America,” Deseret News, April 6, 2023, https://www.aei.org/op-eds/utahs-social-media-legislation-may-fail-but-its-still-good-for-america.

[42] Utah Code Ann. § 13-63-401 (2023).

[43] See Bryan L. Adkins, Alexander H. Pepper, and Jay B. Sykes, Federal Preemption: A Legal Primer, Congressional Research Service, May 18, 2023, https://sgp.fas.org/crs/misc/R45825.pdf.

[44] Congress should not interfere with interpretation of choice-of-law provisions. These issues are discussed in Tanya J. Monestier, “The Scope of Generic Choice of Law Clauses,” UC Davis Law Review 56, no. 3 (February 2023): 959–1018, https://digitalcommons.law.buffalo.edu/cgi/viewcontent.cgi?article=2148&context=journal_articles.

[45] Employee Retirement Income Security Act of 1974, 29 U.S.C. § 1144(a).

[46] Airline Deregulation Act, 49 U.S.C. § 41713(b).

[47] Federal Aviation Administration Authorization Act of 1994, 49 U.S.C. § 14501.

[48] Federal Railroad Safety Act, 49 U.S.C. § 20106.

The Broken Promises of Europe’s Digital Regulation

If you live in Europe, you may have noticed issues with some familiar online services. From consent forms to reduced functionality and new fees, there is a sense that platforms like Amazon, Google, Meta, and Apple are changing the way they do business. 

Many of these changes are the result of a new European regulation called the Digital Markets Act (DMA), which seeks to increase competition in online markets. Under the DMA, so-called “gatekeepers” must allow rivals to access their platforms. The regulation took effect March 7, and firms must now comply with it, which explains why we are seeing these changes unfold today.

Read the full piece here.

Rising Markups and Declining Business Dynamism: Evidence From the Industry Cross Section

In recent decades, various measures of “business dynamism”—such as new business entry rates and gross job or worker flows—have seen significant declines in the U.S. (figure 1, right panel). Over a similar time frame, there is evidence that an important measure of market power—the average markup—has risen significantly (figure 1, left panel; De Loecker, Eeckhout, and Unger 2020). A natural question is whether these patterns are related.
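
For readers unfamiliar with the term, the markup in this literature can be read, roughly, as the ratio of price to marginal cost, which in LaTeX notation is:

\mu = \frac{P}{MC}, \qquad \mu > 1 \ \text{indicating pricing above marginal cost}

A rising average markup is thus commonly taken as evidence that firms are, on average, pricing further above cost.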

Mi Mercado Es Su Mercado: The Flawed Competition Analysis of Mexico’s COFECE

Mexico’s Federal Economic Competition Commission (COFECE, after its Spanish acronym) has published the preliminary report it prepared following its investigation of competition in the retail electronic-commerce market (e.g., Amazon). The report finds that: 

Read the full piece here.

SHORT FORM WRITTEN OUTPUT

FCC’s Digital-Discrimination Rules Could Delay Broadband

When Congress passed the Infrastructure Investment and Jobs Act (IIJA) near the end of 2021, it included a short provision that required the Federal Communications Commission to adopt rules to prevent “digital discrimination.” At the time, it was understood the law intended to prohibit broadband providers from intentionally discriminating in their deployment decisions based on “income level, race, ethnicity, color, religion, or national origin.”

Read the full piece here.

Spectrum Pipeline Act a Promising Start that Needs Balance

Given how important digital connections are to Americans’ daily lives, it’s urgent that Congress move to renew the Federal Communications Commission’s authority to auction parts of the public airwaves.

That authority lapsed a little over a year ago and efforts to reinstate it have been repeatedly stuck in partisan gridlock.

Read the full piece here.

Capital Confusion at the New York Times

In a recent guest essay for The New York Times, Aaron Klein of the Brookings Institution claims that the merger between Capital One and Discover would “keep intact the broken and predatory system in which credit card companies profit handsomely by rewarding our richest Americans and advantaging the biggest corporations.”

That’s quite an indictment! Fortunately, Klein also offers solutions. Phew!

Read the full piece here.

US v. Apple Lawsuit Has Big Implications for Competition and Innovation

The lawsuit filed yesterday by the U.S. Justice Department (DOJ) against Apple for monopolization of the U.S. smartphone market (joined by 15 states and the District of Columbia) has big implications for American competition and innovation.

At the heart of the complaint is the DOJ’s assertion that…

Read the full piece here.

Antitrust at the Agencies Roundup: Supply Chains, Noncompetes, and Greedflation

The big news from the agencies may be the lawsuit filed today by the U.S. Justice Department (DOJ) and 16 states against Apple alleging monopoly maintenance in violation of Section 2 of the Sherman Act. It’s an 86-page complaint and it’s just out. I’ll write more about it next week.

Two quick observations: First, the complaint opens with an anecdote from 2010 that suggests lock-in (a hard case under antitrust law), but demonstrates nothing. Second, the anecdote is followed by a statement that “[o]ver many years, Apple has repeatedly responded to competitive threats… by making it harder or more expensive for its users and developers to leave than by making it more attractive for them to stay.” 

Read the full piece here.

Mi Mercado Es Su Mercado: The Flawed Competition Analysis of Mexico’s COFECE

Mexico’s Federal Economic Competition Commission (COFECE, after its Spanish acronym) has published the preliminary report it prepared following its investigation of competition in the retail electronic-commerce market (e.g., Amazon). The report finds that: 

Read the full piece here.

Murthy Oral Arguments: Standing, Coercion, and the Difficulty of Stopping Backdoor Government Censorship

With Monday’s oral arguments in Murthy v. Missouri, we now have more of a feel for how the U.S. Supreme Court appears to be considering the issues of social-media censorship—in this case, done allegedly at the behest of federal officials.

In the International Center for Law & Economics’ (ICLE) amicus brief in the case, we argued that the First Amendment protects a marketplace of ideas, and government agents can’t intervene in that marketplace by coercing social-media companies into removing disfavored speech. But if the oral arguments are any indication, there are reasons to be skeptical that the Court will uphold the preliminary injunction the district court issued against the government officials (later upheld in a more limited form by the 5th U.S. Circuit Court of Appeals).

Read the full piece here.

Systemic Risk and Copyright in the EU AI Act

The European Parliament’s approval last week of the AI Act marked a significant milestone in the regulation of artificial intelligence. While the law’s final text is less alarming than what was initially proposed, it nonetheless still includes some ambiguities that could be exploited by regulators in ways that would hinder innovation in the EU. 

Among the key features emerging from the legislation are its introduction of “general purpose AI” (GPAI) as a regulatory category and the ways that these GPAI might interact with copyright rules. Moving forward in what is rapidly becoming a global market for generative-AI services, it also bears reflecting on how the AI Act’s copyright provisions contrast with current U.S. copyright law. 

Read the full piece here.

Section 214: Title II’s Trojan Horse

The Federal Communications Commission (FCC) has proposed classifying broadband internet-access service as a common carrier “telecommunications service” under Title II of the Communications Act. One major consequence of this reclassification would be subjecting broadband providers to Section 214 regulations that govern the provision, acquisition, and discontinuation of communication “lines.”

In the Trojan War, the Greeks conquered Troy by hiding their soldiers inside a giant wooden horse left as a gift to the besieged Trojans. Section 214 hides a potential takeover of the broadband industry inside the putative gift of improving national security.

Read the full piece here.

The Broken Promises of Europe’s Digital Regulation

If you live in Europe, you may have noticed issues with some familiar online services. From consent forms to reduced functionality and new fees, there is a sense that platforms like Amazon, Google, Meta, and Apple are changing the way they do business. 

Many of these changes are the result of a new European regulation called the Digital Markets Act (DMA), which seeks to increase competition in online markets. Under the DMA, so-called “gatekeepers” must allow rivals to access their platforms. The regulation took effect March 7, and firms must now comply with it, which explains why we are seeing these changes unfold today.

Read the full piece here.

Test SLC (merger)

DEFINITION

The substantial lessening of competition or “SLC” test is a standard that regulatory authorities use to assess the legality of proposed mergers and acquisitions. The SLC test examines whether a prospective merger is likely to substantially lessen competition in a given market. Its purpose is to prevent mergers that increase prices, reduce output, limit consumer choice, or stifle innovation as a result of a decrease in competition. Mergers that substantially lessen competition are prohibited under the laws of the jurisdictions that utilize this test, such as the USA, EU, Canada, the United Kingdom, Australia and Nigeria, amongst others.

Read the full piece here.

Rising Markups and Declining Business Dynamism: Evidence From the Industry Cross Section

In recent decades, various measures of “business dynamism”—such as new business entry rates and gross job or worker flows—have seen significant declines in the U.S. (figure 1, right panel). Over a similar time frame, there is evidence that an important measure of market power—the average markup—has risen significantly (figure 1, left panel; De Loecker, Eeckhout, and Unger 2020). A natural question is whether these patterns are related.

A Closer Look at Spotify’s Claims About Apple’s App-Store Practices

Following Monday’s announcement by the European Commission that it was handing down a €1.8 billion fine against Apple, Spotify—the Swedish music-streaming service that a decade ago lodged the initial private complaint that spawned the Commission’s investigation—published a short explainer on its website titled “Fast Five Facts: Facts that Show Apple Doesn’t Play Fair.” The gist of the company’s argument is that Apple engages in a series of unfair and anticompetitive practices. In this piece, we put some of these claims to the test.

Read the full piece here.

Blackout Rebates: Tipping the Scales at the FCC

Cable and satellite programming blackouts often generate significant headlines. While the share of the population affected by blackouts may be small—bordering on minuscule—most consumers don’t . . .

Cable and satellite programming blackouts often generate significant headlines. While the share of the population affected by blackouts may be small—bordering on minuscule—most consumers don’t like the idea of programming blackouts and balk at the idea of paying for TV programming they can’t access.

Read the full piece here.

The Law & Economics of the Capital One-Discover Merger

Capital One Financial announced plans late last month to acquire Discover Financial Services in a $35.3 billion deal that would give Capital One its own credit-card payment . . .

Capital One Financial announced plans late last month to acquire Discover Financial Services in a $35.3 billion deal that would give Capital One its own credit-card payment network, while simultaneously allowing the company to expand its deposit base, credit-card offerings, and rewards programs.

Read the full piece here.

The DMA’s Missing Presumption of Innocence

The EU’s Digital Markets Act (DMA) will come into effect March 7, forcing a handful of digital platforms to change their market conduct in some . . .

The EU’s Digital Markets Act (DMA) will come into effect March 7, forcing a handful of digital platforms to change their market conduct in some unprecedented ways. The law effectively judges them guilty (with a very limited, formalistic trial), and brands them “gatekeepers” based purely on size. It then sentences them to far-reaching, one-size-fits-all antitrust-style remedies in pursuit of the stated objectives of “fairness” and “contestability.” We’ll soon begin to see what that looks like in practice, and whether innocent conduct will be caught in the crossfire.

Read the full piece here.

Will AI Make Law Productive?

This is my third and final installment summarizing the arguments in my draft article The Cost of Justice at the Dawn of AI. In the first, I . . .

This is my third and final installment summarizing the arguments in my draft article The Cost of Justice at the Dawn of AI. In the first, I reviewed the implications of Baumol’s cost disease for the legal sector. Baumol recognized that if the productivity of any sector improved less than the productivity of the economy as a whole, the goods or services from that sector would become relatively more expensive. In the second, I assessed whether the legal sector has stagnated in this way. This turns out to be difficult or impossible to measure conclusively, because it is hard to assess whether legal work is improving in quality. But crude measures like consumer price indices suggest stagnation. Rapidly decreasing trial rates provide further evidence: it should not be surprising that fewer cases, civil and criminal, make it to trial if the legal process is getting more expensive.
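A back-of-the-envelope sketch (in Python, using purely hypothetical growth rates of my own choosing, not figures from the article) illustrates Baumol’s mechanism: if economy-wide productivity and wages grow while legal-sector productivity stands still, legal services become relatively more expensive simply through compounding.

# Illustrative only: hypothetical growth rates, not data from the article.
years = 20
economy_productivity_growth = 0.02  # assumed annual productivity (and wage) growth, economy-wide
legal_productivity_growth = 0.00    # assumed annual productivity growth in the legal sector

relative_price = 1.0
for _ in range(years):
    # Wages rise with economy-wide productivity; legal costs rise with wages but are not
    # offset by productivity gains, so the sector's relative price drifts upward.
    relative_price *= (1 + economy_productivity_growth) / (1 + legal_productivity_growth)

print(f"Relative price of legal services after {years} years: {relative_price:.2f}x")
# Roughly 1.49x, i.e., legal services become about 49% more expensive relative to other goods.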

The CFPB’s Misleading Slant on Competition in Credit-Card Markets

In yet another example of interagency cheerleading from the Federal Trade Commission (FTC), Chair Lina Khan recently touted the work of the Consumer Financial Protection . . .

In yet another example of interagency cheerleading from the Federal Trade Commission (FTC), Chair Lina Khan recently touted the work of the Consumer Financial Protection Bureau (CFPB) on payments networks:

Read the full piece here.

Apple Fined at the 11th Hour Before the DMA Enters into Force

Just days before the EU’s Digital Markets Act (DMA) was set to enter into force, the European Commission hit Apple—one of the six designated “gatekeepers” . . .

Just days before the EU’s Digital Markets Act (DMA) was set to enter into force, the European Commission hit Apple—one of the six designated “gatekeepers” to which the new law will apply—with a hefty €1.8 billion fine for the kinds of anti-steering provisions that will be banned by the DMA, which enters into force on 6 March. 

The timing of the fine, and its seemingly arbitrary amount, are both curious. Announced as being the result of a four-year investigation, the decision amounts to an exclusionary-abuse turned exploitative-conduct case, and is underpinned by a flimsy theory of harm in a market that has seen exponential growth over the past decade. The fact that the Commission fined Apple for conduct that would be banned per se just two days later raises questions as to why the DMA was necessary and whether the Commission has faith in the new law’s effectiveness.

Read the full piece here.

How a Recent California Appellate Court Decision Will Chill Drug Development, Raise Pharmaceutical Costs

When we are sick or in pain, we need relief. We know available prescription drugs won’t always be perfect. They sometimes have side effects. But . . .

When we are sick or in pain, we need relief. We know available prescription drugs won’t always be perfect. They sometimes have side effects. But we are grateful for even imperfect relief as an alternative to perfect pain.

Pharmaceutical companies aim to identify good drugs and get them to market, while constantly returning to the lab to innovate and make them even better, working to get the next version closer to perfect and with fewer side effects. But thanks to a recent decision by a California appellate court, the incentives to develop new drugs and to keep innovating toward even better alternatives may be gone. California may have permanently impeded pharmaceutical innovation by holding that a drug company can be sued for bringing two safe drugs to market but not discovering the better one first. If the new court decision stands, these companies can be punished unless they bring no drug to market at all until they have found the perfect drug.

Read the full piece here.

The Whole Wide World of Government

Once upon a time (July 9, 2021, to be precise), President Joe Biden issued an executive order on “Promoting Competition in the American Economy,” which . . .

Once upon a time (July 9, 2021, to be precise), President Joe Biden issued an executive order on “Promoting Competition in the American Economy,” which declared that “a whole-of-government approach is necessary to address overconcentration, monopolization, and unfair competition in the American economy.”

Read the full piece here.

Internet for All Won’t Happen Without Real Pole Access Reform

2024 will be a make-or-break year for the $42 billion taxpayer-funded Broadband Equity, Access, and Deployment program administered by the National Telecommunications and Information Administration. . . .

2024 will be a make-or-break year for the $42 billion taxpayer-funded Broadband Equity, Access, and Deployment program administered by the National Telecommunications and Information Administration.

Making BEAD a success and achieving the Biden Administration’s vision of “Internet for All” requires an ambitious “all-of-the-above approach” from federal, state, and local policymakers, who must take much-needed action on utility-pole reform, an often-neglected issue that is critical to ensuring unserved, rural communities are connected to reliable, high-speed internet.

Read the full piece here.

AMICUS BRIEFS

Amicus of IP Law Experts to the 2nd Circuit in Hachette v Internet Archive

INTEREST OF AMICI CURIAE Amici Curiae are 24 former government officials, former judges, and intellectual property scholars who have developed copyright law and policy, researched . . .

INTEREST OF AMICI CURIAE

Amici Curiae are 24 former government officials, former judges, and intellectual property scholars who have developed copyright law and policy, researched and written about copyright law, or both. They are concerned about ensuring that copyright law continues to secure both the rights of authors and publishers in creating and disseminating their works and the rights of the public in accessing these works. It is vital for this Court to maintain this balance between creators and the public set forth in the constitutional authorization to Congress to create the copyright laws. Amici have no stake in the parties or in the outcome of the case. The names and affiliations of the members of the Amici are set forth in Addendum A below.[1]

SUMMARY OF ARGUMENT

Copyright fulfills its constitutional purpose to incentivize the creation and dissemination of new works by securing to creators the exclusive rights of reproduction and distribution. 17 U.S.C. § 106. Congress narrowly tailored the exceptions to these rights to avoid undermining the balanced system envisioned by the Framers. See 17 U.S.C. §§ 107–22. As James Madison recognized, the “public good fully coincides . . . with the claims of individuals” in the protection of copyright. The Federalist No. 43, at 271–72 (James Madison) (Clinton Rossiter ed., 1961). Internet Archive (“IA”) and its amici wrongly frame copyright’s balance of interests as between the incentive to create, on the one hand, and the public good, on the other hand. That is not the balance that copyright envisions.

IA’s position also ignores the key role that publishers serve in the incentives copyright offers to authors and other creators. Few authors, no matter how intellectually driven, will continue to perfect their craft if the economic rewards are insufficient to meet their basic needs. As the Supreme Court observed, “copyright law celebrates the profit motive, recognizing that the incentive to profit from the exploitation of copyrights will redound to the public benefit by resulting in the proliferation of knowledge.” Eldred v. Ashcroft, 537 U.S. 186, 212 n.18 (2003) (quoting Am. Geophysical Union v. Texaco Inc., 802 F. Supp. 1, 27 (S.D.N.Y. 1992)). Accordingly, the Supreme Court and Congress have long recognized that copyright secures the fruits of intermediaries’ labors in their innovative development of distribution mechanisms of authors’ works. Copyright does not judge the value of a book by its cover price. Rather, core copyright policy recognizes that the profit motive drives the willingness ex ante to invest time and resources in creating both copyrighted works and the means to distribute them. In sum, commercialization is fundamental to a functioning copyright system that achieves its constitutional purpose.

IA’s unauthorized reproduction and duplication of complete works runs roughshod over this framework. Its concept of controlled digital lending (CDL) does not fall into any exception—certainly not any conception of fair use recognized by the courts or considered by Congress—and thus violates copyright owners’ exclusive rights. Expanding the fair use doctrine to immunize IA’s wholesale copying would upend Congress’s carefully considered, repeated rejections of similar proposals.

Hoping to excuse its disregard for copyright law, IA and its amici attempt to turn the fair use analysis on its head. They acknowledge that the first sale exception does not permit CDL, as this Court made clear in Capitol Records, LLC v. ReDigi Inc., 910 F.3d 649 (2d Cir. 2018).[2] They also are aware that courts consistently have rejected variations on the argument that wholesale copying, despite a format shift, is permissible under fair use.[3] Nevertheless, IA and its amici ask this Court, for the first time in history, to create a first sale-style exemption within the fair use analysis. CDL is not the natural evolution of libraries in the digital age; rather, like Frankenstein’s monster, it is an abomination composed of disparate parts of copyright doctrine. If endorsed by this Court, it would undermine the constitutional foundation of copyright jurisprudence and the separation of powers.

The parties and other amici address the specific legal doctrines, as well as the technical and commercial context in which these doctrinal requirements apply in this case, and thus Amici provide additional information on the nature and function of copyright that should inform this Court’s analysis and decision.

First, although IA and its amici argue that there are public benefits to the copying in which IA has engaged that support a finding that CDL is fair use, their arguments ignore that copyright itself promotes the public good and the inevitable harms that would result if copyright owners were unable to enforce their rights against the wholesale, digital distribution of their works by IA.

Second, IA’s assertion of the existence of a so-called “digital first sale” doctrine—a principle that, unlike the actual first sale statute, would permit the reproduction, as well as the distribution, of copyrighted works—is in direct conflict with Congress and the Copyright Office’s repeated study (and rejection) of similar proposals. Physical and digital copies simply are different, and it is not an accident that first sale applies only to the distribution of physical copies. Ignoring decades of research and debate, IA pretends instead that Congress has somehow overlooked digital first sale, yet left it open to the courts to engage in policymaking by shoehorning it into the fair use doctrine. By doing so, IA seeks to thwart the democratic process to gain in the courts what CDL’s proponents have not been able to get from Congress.

Third, given that there is no statutory support for CDL, most libraries offer their patrons access to digital works by entering into licensing agreements with authors and their publishers. Although a minority of libraries have participated in IA’s CDL practice, and a few have filed amicus briefs in support of IA in this Court, the vast majority of libraries steer clear because they recognize that wholesale copying and distribution deters the creation of new works. As author Sandra Cisneros understands: “Real libraries do not do what Internet Archive does.” A-250 (Cisneros Decl.) ¶12. There are innumerable ways of accessing books, none of which require authors and publishers to live in a world where their books are illegally distributed for free.

No court has ever found that reproducing and giving away entire works—en masse, without permission, and without additional comment, criticism, or justification—constitutes fair use. IA’s CDL theory is a fantasy divorced from the Constitution, the laws enacted by Congress, and the longstanding policies that have informed copyright jurisprudence. This Court should reject IA’s effort to erase authors and publishers from the copyright system.

[1] The parties have consented to the filing of this brief. Amici Curiae and their counsel authored this brief. Neither a party, its counsel, nor any person other than Amici and their counsel contributed money that was intended to fund preparing or submitting this brief.

[2] See SPA-38 (“IA accepts that ReDigi forecloses any argument it might have under Section 109(a).”); Dkt. 60, Brief for Defendant-Appellant Internet Archive (hereinafter “IA Br.”) (appealing only the district court’s decision on fair use).

[3] See, e.g., ReDigi, 910 F.3d at 662; UMG Recordings, Inc. v. MP3.com, Inc., 92 F. Supp. 2d 349, 352 (S.D.N.Y. 2000); see also Disney Enters., Inc. v. VidAngel, Inc., 869 F.3d 848, 861–62 (9th Cir. 2017).

ICLE Amicus in RE: Gilead Tenofovir Cases

Dear Justice Guerrero and Associate Justices, In accordance with California Rule of Court 8.500(g), we are writing to urge the Court to grant the Petition . . .

Dear Justice Guerrero and Associate Justices,

In accordance with California Rule of Court 8.500(g), we are writing to urge the Court to grant the Petition for Review filed by Petitioner Gilead Sciences, Inc. (“Petitioner” or “Gilead”) on February 21, 2024, in the above-captioned matter.

We agree with Petitioner that the Court of Appeal’s finding of a duty of reasonable care in this case “is such a seismic change in the law and so fundamentally wrong, with such grave consequences, that this Court’s review is imperative.” (Pet. 6.) The unprecedented duty of care put forward by the Court of Appeal—requiring prescription drug manufacturers to exercise reasonable care toward users of a current drug when deciding when to bring a new drug to market (Op. 11)—would have far-reaching, harmful implications for innovation that the Court of Appeal failed properly to weigh.

If upheld, this new duty of care would significantly disincentivize pharmaceutical innovation by allowing juries to second-guess complex scientific and business decisions about which potential drugs to prioritize and when to bring them to market. The threat of massive liability simply for not developing a drug sooner would make companies reluctant to invest the immense resources needed to bring new treatments to patients. Perversely, this would deprive the public of lifesaving and less costly new medicines. And the prospective harm from the Court of Appeal’s decision is not limited only to the pharmaceutical industry.

We urge the Court to grant the Petition for Review and to hold that innovative firms do not owe the users of current products a “duty to innovate” or a “duty to market”—that is, that firms cannot be held liable to users of a current product for development or commercialization decisions on the basis that those decisions could have facilitated the introduction of a less harmful, alternative product.

Interest of Amicus Curiae

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates. It also has longstanding expertise in evaluating law and policy relating to innovation and the legal environment facing commercial activity. In this letter, we wish to briefly highlight some of the crucial considerations concerning the effect on innovation incentives that we believe would arise from the Court of Appeal’s ruling in this case.[1]

The Court of Appeal’s Duty of Care Standard Would Impose Liability Without Requiring Actual “Harm”

The Court of Appeal’s ruling marks an unwarranted departure from decades of products-liability law requiring plaintiffs to prove that the product that injured them was defective. Expanding liability to products never even sold is an unprecedented, unprincipled, and dangerous approach to product liability. Plaintiffs’ lawyers may seek to apply this new theory to many other beneficial products, arguing manufacturers should have sold a superior alternative sooner. This would wreak havoc on innovation across industries.

California Civil Code § 1714 does not impose liability for “fail[ing] to take positive steps to benefit others,” (Brown v. USA Taekwondo (2021) 11 Cal.5th 204, 215), and Plaintiffs did not press a theory that the medicine they received was defective. Moreover, the product included all the warnings required by federal and state law. Thus, Plaintiffs’ case—as accepted by the Court of Appeal—is that they consumed a product authorized by the FDA and were fully aware of its potential side effects, but that they might have had fewer side effects had Gilead decided to accelerate (against some indefinite baseline) the development of an alternative medicine. To call this a speculative harm is an understatement, and to dismiss Gilead’s conduct as unreasonable because it was motivated by a crass profit motive, (Op. at 32), elides many complicated facts that belie such a facile assertion.

A focus on the narrow question of profits for a particular drug misunderstands the inordinate complexity of pharmaceutical development and risks seriously impeding the rate of drug development overall. Doing so

[over-emphasizes] the recapture of “excess” profits on the relatively few highly profitable products without taking into account failures or limping successes experienced on the much larger number of other entries. If profits were held to “reasonable” levels on blockbuster drugs, aggregate profits would almost surely be insufficient to sustain a high rate of technological progress. . . . If in addition developing a blockbuster is riskier than augmenting the assortment of already known molecules, the rate at which important new drugs appear could be retarded significantly. Assuming that important new drugs yield substantial consumers’ surplus untapped by their developers, consumers would lose along with the drug companies. Should a tradeoff be required between modestly excessive prices and profits versus retarded technical progress, it would be better to err on the side of excessive profits. (F. M. Scherer, Pricing, Profits, and Technological Progress in the Pharmaceutical Industry, 7 J. Econ. Persp. 97, 113 (1993)).

Indeed, Plaintiffs’ claim on this ground is essentially self-refuting. If the “superior” product they claim was withheld for “profit” reasons was indeed superior, then Petitioner could have expected to make a superior return on that product. Thus, Plaintiffs claim they were “harmed” by not having access to a product that Petitioner was not yet ready to market, even though Petitioner had every incentive to release a potentially successful alternative as soon as possible, subject to a complex host of scientific and business considerations affecting the timing of that decision.

Relatedly, the Court of Appeal’s decision rests on the unfounded assumption that Petitioner “knew” TAF was safer than TDF after completing Phase I trials. This ignores the realities of the drug development process and the inherent uncertainty of obtaining FDA approval, even after promising early results. Passing Phase I trials, which typically involve a small number of healthy volunteers, is a far cry from having a marketable drug. According to the Biotechnology Innovation Organization, only 7.9% of drugs that enter Phase I trials ultimately obtain FDA approval.[2] (Biotechnology Innovation Organization, Clinical Development Success Rates and Contributing Factors 2011-2020, Fig. 8b (2021), available at https://perma.cc/D7EY-P22Q.) Even after Phase II trials, which assess efficacy and side effects in a larger patient population, the success rate is only about 15.1%. (Id.) Thus, at the time Gilead decided to pause TAF development, it faced significant uncertainty about whether TAF would ever reach the market, let alone ultimately prove safer than TDF.

Moreover, the clock on Petitioner’s patent exclusivity for TAF was ticking throughout the development process. Had Petitioner “known” that TAF was a safer and more effective drug, it would have had every incentive to bring it to market as soon as possible to maximize the period of patent protection and the potential to recoup its investment. The fact that Petitioner instead chose to focus on TDF strongly suggests that it did not have the level of certainty the Court of Appeal attributed to it.

Although conventional wisdom has often held otherwise, economists generally dispute the notion that companies have an incentive to unilaterally suppress innovation for economic gain.

While rumors long have circulated about the suppression of a new technology capable of enabling automobiles to average 100 miles per gallon or some new device capable of generating electric power at a fraction of its current cost, it is rare to uncover cases where a worthwhile technology has been suppressed altogether. (John J. Flynn, Antitrust Policy, Innovation Efficiencies, and the Suppression of Technology, 66 Antitrust L.J. 487, 490 (1998)).

Calling such claims “folklore,” the economists Armen Alchian and William Allen note that, “if such a [technology] did exist, it could be made and sold at a price reflecting the value of [the new technology], a net profit to the owner.” (Armen A. Alchian & William R. Allen, Exchange & Production: Competition, Coordination, & Control (1983), at 292). Indeed, “even a monopolist typically will have an incentive to adopt an unambiguously superior technology.” (Joel M. Cohen and Arthur J. Burke, An Overview of the Antitrust Analysis of Suppression of Technology, 66 Antitrust L.J. 421, 429 n. 28 (1998)). While nominal suppression of technology can occur for a multitude of commercial and technological reasons, there is scant evidence that doing so coincides with harm to consumers, except where doing so affirmatively interferes with market competition under the antitrust laws—a claim not advanced here.

One reason the tort system is inapt for second-guessing commercial development and marketing decisions is that those decisions may be made for myriad reasons that do not map onto the specific safety concern of a products-liability action. For example, in the 1930s, AT&T abandoned the commercial development of magnetic recording “for ideological reasons. . . . Management feared that availability of recording devices would make customers less willing to use the telephone system and so undermine the concept of universal service.” (Mark Clark, Suppressing Innovation: Bell Laboratories and Magnetic Recording, 34 Tech. & Culture 516, 520-24 (1993)). One could easily imagine arguments that coupling telephones and recording devices would promote safety. But the determination of whether safety or universal service (and the avoidance of privacy invasion) was a “better” basis for deciding whether to pursue the innovation is not within the ambit of tort law (nor the capability of a products-liability jury). And yet, it would necessarily become so if the Court of Appeal’s decision were to stand.

A Proper Assessment of Public Policy Would Cut Strongly Against Adoption of the Court of Appeal’s Holding

The Court of Appeal notes that “a duty that placed manufacturers ‘under an endless obligation to pursue ever-better new products or improvements to existing products’ would be unworkable and unwarranted,” (Op. 10), yet avers that “plaintiffs are not asking us to recognize such a duty” because “their negligence claim is premised on Gilead’s possession of such an alternative in TAF; they complain of Gilead’s knowing and intentionally withholding such a treatment….” (Id).

From an economic standpoint, this is a distinction without a difference.

Both a “duty to invent” and a “duty to market” what is already invented would increase the cost of bringing any innovative product to market by saddling the developer with an expected additional (and unavoidable) obligation as a function of introducing the initial product, differing only perhaps by degree. Indeed, a “duty to invent” could conceivably be more socially desirable because in that case a firm could at least avoid liability by undertaking the process of discovering new products (a socially beneficial activity), whereas the “duty to market” espoused by the Court of Appeal would create only the opposite incentive—the incentive never to gain knowledge of a superior product on the basis of which liability might attach.[3]

And public policy is relevant. This Court in Brown v. Superior Court, (44 Cal. 3d 1049 (1988)), worried explicitly about the “[p]ublic policy” implications of excessive liability rules for the provision of lifesaving drugs. (Id. at 1063-65). As the Court in Brown explained, drug manufacturers “might be reluctant to undertake research programs to develop some pharmaceuticals that would prove beneficial or to distribute others that are available to be marketed, because of the fear of large adverse monetary judgments.” (Id. at 1063). The Court of Appeal agreed, noting that “the court’s decision [in Brown] was grounded in public policy concerns. Subjecting prescription drug manufacturers to strict liability for design defects, the court worried, might discourage drug development or inflate the cost of otherwise affordable drugs.” (Op. 29).

In rejecting the relevance of the argument here, however, the Court of Appeal (very briefly) argued a) that Brown espoused only a policy against burdening pharmaceutical companies with a duty stemming from unforeseeable harms, (Op. 49-50), and b) that the relevant cost here might be “some failed or wasted efforts,” but not a reduction in safety. (Op. 51).[4] Both of these claims are erroneous.

On the first, the legalistic distinction between foreseeable and unforeseeable harm was not, in fact, the determinative distinction in Brown. Rather, that distinction was relevant only because it maps onto the issue of incentives. In the face of unforeseeable, and thus unavoidable, harm, pharmaceutical companies would have severely diminished incentives to innovate. While foreseeable harms might also deter innovation by imposing some additional cost, these costs would be smaller, and avoidable or insurable, so that innovation could continue. To be sure, the Court wanted to ensure that the beneficial, risk-reduction effects of the tort system were not entirely removed from pharmaceutical companies. But that meant a policy decision that necessarily reduced the extent of tort-based risk optimization in favor of the manifest, countervailing benefit of relatively higher innovation incentives. That same calculus applies here, and it is this consideration, not the superficial question of foreseeability, that animated this Court in Brown.

On the second, the Court of Appeal inexplicably fails to acknowledge that the true cost of imposing excessive liability risk through a “duty to market” (or “duty to innovate”) is not limited to the expenditure of wasted resources; it also includes the non-expenditure of any resources at all. The court’s contention appears to contemplate that such a duty would not remove a firm’s incentive to innovate entirely, although it might deter it slightly by increasing its expected cost. But economic incentives operate at the margin. Even if there remains some profit incentive to continue to innovate, the imposition of liability risk simply for the act of innovating would necessarily reduce the amount of innovation (in some cases, and especially for some smaller companies less able to bear the additional cost, to the point of deterring innovation entirely). Even this reduction in incentive is itself a harm. The fact that some innovation may still occur despite the imposition of considerable liability risk is not a defense of imposing that risk; rather, it is a reason to question its desirability, exactly as this Court did in Brown.

The Court of Appeal’s Decision Would Undermine Development of Lifesaving and Safer New Medicines

Innovation is a long-term, iterative process fraught with uncertainty. At the outset of research and development, it is impossible to know whether a potential new drug will ultimately prove superior to existing drugs. Most attempts at innovation fail to yield a marketable product, let alone one that is significantly safer or more effective than its predecessors. Deciding whether to pursue a particular line of research depends on weighing myriad factors, including the anticipated benefits of the new drug, the time and expense required to develop it, and its financial viability relative to existing products. Sometimes, potentially promising drug candidates are not pursued fully, even if theoretically “better” than existing drugs to some degree, because the expected benefits are not sufficient to justify the substantial costs and risks of development and commercialization.

If left to stand, the Court of Appeal’s decision would mean that whenever this stage of development is reached for a drug that may offer any safety improvement, the manufacturer will face potential liability for failing to bring that drug to market, regardless of the costs and risks involved in its development or the extent of the potential benefit. Such a rule would have severe unintended consequences that would stifle innovation.

First, by exposing manufacturers to liability on the basis of early-stage research that has not yet established a drug candidate’s safety and efficacy, the Court of Appeal’s rule would deter manufacturers from pursuing innovations in the first place. Drug research involves constant iteration, with most efforts failing and the potential benefits of success highly uncertain until late in the process. If any improvement, no matter how small or tentative, could trigger liability for failing to develop the new drug, manufacturers will be deterred from trying to innovate at all.

Second, such a rule would force manufacturers to direct scarce resources to developing and commercializing drugs that offer only small or incremental benefits because failing to do so would invite litigation. This would necessarily divert funds away from research into other potential drugs that could yield greater advancements. Further, as each small improvement is made, it reduces the relative potential benefit from, and therefore the incentive to undertake, further improvements. Rather than promoting innovation, the Court of Appeal’s decision would create incentives that favor small, incremental changes over larger, riskier leaps with the greatest potential to significantly advance patient welfare.

Third, and conversely, the Court of Appeal’s decision would set an unrealistic and dangerous standard of perfection for drug development. Pharmaceutical companies should not be expected to bring only the “safest” version of a drug to market, as this would drastically increase the time and cost of drug development and deprive patients of access to beneficial treatments in the meantime.

Fourth, the threat of liability would lead to inefficient and costly distortions in how businesses organize their research and development efforts. To minimize the risk of liability, manufacturers may avoid integrating ongoing research into existing product lines, instead keeping the processes separate unless and until a potential new technology is developed that offers benefits so substantial as to clearly warrant the costs and liability exposure of its development in the context of an existing drug line. Such an incentive would prevent potentially beneficial innovations from being pursued and would increase the costs of drug development.

Finally, the ruling would create perverse incentives that could actually discourage drug companies from developing and introducing safer alternative drugs. If bringing a safer drug to market later could be used as evidence that the first-generation drug was not safe enough, companies may choose not to invest in developing improved versions at all in order to avoid exposing themselves to liability. This would, of course, directly undermine the goal of increasing drug safety overall.

The Court of Appeal gave insufficient consideration to these severe policy consequences of the duty it recognized. A manufacturer’s decision when to bring a potentially safer drug to market involves complex trade-offs that courts are ill-equipped to second-guess—particularly in the limited context of a products-liability determination.

Conclusion

The Court of Appeal’s novel “duty to market” any known, less-harmful alternative to an existing product would deter innovation to the detriment of consumers. The Court of Appeal failed to consider how its decision would distort incentives in a way that harms the very patients the tort system is meant to protect. This Court should grant review to address these important legal and policy issues and to prevent this unprecedented expansion of tort liability from distorting manufacturers’ incentives to develop new and better products.

[1] No party or counsel for a party authored or paid for this amicus letter in whole or in part.

[2] It is important to note that this number varies with the kind of medicine involved, but across all categories of medicines there is a high likelihood of failure subsequent to Phase I trials.

[3] To the extent the concern is with disclosure of information regarding a potentially better product, that is properly a function of the patent system, which requires public disclosure of new ideas in exchange for the receipt of a patent. (See Brenner v. Manson, 383 U.S. 519, 533 (1966) (“one of the purposes of the patent system is to encourage dissemination of information concerning discoveries and inventions.”)). Of course, the patent system preserves innovation incentives despite the mandatory disclosure of information by conferring an exclusive right to the inventor to use the new knowledge. By contrast, using the tort system as an information-forcing device in this context would impose risks and costs on innovation without commensurate benefit, ensuring less, rather than more, innovation.

[4] The Court of Appeal makes a related argument when it claims that “the duty does not require manufacturers to perfect their drugs, but simply to act with reasonable care for the users of the existing drug when the manufacturer has developed an alternative that it knows is safer and at least equally efficacious. Manufacturers already engage in this type of innovation in the ordinary course of their business, and most plaintiffs would likely face a difficult road in establishing a breach of the duty of reasonable care.” (Op. at 52-3).

COMMENTS & STATEMENTS

ICLE Comments to NTIA on Dual-Use Foundation AI Models with Widely Available Model Weights

I. Introduction We thank the National Telecommunications and Information Administration (NTIA) for the opportunity to contribute to this request for comments (RFC) in the “Dual . . .

I. Introduction

We thank the National Telecommunications and Information Administration (NTIA) for the opportunity to contribute to this request for comments (RFC) in the “Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights” proceeding. In these comments, we endeavor to offer recommendations to foster the innovative and responsible production of artificial intelligence (AI), encompassing both open-source and proprietary models. Our comments are guided by a belief in the transformative potential of AI, while recognizing NTIA’s critical role in guiding the development of regulations that not only protect consumers but also enable this dynamic field to flourish. The agency should seek to champion a balanced and forward-looking approach toward AI technologies that allows them to evolve in ways that maximize their social benefits, while navigating the complexities and challenges inherent in their deployment.

NTIA’s question “How should [the] potentially competing interests of innovation, competition, and security be addressed or balanced?”[1] gets to the heart of ongoing debates about AI regulation. There is no panacea to be discovered, as all regulatory choices require balancing tradeoffs. It is crucial to bear this in mind when evaluating, e.g., regulatory proposals that implicitly treat AI as inherently dangerous and regard it as obvious that stringent regulation is the only effective strategy to mitigate such risks.[2] Such presumptions discount AI’s unknown but potentially enormous capacity to produce innovation, and inadequately account for other tradeoffs inherent to imposing a risk-based framework (e.g., requiring disclosure of trade secrets or particular kinds of transparency that could yield new cybersecurity attack vectors). Adopting an overly cautious stance risks not only stifling AI’s evolution but also precluding a full exploration of its potential to foster social, economic, and technological advancement. A more restrictive regulatory environment may also render AI technologies more homogeneous and smother development of the kinds of diverse AI applications needed to foster robust competition and innovation.

We observe this problematic framing in the executive order (EO) that serves as the provenance of this RFC.[3] The EO repeatedly proclaims the importance of “[t]he responsible development and use of AI” in order to “mitigat[e] its substantial risks.”[4] Specifically, the order highlights concerns over “dual-use foundation models”—i.e., AI systems that, while beneficial, could pose serious risks to national security, national economic security, national public health, or public safety.[5] Concerningly, one of the categories the EO flags as illicit “dual use” is systems “permitting the evasion of human control or oversight through means of deception or obfuscation.”[6] This open-ended category could be interpreted so broadly that essentially any general-purpose generative-AI system would qualify.

The EO also repeatedly distinguishes “open” versus “closed” approaches to AI development, while calling for “responsible” innovation and competition.[7] On our reading, the emphasis the EO places on this distinction raises alarm bells about the administration’s inclination to stifle innovation through overly prescriptive regulatory frameworks, diminishment of the intellectual property rights that offer incentives for innovation, and regulatory capture that favors incumbents over new entrants. In favoring one model of AI development over another, the EO’s prescriptions could inadvertently hamper the dynamic competitive processes that are crucial both for technological progress and for the discovery of solutions to the challenges that AI technology poses.

Given the inchoate nature of AI technology—to say nothing of the uncertain markets in which that technology will ultimately be deployed and commercialized—NTIA has an important role to play in elucidating for policymakers the nuances that might lead innovators to choose an open or closed development model, without presuming that one model is inherently better than the other—or that either is necessarily “dangerous.” Ultimately, the preponderance of AI risks will almost certainly emerge idiosyncratically. It will be incumbent on policymakers to address such risks in an iterative fashion as they become apparent. For now, it is critical to resist the urge to enshrine crude and blunt categories for the heterogeneous suite of technologies currently gathered under the broad banner of “AI.”

Section II of these comments highlights the importance of grounding AI regulation in actual harms, rather than speculative risks, while outlining the diversity of existing AI technologies and the need for tailored approaches. Section III starts with discussion of some of the benefits and challenges posed by both open and closed approaches to AI development, while cautioning against overly prescriptive definitions of “openness” and advocating flexibility in regulatory frameworks. It proceeds to examine the EO’s prescription to regulate so-called “dual-use” foundation models, underscoring some potential unintended consequences for open-source AI development and international collaboration. Section IV offers some principles to craft an effective regulatory model for AI, including distinguishing between low-risk and high-risk applications, avoiding static regulatory approaches, and adopting adaptive mechanisms like regulatory sandboxes and iterative rulemaking. Section V concludes.

II. Risk Versus Harm in AI Regulation

In many of the debates surrounding AI regulation, disproportionate focus is placed on the need to mitigate risks, without sufficient consideration of the immense benefits that AI technologies could yield. Moreover, because these putative risks remain largely hypothetical, proposals to regulate AI descend quickly into an exercise in shadowboxing.

Indeed, there is no single coherent definition of what even constitutes “AI.” The term encompasses a wide array of technologies, methodologies, and applications, each with distinct characteristics, capabilities, and implications for society. From foundational models that can generate human-like text, to algorithms capable of diagnosing diseases with greater accuracy than human doctors, to “simple” algorithms that facilitate a more tailored online experience, AI applications and their underlying technologies are as varied as they are transformative.

This diversity has profound implications for the regulation and development of AI. Very different regulatory considerations are relevant to AI systems designed for autonomous vehicles than for those used in financial algorithms or creative-content generation. Each application domain comes with its own set of risks, benefits, ethical dilemmas, and potential social impacts, necessitating tailored approaches to each use case. And none of these properties of AI map clearly onto the “open” and “closed” designations highlighted by the EO and this RFC. This counsels for focus on specific domains and specific harms, rather than how such technologies are developed.[8]

As in prior episodes of fast-evolving technologies, what is considered cutting-edge AI today may be obsolete tomorrow. This rapid pace of innovation further complicates the task of crafting policies and regulations that will be both effective and enduring. Policymakers and regulators must navigate this terrain with a nuanced understanding of AI’s multifaceted nature, including by embracing flexible and adaptive regulatory frameworks that can accommodate AI’s continuing evolution.[9] A one-size-fits-all approach could inadvertently stifle innovation or entrench the dominance of a few large players by imposing barriers that disproportionately affect smaller entities or emerging technologies.

Experts in law and economics have long scrutinized both market conduct and regulatory rent seeking that serve to enhance or consolidate market power by disadvantaging competitors, particularly through increasing the costs incurred by rivals.[10] Various tactics may be employed to undermine competitors or exclude them from the market that do not involve direct price competition. It is widely recognized that “engaging with legislative bodies or regulatory authorities to enact regulations that negatively impact competitors” produces analogous outcomes.[11] It is therefore critical that the emerging markets for AI technologies not engender opportunities for firms to acquire regulatory leverage over rivals. Instead, recognizing the plurality of AI technologies and encouraging a multitude of approaches to AI development could help to cultivate a more vibrant and competitive ecosystem, driving technological progress forward and maximizing AI’s potential social benefits.

This overarching approach counsels skepticism about risk-based regulatory frameworks that fail to acknowledge how the theoretical harms of one type of AI system may be entirely different from those of another. Obviously, the regulation of autonomous drones is a very different sort of problem than the regulation of predictive policing or automated homework tutors. Even within a single circumscribed domain of generative AI—such as “smart chatbots” like ChatGPT or Claude—different applications may present entirely different kinds of challenges. A highly purpose-built version of such a system might be employed by government researchers to develop new materiel for the U.S. Armed Forces, while a general-purpose commercial chatbot would employ layers of protection to ensure that ordinary users couldn’t learn how to make advanced weaponry. Rather than treating “chatbots” as possible vectors for weapons development, a more appropriate focus would target high-capability systems designed to assist in developing such weaponry. Were a general-purpose chatbot inadvertently to reveal some information on building weapons, all incentives would push that AI’s creators to treat it as a bug to fix, not a feature to expand.

Take, for example, the recent public response to the much less problematic AI-system malfunctions that accompanied Google’s release of its Gemini program.[12] Gemini was found to generate historically inaccurate images, such as ethnically diverse U.S. senators from the 1800s, including women.[13] Google quickly acknowledged that it did not intend for Gemini to create inaccurate historical images and turned off the image-generation feature to allow time for the company to work on significant improvements before re-enabling it.[14] While Google blundered in its initial release, it had every incentive to discover and remedy the problem. The market response provided further incentive for Google to get it right in the future.[15] Placing the development of such systems under regulatory scrutiny because some users might be able to jailbreak a model and generate some undesirable material would create disincentives to the production of AI systems more generally, with little gained in terms of public safety.

Rather than focus on the speculative risks of AI, it is essential to ground regulation in the need to address tangible harms that stem from the observed impacts of AI technologies on society. Moreover, focusing on realistic harms would facilitate a more dynamic and responsive regulatory approach. As AI technologies evolve and new applications emerge, so too will the potential harms. A regulatory framework that prioritizes actual harms can adapt more readily to these changes, enabling regulators to update or modify policies in response to new evidence or social impacts. This flexibility is particularly important for a field like AI, where technological advancements could quickly outpace regulation, creating gaps in oversight that may leave individuals and communities vulnerable to harm.

Furthermore, like any other body of regulatory law, AI regulation must be grounded in empirical evidence and data-driven decision making. Demanding a solid evidentiary basis as a threshold for intervention would help policymakers to avoid the pitfalls of reacting to sensationalized or unfounded AI fears. This would not only enhance regulators’ credibility with stakeholders, but would also ensure that resources are dedicated to addressing the most pressing and substantial issues arising from the development of AI.

III. The Regulation of Foundation Models

NTIA is right to highlight the tremendous promise that attends the open development of AI technologies:

Dual use foundation models with widely available weights (referred to here as open foundation models) could play a key role in fostering growth among less resourced actors, helping to widely share access to AI’s benefits…. Open foundation models can be readily adapted and fine-tuned to specific tasks and possibly make it easier for system developers to scrutinize the role foundation models play in larger AI systems, which is important for rights- and safety-impacting AI systems (e.g. healthcare, education, housing, criminal justice, online platforms etc.)

…Historically, widely available programming libraries have given researchers the ability to simultaneously run and understand algorithms created by other programmers. Researchers and journals have supported the movement towards open science, which includes sharing research artifacts like the data and code required to reproduce results.[16]
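As a concrete and purely illustrative sketch of what “widely available weights” permits in practice, consider how a researcher might load and query an open-weight model using the open-source Hugging Face transformers library; the model identifier below is hypothetical and does not refer to any system discussed in the RFC.

# Illustrative sketch only; the model identifier is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/open-weights-7b"  # hypothetical open-weight model

# Because the weights are published, anyone can download, inspect, fine-tune,
# or benchmark the model locally -- the kind of scrutiny and reproducibility
# the RFC associates with open foundation models.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open foundation models can be adapted to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point is not that such access is costless or risk-free, but that publication of weights is what makes this kind of local adaptation and scrutiny possible at all; whether and when that openness is desirable is precisely the policy question the RFC raises.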

The RFC proceeds to seek input on how to define “open” and “widely available.”[17] These, however, are the wrong questions. NTIA should instead proceed from the assumption that there are no harms inherent to either “open” or “closed” development models; it should be seeking input on anything that might give rise to discrete harms in either open or closed systems.

NTIA can play a valuable role by recommending useful alterations to existing law where gaps currently exist, regardless of the business or distribution model employed by the AI developer. In short, there is nothing necessarily more or less harmful about adopting an “open” or a “closed” approach to software systems. The decision to pursue one path over the other will be made based on the relevant tradeoffs that particular firms face. Embedding such distinctions in regulation is arbitrary, at best, and counterproductive to the fruitful development of AI, at worst.

A. ‘Open’ or ‘Widely Available’ Model Weights

To the extent that NTIA is committed to drawing distinctions between “open” and “closed” approaches to developing foundation models, it should avoid overly prescriptive definitions of what constitutes “open” or “widely available” model weights that could significantly hamper the progress and utility of AI technologies.

Narrow definitions risk creating artificial boundaries that fail to accurately reflect AI’s technical and operational realities. They could also inadvertently exclude or marginalize innovative AI models that fall outside those rigid parameters, despite their potential to contribute positively to technological advancement and social well-being. For instance, a definition of “open” that requires complete public accessibility without any form of control or restriction might discourage organizations from sharing their models for fear of misuse or loss of intellectual property.

Moreover, prescriptive definitions could stifle the organic growth and evolution of AI technologies. The AI field is characterized by its rapid pace of change, where today’s cutting-edge models may become tomorrow’s basic tools. Prescribing fixed criteria for what constitutes “openness” or “widely available” risks anchoring the regulatory landscape to this specific moment in time, leaving the regulatory framework less able to adapt to future developments and innovations.

Given AI developers’ vast array of applications, methodologies, and goals, it is imperative that any definitions of “open” or “widely available” model weights embrace flexibility. A flexible approach would acknowledge how the various stakeholders within the AI ecosystem have differing needs, resources, and objectives, from individual developers and academic researchers to startups and large enterprises. A one-size-fits-all definition of “openness” would fail to accommodate this diversity, potentially privileging certain forms of innovation over others and skewing the development of AI technologies in ways that may not align with broader social needs.

Moreover, flexibility in defining “open” and “widely available” must allow for nuanced understandings of accessibility and control. There can, for example, be legitimate reasons to limit openness, such as protecting sensitive data, ensuring security, and respecting intellectual-property rights, while still promoting a culture of collaboration and knowledge sharing. A flexible regulatory approach would seek a balanced ecosystem where the benefits of open AI models are maximized, and potential risks are managed effectively.

B. The Benefits of ‘Open’ vs ‘Closed’ Business Models

NTIA asks:

What benefits do open model weights offer for competition and innovation, both in the AI marketplace and in other areas of the economy? In what ways can open dual-use foundation models enable or enhance scientific research, as well as education/training in computer science and related fields?[18]

An open approach to AI development has obvious benefits, as NTIA has itself acknowledged in other contexts.[19] Open-foundation AI models represent a transformative force, characterized by their accessibility, adaptability, and potential for widespread application across various sectors. The openness of these models may serve to foster an environment conducive to innovation, wherein developers, researchers, and entrepreneurs can build on existing technologies to create novel solutions tailored to diverse needs and challenges.

The inherent flexibility of open-foundation models can also catalyze a competitive market, encouraging a healthy ecosystem where entities ranging from startups to established corporations may all participate on roughly equal footing. By lowering some entry barriers related to access to basic AI technologies, this competitive environment can further drive technological advancements and price efficiencies, ultimately benefiting consumers and society at-large.

But more “closed” approaches can also prove very valuable. As NTIA notes in this RFC, it is rarely the case that a firm pursues a purely open or closed approach. These terms exist along a continuum, and firms blend models as necessary.[20] And just as firms readily mix elements of open and closed business models, a regulator should be agnostic about the precise mix that firms employ, which ultimately must align with the realities of market dynamics and consumer preferences.

Both open and closed approaches offer distinct benefits and potential challenges. For instance, open approaches might excel in fostering a broad and diverse ecosystem of applications, thereby appealing to users and developers who value customization and variety. They can also facilitate a more rapid dissemination of innovation, as they typically impose fewer restrictions on the development and distribution of new applications. Conversely, closed approaches, with their curated ecosystems, often provide enhanced security, privacy, and a more streamlined user experience. This can be particularly attractive to users less inclined to navigate the complexities of open systems. Under the right conditions, closed systems can likewise foster a healthy ecosystem of complementary products.

The experience of modern digital platforms demonstrates that there is no universally optimal approach to structuring business activities, thus illustrating the tradeoffs inherent in choosing between open and closed business models. The optimal choice depends on the specific needs and preferences of the relevant market participants. As Jonathan M. Barnett has noted:

Open systems may yield no net social gain over closed systems, can pose a net social loss under certain circumstances, and . . . can impose a net social gain under yet other circumstances.[21]

Similar considerations apply in the realm of AI development. Closed or semi-closed ecosystems can offer such advantages as enhanced security and curated offerings, which may appeal to certain users and developers. These benefits, however, may come at the cost of potentially limited innovation, as a firm must rely on its own internal processes for research and development. Open models, on the other hand, while fostering greater collaboration and creativity, may also introduce risks related to quality control, intellectual-property protection, and a host of other concerns that may be better controlled in a closed business model. Even along innovation dimensions, closed platforms can in many cases outperform open models.

With respect to digital platforms like the App Store and Google Play Store, there is a “fundamental welfare tradeoff between two-sided proprietary…platforms and two-sided platforms which allow ‘free entry’ on both sides of the market.”[22] Consequently, “it is by no means obvious which type of platform will create higher product variety, consumer adoption and total social welfare.”[23]

To take another example, consider the persistently low adoption rates for consumer versions of the open-source Linux operating system, versus more popular alternatives like Windows or MacOS.[24] A closed model like Apple’s MacOS is able to outcompete open solutions by better leveraging network effects and developing a close relationship with end users.[25] Even in this example, adoption of open versus closed models varies across user types, with, e.g., developers showing a strong preference for Linux over Mac, and only a slight preference for Windows over Linux.[26] This underscores the point that the suitability of an open or closed model varies not only by firm and product, nor even solely by user, but by the unique fit of a particular model for a particular user in a particular context. Many of those Linux-using developers will likely not use it on their home computing device, for example, even if they prefer it for work.

The dynamics among consumers and developers further complicate prevailing preferences for open or closed models. For some users, the security and quality assurance provided by closed ecosystems outweigh the benefits of open systems’ flexibility. On the developer side, more controlled ecosystems can lower barriers to entry by smoothing the transaction costs associated with developing and marketing applications, which can democratize application development and potentially lead to greater innovation within those ecosystems. Moreover, distinctions between open and closed models can play a critical role in shaping inter-brand competition. A regulator placing its thumb on the business-model scale would push the relevant markets toward less choice and lower overall welfare.[27]

By differentiating themselves through a focus on ease-of-use, quality, security, and user experience, closed systems contribute to a vibrant competitive landscape where consumers have clear choices between differing “brands” of AI. Forcing an AI developer to adopt practices that align with a regulator’s preconceptions about the relative value of “open” and “closed” risks homogenizing the market and diminishing the very competition that spurs innovation and consumer choice.

Consider some of the practical benefits sought by deployers when choosing between open and closed models. For example, it is not straightforward to say that closed is inherently better than open when considering issues of data sharing or security; even here, there are tradeoffs. Open innovation in AI—characterized by the sharing of data, algorithms, and methodologies within the research community and beyond—can mitigate many of the risks associated with model development. This openness fosters a culture of transparency and accountability, where AI models and their applications are subject to scrutiny by a broad community of experts, practitioners, and the general public. This collective oversight can help to identify and address potential safety and security concerns early in the development process, thus enhancing AI technologies’ overall trustworthiness.

By contrast, a closed system may implement and enforce standardized security protocols more quickly. A closed system may have a sharper, more centralized focus on providing data security to users, which may perform better along some dimensions. And while the availability of code may provide security in some contexts, in other circumstances, closed systems perform better.[28]

In considering ethical AI development, different types of firms should be free to experiment with different approaches, even blending them where appropriate. For example, Anthropic’s “Collective Constitutional AI” adopts what is arguably a “semi-open” model, blending proprietary elements with certain aspects of openness to foster innovation, while also maintaining a level of control.[29] This model might strike an appropriate balance, in that it ensures some degree of proprietary innovation and competitive advantage while still benefiting from community feedback and collaboration.

On the other hand, fully open-source development could lead to a different, potentially superior result that meets a broader set of needs through community-driven evolution and iteration. There is no way to determine, ex ante, that either an open or a closed approach to AI development will inherently provide superior results for developing “ethical” AI. Each has its place, and, most likely, the optimal solutions will involve elements of both approaches.

In essence, codifying a regulatory preference for one business model over the other would oversimplify the intricate balance of tradeoffs inherent to platform ecosystems. Economic theory and empirical evidence suggest that both open and closed platforms can drive innovation, serve consumer interests, and stimulate healthy competition, with all of these considerations depending heavily on context. Regulators should therefore aim for flexible policies that support coexistence of diverse business models, fostering an environment where innovation can thrive across the continuum of openness.

C. Dual-Use Foundation Models and Transparency Requirements

The EO and the RFC both focus extensively on so-called “dual-use” foundation models:

Foundation models are typically defined as, “powerful models that can be fine-tuned and used for multiple purposes.” Under the Executive Order, a “dual-use foundation model” is “an AI model that is trained on broad data; generally uses self-supervision, contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters….”[30]

But this framing will likely do more harm than good. As noted above, the terms “AI” or “AI model” are frequently invoked to refer to very different types of systems. Further defining these models as “dual use” is also unhelpful, as virtually any tool in existence can be “dual use” in this sense. Certainly, from a certain perspective, all software—particularly highly automated software—can pose a serious risk to “national security” or “safety.” Encryption and other privacy-protecting tools certainly fit this definition.[31] While it is crucial to mitigate harms associated with the misuse of AI technologies, the blanket treatment of all foundation models under this category is overly simplistic.

The EO identifies certain clear risks, such as the possibility that models could aid in the creation of chemical, biological, or nuclear weaponry. These categories are obvious subjects for regulatory control, but the EO then appears to open a giant definitional loophole that threatens to subsume virtually any useful AI system. It employs expansive terminology to describe a more generalized threat—specifically, that dual-use models could “[permit] the evasion of human control or oversight through means of deception or obfuscation.”[32] Such language could encompass a wide array of general-purpose AI models. Furthermore, by labeling systems capable of bypassing human decision-making as “dual use,” the order implicitly suggests that all AI could pose risks that warrant national-security levels of scrutiny.

Given the EO’s broad definition of AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” numerous software systems not typically even considered AI might be categorized as “dual-use” models.[33] Essentially, any sufficiently sophisticated statistical-analysis tool could qualify under this definition.

A significant repercussion of the EO’s very broad reporting mandates for dual-use systems, and one directly relevant to the RFC’s interest in promoting openness, is that these might chill open-source AI development.[34] Firms dabbling in AI technologies—many of which might not consider their projects to be dual use—might keep their initiatives secret until they are significantly advanced. Faced with the financial burden of adhering to the EO’s reporting obligations, companies that lack a sufficiently robust revenue model to cover both development costs and legal compliance might be motivated to dodge regulatory scrutiny in the initial phases, consequently dampening the prospects for transparency.

It is hard to imagine how open-source AI projects could survive in such an environment. Open-source AI code libraries like TensorFlow[35] and PyTorch[36] foster remarkable innovation by allowing developers to create new applications that use cutting-edge models. How could a paradigmatic startup developer working out of a garage genuinely commit to open-source development if tools like these fall under the EO’s jurisdiction? Restricting access to the weights that models use—let alone avoiding open-source development entirely—may hinder independent researchers’ ability to advance the forefront of AI technology.

Moreover, scientific endeavors typically benefit from the contributions of researchers worldwide, as collaborative efforts on a global scale are known to fast-track innovation. The pressure the EO applies to open-source development of AI tools could curtail international cooperation, thereby distancing American researchers from crucial insights and collaborations. For example, AI’s capacity to propel progress in numerous scientific areas is potentially vast—e.g., utilizing MRI images and deep learning for brain-tumor diagnoses[37] or employing machine learning to push the boundaries of materials science.[38] Such research does not benefit from stringent secrecy, but thrives on collaborative development. Enabling a broader community to contribute to and expand upon AI advancements supports this process.

Individuals respond to incentives. Just as well-intentioned seatbelt laws paradoxically led to an uptick in risky driving behaviors,[39] ill-considered obligations placed on open-source AI developers could unintentionally stifle the exchange of innovative concepts crucial to maintaining the United States’ leadership in AI innovation.

IV. Regulatory Models that Support Innovation While Managing Risks Effectively

In the rapidly evolving landscape of artificial intelligence, it is paramount to establish governance and regulatory frameworks that both encourage innovation and ensure safety and ethical integrity. An effective regulatory model for AI should be adaptive and principles-based, and it should foster a collaborative environment among regulators, developers, researchers, and the broader community. A number of principles can help in developing this regime.

A. Low-Risk vs High-Risk AI

First, a clear distinction should be made between low-risk AI applications that enhance operational efficiency or consumer experience and high-risk applications that could have significant safety implications. Low-risk applications like search algorithms and chatbots should be governed by a set of baseline ethical guidelines and best practices that encourage innovation, while ensuring basic standards are met. On the other hand, high-risk applications—such as those used by law enforcement or the military—would require more stringent review processes, including impact assessments, ethical reviews, and ongoing monitoring to mitigate potentially adverse effects.

Contrast this with the recently enacted AI Act in the European Union, and its decision to create presumptions of risk for general-purpose AI (GPAI) systems, such as large language models (LLMs), that present what the EU terms “systemic risk.”[40] Article 3(65) of the AI Act defines systemic risk as “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”[41]

This definition bears similarities to the “Hand formula” in U.S. tort law, which balances the burden of precautions against the probability and severity of potential harm to determine negligence.[42] The AI Act’s notion of systemic risk, however, is applied more broadly to entire categories of AI systems based on their theoretical potential for widespread harm, rather than on a case-by-case basis.
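For reference, the canonical statement of the Hand formula holds that a defendant is negligent if it failed to take a precaution whose burden (B) is less than the expected harm, that is, the probability of the harm occurring (P) multiplied by the gravity of the resulting loss (L):

  B < P × L

The formula thus embeds a case-specific weighing of costs and benefits, which is precisely the kind of particularized assessment that a category-wide designation of systemic risk forgoes.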

The designation of LLMs as posing “systemic risk” is problematic for several reasons. It creates a presumption of risk merely based on a GPAI system’s scale of operations, without any consideration of the actual likelihood or severity of harm in specific use cases. This could lead to unwarranted regulatory intervention and unintended consequences that hinder the development and deployment of beneficial AI technologies. And this broad definition of systemic risk gives regulators significant leeway to intervene in how firms develop and release their AI products, potentially blocking access to cutting-edge tools for European citizens, even in the absence of tangible harms.

While it is important to address potential risks associated with AI systems, the AI Act’s approach risks stifling innovation and hindering the development of beneficial AI technologies within the EU.

B. Avoid Static Regulatory Approaches

AI regulators are charged with overseeing a dynamic and rapidly developing market, and should therefore avoid erecting a rigid framework that forces new innovations into ill-fitting categories. The “regulatory sandbox” may provide a better model to balance innovation with risk management. By allowing developers to test and refine AI technologies in a controlled environment under regulatory oversight, sandboxes can be used to help identify and address potential issues before wider deployment, all while facilitating dialogue between innovators and regulators. This approach not only accelerates the development of safe and ethical AI solutions, but also builds mutual understanding and trust. Where possible, NTIA should facilitate policy experimentation with regulatory sandboxes in the AI context.

Meta’s Open Loop program is an example of this kind of experimentation.[43] This program is a policy prototyping research project focused on evaluating the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0.[44] The goal is to assess whether the framework is understandable, applicable, and effective in assisting companies to identify and manage risks associated with generative AI. It also provides companies an opportunity to familiarize themselves with the NIST AI RMF and its application in risk-management processes for generative AI systems. Additionally, it aims to collect data on existing practices and offer feedback to NIST, potentially influencing future RMF updates.

1. Regulation as a discovery process

Another key principle is to ensure that regulatory mechanisms are adaptive. Some examples of adaptive mechanisms are iterative rulemaking and feedback loops that allow regulations to be updated continuously in response to new developments and insights. Such mechanisms enable policymakers to respond swiftly to technological breakthroughs, ensuring that regulations remain relevant and effective, without stifling innovation.

Geoffrey Manne & Gus Hurwitz have recently proposed a framework for “regulation as a discovery process” that could be adapted to AI.[45] They argue for a view of regulation not merely as a mechanism for enforcing rules, but as a process for discovering information that can inform and improve regulatory approaches over time. This perspective is particularly pertinent to AI, where the pace of innovation and the complexity of technologies often outstrip regulators’ understanding and ability to predict future developments. This framework:

in its simplest formulation, asks regulators to consider that they might be wrong. That they might be asking the wrong questions, collecting the wrong information, analyzing it the wrong way—or even that Congress has given them the wrong authority or misunderstood the problem that Congress has tasked them to address.[46]

That is to say, an adaptive approach to regulation requires epistemic humility, with the understanding that, particularly for complex, dynamic industries:

there is no amount of information collection or analysis that is guaranteed to be “enough.” As Coase said, the problem of social cost isn’t calculating what those costs are so that we can eliminate them, but ascertaining how much of those social costs society is willing to bear.[47]

In this sense, modern regulators’ core challenge is to develop processes that allow for iterative development of knowledge, which is always in short supply. This requires a shift in how an agency conceptualizes its mission, from one of writing regulations to one of assisting lawmakers to assemble, filter, and focus on the most relevant and pressing information needed to understand a regulatory subject’s changing dynamics.[48]

As Hurwitz & Manne note, existing efforts to position some agencies as information-gathering clearinghouses suffer from a number of shortcomings—most notably, that they tend to operate on an ad hoc basis, reporting to Congress in response to particular exigencies.[49] The key to developing a “discovery process” for AI regulation would instead require setting up ongoing mechanisms to gather and report on data, as well as directing the process toward “specifications for how information should be used, or what the regulator anticipated to find in the information, prior to its collection.”[50]

Embracing regulation as a discovery process means acknowledging the limits of our collective knowledge about AI’s potential risks and benefits. This underscores why regulators should prioritize generating and utilizing new information through regulatory experiments, iterative rulemaking, and feedback loops. A more adaptive regulatory framework could respond to new developments and insights in AI technologies, thereby ensuring that regulations remain relevant and effective, without stifling innovation.

Moreover, Hurwitz & Manne highlight the importance of considering regulation as an information-producing activity.[51] In AI regulation, this could involve setting up mechanisms that allow regulators, innovators, and the public to contribute to and benefit from a shared pool of knowledge about AI’s impacts. This could include public databases of AI incidents, standardized reporting of AI-system performance, or platforms for sharing best practices in AI safety and ethics.

Static regulatory approaches may fail to capture the evolving landscape of AI applications and their societal implications. Instead, a dynamic, information-centric regulatory strategy that embraces the market as a discovery process could better facilitate beneficial innovations, while identifying and mitigating harms.

V. Conclusion

As the NTIA navigates the complex landscape of AI regulation, it is imperative to adopt a nuanced, forward-looking approach that balances the need to foster innovation with the imperatives of ensuring public safety and ethical integrity. The rapid evolution of AI technologies necessitates a regulatory framework that is both adaptive and principles-based, eschewing static snapshots of the current state of the art in favor of flexible mechanisms that could accommodate the dynamic nature of this field.

Central to this approach is recognizing that the field of AI encompasses a diverse array of technologies, methodologies, and applications, each with its distinct characteristics, capabilities, and implications for society. A one-size-fits-all regulatory model would not only be ill-suited to the task at hand, but would also risk stifling innovation and hindering the United States’ ability to maintain its leadership in the global AI industry. NTIA should focus instead on developing tailored approaches that distinguish between low-risk and high-risk applications, ensuring that regulatory interventions are commensurate with the potential identifiable harms and benefits associated with specific AI use cases.

Moreover, the NTIA must resist the temptation to rely on overly prescriptive definitions of “openness” or to favor particular business models over others. The coexistence of open and closed approaches to AI development is essential to foster a vibrant, competitive ecosystem that drives technological progress and maximizes social benefits. By embracing a flexible regulatory framework that allows for experimentation and iteration, the NTIA can create an environment conducive to innovation while still ensuring that appropriate safeguards are in place to mitigate potential risks.

Ultimately, the success of the U.S. AI industry will depend on the ability of regulators, developers, researchers, and the broader community to collaborate in developing governance frameworks that are both effective and adaptable. By recognizing the importance of open development and diverse business models, the NTIA can play a crucial role in shaping the future of AI in ways that promote innovation, protect public interests, and solidify the United States’ position as a global leader in this transformative field.

[1] Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights, Docket No. 240216-0052, 89 FR 14059, National Telecommunications and Information Administration (Mar. 27, 2024) at 14063, question 8(a) [hereinafter “RFC”].

[2] See, e.g., Kristian Stout, Systemic Risk and Copyright in the EU AI Act, Truth on the Market (Mar. 19, 2024), https://truthonthemarket.com/2024/03/19/systemic-risk-and-copyright-in-the-eu-ai-act.

[3] Exec. Order No. 14110, 88 F.R. 75191 (2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence?_fsi=C0CdBzzA [hereinafter “EO”].

[4] See, e.g., EO at §§ 1, 2(c), 5.2(e)(ii), 8(c).

[5] Id. at § 3(k).

[6] Id. at § 3(k)(iii).

[7] Id. at § 4.6. As NTIA notes, the administration refers to “widely available model weight,” which is equivalent to “open foundation models” in this proceeding. RFC at 14060.

[8] For more on the “open” vs “closed” distinction and its poor fit as a regulatory lens, see, infra, at nn. 19-41 and accompanying text.

[9] Adaptive regulatory frameworks are discussed, infra, at nn. 42-53 and accompanying text.

[10] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[11] See Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[12] Cindy Gordon, Google Pauses Gemini AI Model After Latest Debacle, Forbes (Feb. 29, 2024), https://www.forbes.com/sites/cindygordon/2024/02/29/google-latest-debacle-has-paused-gemini-ai-model/?sh=3114d093536c.

[13] Id.

[14] Id.

[15] Breck Dumas, Google Loses $96B in Value on Gemini Fallout as CEO Does Damage Control, Yahoo Finance (Feb. 28, 2024), https://finance.yahoo.com/news/google-loses-96b-value-gemini-233110640.html.

[16] RFC at 14060.

[17] RFC at 14062, question 1.

[18] RFC at 14062, question 3(a).

[19] Department of Commerce, Competition in the Mobile Application Ecosystem (2023), https://www.ntia.gov/report/2023/competition-mobile-app-ecosystem (“While retaining appropriate latitude for legitimate privacy, security, and safety measures, Congress should enact laws and relevant agencies should consider measures (such as rulemaking) designed to open up distribution of lawful apps, by prohibiting… barriers to the direct downloading of applications.”).

[20] RFC at 14061 (“‘openness’ or ‘wide availability’ of model weights are also terms without clear definition or consensus. There are gradients of ‘openness,’ ranging from fully ‘closed’ to fully ‘open’”).

[21] See Jonathan M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861, 1927 (2011).

[22] Id. at 2.

[23] Id. at 3.

[24]  Desktop Operating System Market Share Worldwide Feb 2023 – Feb 2024, statcounter, https://gs.statcounter.com/os-market-share/desktop/worldwide (last visited Mar. 27, 2024).

[25]  Andrei Hagiu, Proprietary vs. Open Two-Sided Platforms and Social Efficiency (Harv. Bus. Sch. Strategy Unit, Working Paper No. 09-113, 2006).

[26] Joey Sneddon, More Developers Use Linux than Mac, Report Shows, Omg Linux (Dec. 28, 2022), https://www.omglinux.com/devs-prefer-linux-to-mac-stackoverflow-survey.

[27] See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persp. 93, 110 (1994) (“[T]he primary cost of standardization is loss of variety: consumers have fewer differentiated products to pick from, especially if standardization prevents the development of promising but unique and incompatible new systems”).

[28] See, e.g., Nokia, Threat Intelligence Report 2020 (2020), https://www.nokia.com/networks/portfolio/cyber-security/threat-intelligence-report-2020; Randal C. Picker, Security Competition and App Stores, Network Law Review (Aug. 23, 2021), https://www.networklawreview.org/picker-app-stores.

[29] Collective Constitutional AI: Aligning a Language Model with Public Input, Anthropic (Oct. 17, 2023), https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input.

[30] RFC at 14061.

[31] Encryption and the “Going Dark” Debate, Congressional Research Service (2017), https://crsreports.congress.gov/product/pdf/R/R44481.

[32] EO at § 3(k)(iii).

[33] EO at § 3(b).

[34] EO at § 4.2 (requiring companies developing dual-use foundation models to provide ongoing reports to the federal government on their activities, security measures, model weights, and red-team testing results).

[35] An End-to-End Platform for Machine Learning, TensorFlow, https://www.tensorflow.org (last visited Mar. 27, 2024).

[36] Learn the Basics, PyTorch, https://pytorch.org/tutorials/beginner/basics/intro.html (last visited Mar. 27, 2024).

[37] Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov, & Taeg Keun Whangbo, Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging, 15(16) Cancers (Basel) 4172 (2023), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453020.

[38] Keith T. Butler, et al., Machine Learning for Molecular and Materials Science, 559 Nature 547 (2018), available at https://www.nature.com/articles/s41586-018-0337-2.

[39] The Peltzman Effect, The Decision Lab, https://thedecisionlab.com/reference-guide/psychology/the-peltzman-effect (last visited Mar. 27, 2024).

[40] European Parliament, European Parliament legislative Resolution of 13 March 2024 on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206, available at https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html [hereinafter “EU AI Act”].

[41] Id. at Art. 3(65).

[42] See Stephen G. Gilles, On Determining Negligence: Hand Formula Balancing, the Reasonable Person Standard, and the Jury, 54 Vanderbilt L. Rev. 813, 842-49 (2001).

[43] See Open Loop’s First Policy Prototyping Program in the United States, Meta, https://www.usprogram.openloop.org (last visited Mar. 27, 2024).

[44] Id.

[45] Justin (Gus) Hurwitz & Geoffrey A. Manne, Pigou’s Plumber: Regulation as a Discovery Process, SSRN (2024), available at https://laweconcenter.org/resources/pigous-plumber.

[46] Id. at 32.

[47] Id. at 33.

[48] See id. at 28-29.

[49] Id. at 37.

[50] Id. at 37-38.

[51] Id.

ICLE Announces Speakers Series Grants 2024-2025

PORTLAND, Ore. (Mar. 27, 2024) – The International Center for Law & Economics (ICLE) is excited to announce its second year of Speakers Series Grants . . .

PORTLAND, Ore. (Mar. 27, 2024) – The International Center for Law & Economics (ICLE) is excited to announce its second year of Speakers Series Grants to support law & economics scholarship on campus. These grants allow recipients to bring several speakers to their campus over the course of the academic year to present current scholarship.

In evaluating proposals for the 2024-2025 academic year, we will place special emphasis on bringing new voices into the law & economics community, including both more junior scholars and scholars from cognate disciplines such as business and engineering.

Proposals should include a list of potential invited speakers, a copy of your CV, and a short paragraph noting reasons you believe such a series will be well-received at your school, any challenges you anticipate facing, and opportunities you may have for speakers to engage with non-law faculty or students.

This program is modeled on workshop series common in law schools. That is not, however, intended to prescribe a format necessary to receive support through this program. We welcome applications from both law and non-law faculty, and encourage faculty to submit applications for support of programs that “break the mold” described in this call for proposals.

Typical awards will be $10,000. Awards will be made as a gift to the recipient’s school, restricted for the use of a speakers series with speakers identified at the sole discretion of the award recipient. Speakers invited to this series should be given the same level of support afforded to any other speaker invited to your school (e.g., in terms of room availability, promotion, scheduling, etc.). The precise administration of funds shall be at the discretion of the recipient. ICLE would anticipate the funds being used to support speakers’ travel and hotel costs, food and promotional materials, and a group dinner with the speaker. ICLE does not allow overhead to be charged against gifts of this size.

At the end of each semester, award recipients should report to ICLE the names and affiliations of speakers and the number of faculty and students who attended their presentations (including their disciplines if they are not all law faculty and students). At the end of the academic year, award recipients should report to ICLE the amount of unspent funds.

ICLE is planning to execute these series starting in Fall 2024, but is open to proposals that envision an earlier timeline. 

Please submit your proposal to [email protected] by Friday, May 31, 2024.

Bill C-59 and the Use of Structural Merger Presumptions in Canada

We, the undersigned, are scholars from the International Center for Law & Economics (ICLE) with experience in the academy, enforcement agencies, and private practice in . . .

We, the undersigned, are scholars from the International Center for Law & Economics (ICLE) with experience in the academy, enforcement agencies, and private practice in competition law. We write to address a key aspect of proposed amendments to Canadian competition law. Specifically, we focus on clauses in Bill C-59 pertinent to mergers and acquisitions and, in particular, the Competition Bureau’s recommendation that the Bill should:

Amend Clauses 249-250 to enact rebuttable presumptions for mergers consistent with those set out in the U.S. Merger Guidelines.[1]

The Bureau’s recommendation seeks to codify in Canadian competition law the structural presumptions outlined in the 2023 U.S. Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) Merger Guidelines.  On balance, however, adoption of that recommendation would impede, rather than promote, fair competition and the welfare of Canadian consumers.

The cornerstone of the proposed change lies in the introduction of rebuttable presumptions of illegality for mergers that exceed specified market-share or concentration thresholds. While this approach may seem intuitive, the economic literature and U.S. enforcement experience militate against its adoption in Canadian law.

The goal of enhancing—indeed, strengthening—Canadian competition law should not be conflated with the adoption of foreign regulatory guidelines. The most recent U.S. Merger Guidelines establish new structural thresholds, based primarily on the Herfindahl-Hirschman Index (HHI) and market share, to establish presumptions of anticompetitive effects and illegality. Those structural presumptions, adopted a few short months ago, are inconsistent with established economic literature and are untested in U.S. courts. Those U.S. guidelines should not be codified in Canadian law without robust deliberation to ensure alignment with Canadian legal principles, on the one hand, and with economic realities and evidence, on the other.

Three points are especially important. First, concentration measures are widely considered to be a poor proxy for the level of competition that prevails in a given market. Second, lower merger thresholds may lead to enforcement errors that discourage investment and entrepreneurial activity and allocate enforcement resources to the wrong cases. Finally, these risks are particularly acute when concentration thresholds are used not as useful indicators but, instead, as actual legal presumptions (albeit rebuttable ones). We discuss each of these points in more detail below.

What Concentration Measures Can and Cannot Tell Us About Competition

While the use of concentration measures and thresholds can provide a useful preliminary-screening mechanism to identify potentially problematic mergers, substantially lowering the thresholds to establish a presumption of illegality is inadvisable for several reasons.

First, too strong a reliance on concentration measures lacks economic foundation and is likely prone to frequent error. Economists have been studying the relationship between concentration and various potential indicia of anticompetitive effects—price, markup, profits, rate of return, etc.—for decades.[2] There are hundreds of empirical studies addressing this topic.[3]

The assumption that “too much” concentration is harmful assumes both that the structure of a market is what determines economic outcomes and that anyone could know what the “right” amount of concentration is. But as economists have understood since at least the 1970s (and despite an extremely vigorous, but futile, effort to show otherwise), market structure does not determine outcomes.[4]

This skepticism toward concentration measures as a guide for policy is well-supported, and is held by scholars across the political spectrum. To take one prominent, recent example, Professors Fiona Scott Morton (deputy assistant U.S. attorney general for economics in the DOJ Antitrust Division under President Barack Obama, now at Yale University); Martin Gaynor (former director of the FTC Bureau of Economics under President Obama, now serving as special advisor to Assistant U.S. Attorney General Jonathan Kanter, on leave from Carnegie Mellon University); and Steven Berry (an industrial-organization economist at Yale University) surveyed the industrial-organization literature and found that presumptions based on measures of concentration are unlikely to provide sound guidance for public policy:

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration.…

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates.[5]

As Chad Syverson recently summarized:

Perhaps the deepest conceptual problem with concentration as a measure of market power is that it is an outcome, not an immutable core determinant of how competitive an industry or market is… As a result, concentration is worse than just a noisy barometer of market power. Instead, we cannot even generally know which way the barometer is oriented.[6]

This does not mean that concentration measures have no use in merger screening. But because market concentration is driven by factors endogenous to each industry, it is often a poor guide to the goals of antitrust enforcement. Enforcers should therefore not lean too heavily on structural presumptions built on concentration measures, as these may fail to identify the cases in which enforcement would most benefit competition and consumers.

At What Level Should Thresholds Be Set?

Second, if concentration measures are to be used in some fashion, at what level or levels should they be set?
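For context, the HHI is calculated by summing the squares of each firm’s percentage market share, so it ranges from near zero (atomistic competition) to 10,000 (monopoly). A simple illustration with hypothetical figures: in a market of four firms with shares of 30, 30, 20, and 20 percent, the pre-merger HHI is 30² + 30² + 20² + 20² = 2,600; a merger of the two 20-percent firms raises it to 30² + 30² + 40² = 3,400, an increase of 800 points. As we read the two sets of guidelines, the 2010 guidelines attached a presumption of enhanced market power only where the post-merger HHI exceeded 2,500 and the increase exceeded 200 points, while the 2023 guidelines lower those triggers to a post-merger HHI above 1,800 with an increase of more than 100 points (or a combined share above 30 percent with a comparable increase), sweeping considerably more transactions into the presumption.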

The U.S. 2010 Horizontal Merger Guidelines were “based on updated HHI thresholds that more accurately reflect actual enforcement practice.”[7] These numbers were updated in 2023, but without clear justification. While the U.S. enforcement authorities cite several old cases (cases that implicated considerably higher levels of concentration than those in their 2023 guidelines), we agree with comments submitted in 2022 by now-FTC Bureau of Economics Director Aviv Nevo and colleagues, who argued against such a change. They wrote:

Our view is that this would not be the most productive route for the agencies to pursue to successfully prevent harmful mergers, and could backfire by putting even further emphasis on market definition and structural presumptions.

If the agencies were to substantially change the presumption thresholds, they would also need to persuade courts that the new thresholds were at the right level. Is the evidence there to do so? The existing body of research on this question is, today, thin and mostly based on individual case studies in a handful of industries. Our reading of the literature is that it is not clear and persuasive enough, at this point in time, to support a substantially different threshold that will be applied across the board to all industries and market conditions. (emphasis added) [8]

Lower merger thresholds create several risks. One is that such thresholds will lead to excessive “false positives”; that is, too many presumptions against mergers that are likely to be procompetitive or benign. This is particularly likely to occur if enforcers make it harder for parties to rebut the presumptions, e.g., by requiring stronger evidence the higher the parties are above the (now-lowered) threshold. Raising barriers to establishing efficiencies and other countervailing factors makes it more likely that procompetitive mergers will be blocked. This not only risks depriving consumers of lower prices and greater innovation in specific cases, but chills beneficial merger-and-acquisition activity more broadly. The prospect of an overly stringent enforcement regime discourages investment and entrepreneurial activity. It also allocates scarce enforcement resources to the wrong cases.

Changing the Character of Structural Presumptions

Finally, the risks described above are particularly acute, given the change in the character of structural presumptions described in the U.S. Merger Guidelines. The 2023 Merger Guidelines—and only the 2023 Merger Guidelines—state that certain structural features of mergers will raise a “presumption of illegality.”[9]

U.S. merger guidelines published in 1982,[10] 1992 (revised in 1997),[11] and 2010[12] all describe structural thresholds seen by the agencies as pertinent to merger screening. None of them mention a “presumption of illegality.” In fact, as the U.S. agencies put it in the 2010 Horizontal Merger Guidelines:

The purpose of these thresholds is not to provide a rigid screen to separate competitively benign mergers from anticompetitive ones, although high levels of concentration do raise concerns. Rather, they provide one way to identify some mergers unlikely to raise competitive concerns and some others for which it is particularly important to examine whether other competitive factors confirm, reinforce, or counteract the potentially harmful effects of increased concentration.[13]

The most worrisome category of mergers identified in the 1992 U.S. merger guidelines was that of mergers presumed “likely to create or enhance market power or facilitate its exercise.” The 1982 guidelines did not describe “presumptions” so much as note that certain mergers may be matters of “significant competitive concern” and “likely” to be subject to challenge.

Hence, earlier editions of the U.S. merger guidelines describe the ways that structural features of mergers might inform, but not determine, internal agency analysis of those mergers. That was useful information for industry, the bar, and the courts. Equally useful were descriptions of mergers that were “unlikely to have adverse competitive effects and ordinarily require no further analysis,”[14] as well as intermediate types of mergers that “potentially raise significant competitive concerns and often warrant scrutiny.”[15]

Similarly, the 1992 U.S. merger guidelines identified a tier of mergers deemed “unlikely to have adverse competitive effects and ordinarily require no further analysis,” as well as intermediate categories of mergers either unlikely to have anticompetitive effects or, in the alternative, potentially raising significant competitive concerns, depending on various factors described elsewhere in the guidelines.[16]

By way of contrast, the new U.S. guidelines include no description of any mergers that are unlikely to have adverse competitive effects. And while the new merger guidelines do stipulate that the “presumption of illegality can be rebutted or disproved,” they offer very limited means of rebuttal.

This is at odds with prior U.S. agency practice and established U.S. law. Until very recently, U.S. agency staff sought to understand proposed mergers under the totality of their circumstances, much as U.S. courts came to do. Structural features of mergers (among many others) might raise concerns of greater or lesser degrees. These might lead to additional questions in some instances; more substantial inquiries under a “second request” in a minority of instances; or, eventually, a complaint against a very small minority of proposed mergers. In the alternative, they might help staff avoid wasting scarce resources on mergers “unlikely to have anticompetitive effects.”

Prior to a hearing or a trial on the merits, there might be strong, weak, or no appreciable assessments of likely liability, but there was no prima facie determination of illegality.

And while U.S. merger trials did tend to follow a burden-shifting framework for plaintiff and defendant production, they too looked to the “totality of the circumstances”[17] and a transaction’s “probable effect on future competition”[18] to determine liability, and they looked away from strong structural presumptions. As then-U.S. Circuit Judge Clarence Thomas observed in the Baker-Hughes case:

General Dynamics began a line of decisions differing markedly in emphasis from the Court’s antitrust cases of the 1960s. Instead of accepting a firm’s market share as virtually conclusive proof of its market power, the Court carefully analyzed defendants’ rebuttal evidence.[19]

Central to the holding in Baker Hughes—and contra the 2023 U.S. merger guidelines—was that, because the government’s prima facie burden of production was low, the defendant’s rebuttal burden should not be unduly onerous.[20] As the U.S. Supreme Court had put it, defendants would not be required to clearly disprove anticompetitive effects, but rather, simply to “show that the concentration ratios, which can be unreliable indicators of actual market behavior . . . did not accurately depict the economic characteristics of the [relevant] market.”[21]

Doing so would not end the matter. Rather, “the burden of producing additional evidence of anticompetitive effects shifts to the government, and merges with the ultimate burden of persuasion, which remains with the government at all times.”[22]

As the U.S. Supreme Court decision in Marine Bancorporation underscores, even by 1974, it was well understood that concentration ratios “can be unreliable indicators” of market behavior and competitive effects.

As explained above, research and enforcement over the ensuing decades have undermined reliance on structural presumptions even further. As a consequence, the structure/conduct/performance paradigm has been largely abandoned, because it is widely recognized that market structure is not outcome-determinative.

That is not to say that high concentration cannot have any signaling value in preliminary agency screening of merger matters. But concentration metrics that have proven to be unreliable indicators of firm behavior and competitive effects should not be enshrined in Canadian statutory law. That would be a step back, not a step forward, for merger enforcement.

 

[1] Matthew Boswell, Letter to the Chair and Members of the House of Commons Standing Committee on Finance, Competition Bureau Canada (Mar. 1, 2024), available at https://sencanada.ca/Content/Sen/Committee/441/NFFN/briefs/SM-C-59_CompetitionBureauofCND_e.pdf.

[2] For a few examples from a very large body of literature, see, e.g., Steven Berry, Martin Gaynor, & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33 J. Econ. Perspectives 44 (2019); Richard Schmalensee, Inter-Industry Studies of Structure and Performance, in 2 Handbook of Industrial Organization 951-1009 (Richard Schmalensee & Robert Willig, eds., 1989); William N. Evans, Luke M. Froeb, & Gregory J. Werden, Endogeneity in the Concentration-Price Relationship: Causes, Consequences, and Cures, 41 J. Indus. Econ. 431 (1993); Steven Berry, Market Structure and Competition, Redux, FTC Micro Conference (Nov. 2017), available at https://www.ftc.gov/system/files/documents/public_events/1208143/22_-_steven_berry_keynote.pdf; Nathan Miller, et al., On the Misuse of Regressions of Price on the HHI in Merger Review, 10 J. Antitrust Enforcement 248 (2022).

[3] Id.

[4] See Harold Demsetz, Industry Structure, Market Rivalry, and Public Policy, 16 J. L. & Econ. 1 (1973).

[5] Berry, Gaynor, & Scott Morton, supra note 2.

[6] Chad Syverson, Macroeconomics and Market Power: Context, Implications, and Open Questions, 33 J. Econ. Persp. 23, 26 (2019).

[7] Joseph Farrell & Carl Shapiro, The 2010 Horizontal Merger Guidelines After 10 Years, 58 Rev. Ind. Org. 58 (2021), https://link.springer.com/article/10.1007/s11151-020-09807-6.

[8] John Asker et al., Comments on the January 2022 DOJ and FTC RFI on Merger Enforcement (Apr. 20, 2022), available at https://www.regulations.gov/comment/FTC-2022-0003-1847, at 15-16.

[9] U.S. Dep’t Justice & Fed. Trade Comm’n, Merger Guidelines (Guideline One) (Dec. 18, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[10] U.S. Dep’t Justice, 1982 Merger Guidelines (1982), https://www.justice.gov/archives/atr/1982-merger-guidelines.

[11] U.S. Dep’t Justice & Fed. Trade Comm’n, 1992 Merger Guidelines (1992), https://www.justice.gov/archives/atr/1992-merger-guidelines; U.S. Dep’t Justice & Fed. Trade Comm’n, 1997 Merger Guidelines (1997), https://www.justice.gov/archives/atr/1997-merger-guidelines.

[12] U.S. Dep’t Justice & Fed. Trade Comm’n, Horizontal Merger Guidelines (Aug. 19, 2010), https://www.justice.gov/atr/horizontal-merger-guidelines-08192010. The U.S. antitrust agencies also issued Vertical Merger Guidelines in 2020. Although these were formally withdrawn in 2021 by the FTC (but not the DOJ), they too are supplanted by the 2023 Merger Guidelines. See U.S. Dep’t Justice & Fed. Trade Comm’n, Vertical Merger Guidelines (Jun. 30, 2020), available at https://www.ftc.gov/system/files/documents/public_statements/1580003/vertical_merger_guidelines_6-30-20.pdf.

[13] 2010 Horizontal Merger Guidelines.

[14] Id.

[15] Id.

[16] 1992 Merger Guidelines.

[17]  United States v. Baker-Hughes Inc., 908 F.2d 981, 984 (D.C. Cir. 1990).

[18] Id. at 991.

[19] Id. at 990 (citing Hospital Corp. of Am. v. FTC, 807 F.2d 1381, 1386 (7th Cir. 1986), cert. denied, 481 U.S. 1038, 107 S.Ct. 1975, 95 L.Ed.2d 815 (1987)).

[20]  Id. at 987, 992.

[21]  United States v. Marine Bancorporation Inc., 418 U.S. 602, 631 (1974) (internal citations omitted).

[22]  Baker-Hughes, 908 F.2d at 983.

ICLE Statement on the EU’s AI Act

PORTLAND, Ore. (11 March 2024) – The International Center for Law & Economics (ICLE) offers the following statement from ICLE Senior Scholar Mikołaj Barczentewicz in . . .

PORTLAND, Ore. (11 March 2024) – The International Center for Law & Economics (ICLE) offers the following statement from ICLE Senior Scholar Mikołaj Barczentewicz in response to today’s vote by the European Parliament to adopt the Artificial Intelligence (AI) Act:

The AI Act focuses on restraining AI, while putting very little attention on supporting EU developers. It is difficult to say whether the AI Act will have much of an effect overall, positively or negatively. Its application will depend heavily on implementing rules that are yet to be designed. There is some hope that the whole framework will emerge as more friendly to innovation than a reading of the AI Act’s text would suggest.

What we do already know is that the AI Act does not address the key ways in which EU-based AI developers are held back by national and EU law. Developers face the risk of privacy and copyright laws being applied to them in disproportionate ways by myopic enforcers who do not consider technological and economic growth to be serious values. The chance for a considered legislative decision on how to address those difficult problems has been abandoned, with the false justification that existing laws provide sufficient clarity.

For more on the topic, see Mikołaj’s March 2022 issue brief on the AI Act, as well as ICLE’s more recent comments to the European Commission on competition in generative-AI markets. To schedule an interview with Mikołaj or other ICLE scholars about the topic, contact ICLE Media and Communications Manager Elizabeth Lincicome at [email protected] or (919) 744-8087.

ICLE Comments to FTC on Children’s Online Privacy Protection Rule NPRM

Introduction We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online . . .

Introduction

We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online Privacy Protection Rule (“COPPA Rule”).

The International Center for Law and Economics (ICLE) is a nonprofit, nonpartisan research center whose work promotes the use of law & economics methodologies to inform public-policy debates. We believe that intellectually rigorous, data-driven analysis will lead to efficient policy solutions that promote consumer welfare and global economic growth.[1]

ICLE’s scholars have written extensively on privacy and data-security issues, including those related to children’s online safety and privacy. We also previously filed comments as part of the COPPA Rule Review and will make some of the same points below.[2]

The Children’s Online Privacy Protection Act (COPPA) sought to strike a balance in protecting children without harming the utility of the internet for children. As Sen. Richard Bryan (D-Nev.) put it when he laid out the purpose of COPPA:

The goals of this legislation are: (1) to enhance parental involvement in a child’s online activities in order to protect the privacy of children in the online environment; (2) to enhance parental involvement to help protect the safety of children in online fora such as chatrooms, home pages, and pen-pal services in which children may make public postings of identifying information; (3) to maintain the security of personally identifiable information of children collected online; and (4) to protect children’s privacy by limiting the collection of personal information from children without parental consent. The legislation accomplishes these goals in a manner that preserves the interactivity of children’s experience on the Internet and preserves children’s access to information in this rich and valuable medium.[3]

In other words, COPPA was designed to protect children from online threats by promoting parental involvement in a way that also preserves a rich and vibrant marketplace for children’s content online. Consequently, the pre-2013 COPPA Rule did not define personal information to include persistent identifiers standing alone. It is these persistent identifiers that are critical for the targeted advertising that funds the interactive online platforms and the creation of children’s content the legislation was designed to preserve.

COPPA applies to the “operator of any website or online service” that is either “directed to children that collects personal information from children” or that has “actual knowledge that it is collecting personal information from a child.”[4] These operators must “obtain verifiable parental consent for the collection, use, or disclosure of personal information.” The NPRM, following the mistaken 2013 amendments to the COPPA Rule, continues to define “personal information” to include persistent identifiers that are necessary for the targeted advertising undergirding the internet ecosystem.
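To make concrete what a “persistent identifier” of this kind typically is, consider a minimal sketch (hypothetical code, standard-library Python only): an opaque, randomly generated token stored on the user’s device so that a returning browser can be recognized and shown relevant ads. Nothing in the token itself names, locates, or provides a means of contacting any person.

  # Hypothetical issuance of a persistent identifier: an opaque random token
  # used to recognize a returning browser for ad delivery and measurement.
  import uuid

  def issue_identifier():
      # Returns a random, content-free token such as '3f2c8a1e-...'.
      return str(uuid.uuid4())

  print(issue_identifier())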

Below, we argue that, before the FTC moves further toward restricting platform operators and content creators’ ability to monetize their work through targeted advertising, it must consider the economics of multisided platforms. The current path will lead to less available free content for children and more restrictions on their access to online platforms that depend on targeted advertising. Moreover, the proposed rules are inconsistent with the statutory text of COPPA, as persistent identifiers do not by themselves enable contacting specific individuals. Including them in the definition of “personal information” is also contrary to the statute’s purpose, as it will lead to a less vibrant internet ecosystem for children.

Finally, there are better ways to protect children online, including by promoting the use of available technological and practical solutions to avoid privacy harms. To comply with existing First Amendment jurisprudence regarding online speech, it is necessary to rely on these less-restrictive means to serve the goal of protecting children without unduly impinging on their speech interests online.

I. The Economics of Online Multisided Platforms

Most of the “operators of websites and online services” subject to the COPPA Rule are what economists call multisided markets, or platforms.[5] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multisided platforms generate “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[6]

Online platforms provide content to one side and access to potential consumers on the other side. In order to keep demand high, online platforms often offer free access to users, whose participation is subsidized by those participants on the other side of the platform (such as advertisers) that wish to reach them.[7] This creates a positive feedback loop in which greater participation on one side of the platform leads to greater participation on the other.
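This interdependence can be illustrated with a stylized revenue function (our own illustration; the notation is not drawn from the cited literature). Let $n_U(p_U)$ denote user participation, which falls as the user price $p_U$ rises, and let $R_A(n_U)$ denote advertising revenue, which rises with user participation. Platform revenue is then

$$\pi(p_U) \;=\; p_U \, n_U(p_U) \;+\; R_A\big(n_U(p_U)\big), \qquad n_U'(p_U) < 0,\; R_A'(n_U) > 0.$$

When $R_A$ rises steeply enough with user participation, the fees forgone by setting $p_U = 0$ are more than recouped on the advertising side, which is why free access for users is so common on ad-funded platforms.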

This dynamic is also true of platforms with content “directed to children.” Revenue is collected not from those users, but primarily from the other side of the platform—i.e., advertisers who pay for access to the platform’s users. To be successful, online platforms must keep enough—and the right type of—users engaged to maintain demand for advertising.

Moreover, many “operators” under COPPA are platforms that rely on user-generated content. Thus, they must also consider how to attract and maintain high-demand content creators, often accomplished by sharing advertising revenue. If platforms fail to serve the interests of high-demand content creators, those creators may leave the platform, thus reducing its value.

Online platforms acting within the market process are usually the parties best positioned to make decisions on behalf of platform users. Operators with content directed to children may even compete on privacy policies and protections for children by providing tools to help users avoid what they (or, in this context, their parents and guardians) perceive to be harms, while keeping users on the platform and maintaining value for advertisers.[8]

There may, however, be examples where negative externalities[9] stemming from internet use are harmful to society more broadly. A market failure could result, for instance, if platforms’ incentives lead them to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful to children or keeps them hooked on the platform.

In situations where there are negative externalities from internet use, there may be a case to regulate online platforms in various ways. Any case for regulation must, however, acknowledge potential transaction costs, as well as how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[10] and elaborated on in the subsequent literature,[11] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his candy-making machine is a potential cost to the doctor next door, who consequently cannot use his office to conduct certain testing. Simultaneously, the doctor moving his office next door to the confectioner is a potential cost to the confectioner’s ability to use his equipment.

In a world of well-defined property rights and low transaction costs, the initial allocation of rights would not matter, because the parties could bargain to overcome the harm in a mutually beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or conversely, the doctor could pay the confectioner to reduce the sound of his machines.[12] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[13]
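A worked example with purely hypothetical numbers (ours, not Coase’s) may help fix ideas. Suppose the doctor’s lost income from the noise is $L = 100$, soundproofing the office costs $S = 40$, and quieting the machinery costs the confectioner $C = 60$. The efficient outcome is whichever remedy is cheapest:

$$\min(S, C, L) \;=\; \min(40, 60, 100) \;=\; 40 \quad \text{(soundproof the office)}.$$

With zero transaction costs, the parties reach this outcome under either assignment of the right: if the confectioner may make noise, the doctor spends 40 on soundproofing; if the doctor may enjoin the noise, the confectioner pays for the soundproofing rather than incur 60 or 100. But if bargaining costs exceed the gains from trade, the initial assignment determines whether society bears 40, 60, or 100, which is why the burden should fall on the least-cost avoider.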

In the context of the COPPA Rule, website operators and online services create incredible value for their users, but they also can, at times, impose negative externalities on children who use their services. In the absence of transaction costs, it would not matter whether operators must obtain verifiable parental consent before collecting, using, or disclosing personal information, or whether the initial burden is placed on parents and children to avoid the harms associated with such collection, use, or disclosure.

But given that there are transaction costs involved in obtaining (and giving) verifiable parental consent,[14] it matters how the law defines personal information (which serves as a proxy for a property right, in Coase’s framing). If personal information is defined too broadly and the transaction costs for providers to gain verifiable parental consent are too high, the result may be that the societal benefits of children’s internet use will be lost, as platform operators restrict access beyond the optimum level.

The threat of liability for platform operators under COPPA also risks excessive collateral censorship.[15] This arguably has already occurred, as operators like YouTube have restricted content creators’ ability to monetize their work through targeted advertising, leading on balance to less children’s content. Wrongly placing the burden on operators to avoid harms associated with targeted advertising reduces societal welfare, including the welfare of children who no longer get the benefits of that content.

On the other hand, there are situations where website operators and online services are the least-cost avoiders. For example, they may be the parties best-placed to monitor and control harms associated with internet use in cases where it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[16] In other words, operators should still be held liable under COPPA when they facilitate adults’ ability to message children, or to identify a child’s location without parental consent, in ways that could endanger children.[17] Placing the burden on children or their parents to avoid such harms could allow operators to impose un- or undercompensated harms on society.

Thus, in order to get the COPPA Rule’s balance right, it is important to determine whether it is the operators or their users who are the least-cost avoiders. Placing the burden on the wrong parties would harm societal welfare, either by reducing the value that online platforms confer on their users or by placing more uncompensated negative externalities on society.

II. Persistent Identifiers and ‘Personal Information’

As mentioned above, under COPPA, a website operator or online service that is either directed to children or that has actual knowledge that it collects personal information from a child must obtain “verifiable parental consent” for the “collection, use or disclosure” of that information.[18] But the NPRM continues to apply the expanded definition of “personal information” to include persistent identifiers from the 2013 amendments.

COPPA defines personal information as “individually identifiable information” collected online.[19] The legislation lists examples such as first and last name; home or other physical address; email address; telephone number; and Social Security number.[20] These are all identifiers obviously connected to people’s real identities. COPPA does empower the FTC to add other identifiers to the definition, but any such identifier must permit “the physical or online contacting of a specific individual”[21] or constitute “information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.”[22]

In 2013, the FTC amended the definition of personal information to include:

A persistent identifier that can be used to recognize a user over time and across different Web sites or online services. Such persistent identifier includes, but is not limited to, a customer number held in a cookie, an Internet Protocol (IP) address, a processor or device serial number, or unique device identifier.[23]

The NPRM here continues this error.

Neither IP addresses nor device identifiers alone “permit the physical or online contacting of a specific individual,” as required by 15 U.S.C. § 6501(8)(F). A website or app could not determine a person’s identity, or whether that person is an adult or a child, from these pieces of information alone. In order for persistent identifiers, like those relied upon for targeted advertising, to count as personal information under 15 U.S.C. § 6501(8)(G), they need to be combined with other identifiers listed in the definition. In other words, it is only when a persistent identifier is combined with a first and last name, an address, an email address, a phone number, or a Social Security number that it should be considered personal information protected by the statute.

While administrative agencies receive Chevron deference in court challenges when definitions are ambiguous, this text, when illuminated by canons of statutory construction,[24] is clear. The canon of ejusdem generis applies when general words follow an enumeration of two or more things.[25] The general words are taken to apply only to persons or things of the same general kind or class as those mentioned specifically. Persistent identifiers, such as cookies, bear little resemblance to the other examples of “personally identifiable information” listed in the statute, such as first and last name, address, phone, email, or Social Security number. Only when combined with such information could a persistent identifier become personal information.

The NPRM states that the Commission is “not persuaded” by this line of argumentation, pointing back to the same reasoning offered in the 2013 amendments. It asserts that “the reality that at any given moment a specific individual is using that device” is what “underlies the very premise behind behavioral advertising.”[26] Moreover, the NPRM reasons that “while multiple people in a single home often use the same phone number, home address, and email address, Congress nevertheless defined these identifiers as ‘individually identifiable information’ in the COPPA statute.”[27] But this reasoning is flawed.

While multiple people regularly share an address, and sometimes even a phone number or email, each of these identifiers allows for contacting an individual person in a way that a persistent identifier simply does not. In each of those cases, bad actors can use such information to send direct messages to people (phone numbers and email addresses), find their physical location (address), and potentially cause them harm.

A persistent identifier, on its own, is not the same. Without subpoenaing an internet service provider (ISP) or virtual private network (VPN), a bad actor intending harm could neither locate the person to whom the persistent identifier is assigned nor message that person directly. Persistent identifiers are useful primarily to online platforms in supporting their internal operations (which the NPRM continues to allow) and serving users targeted advertising.

Moreover, the fact that bills seeking to update COPPA—proposed but never passed by Congress—would have expanded the definition of personal information to include persistent identifiers suggests that the FTC has asserted authority it does not have under the current statute.[28] Under Supreme Court precedent,[29] when considering whether an agency has the authority it claims to promulgate rules, courts must consider whether Congress has rejected proposals to expand the agency’s jurisdiction in similar ways.

The NPRM also ignores the practical realities of the relationship between parents and children when it comes to devices and internet use. Parental oversight is already built into any type of advertisement (including targeted ads) that children see. Few children can view those advertisements without their parents providing them a device and the internet access to do so. Even fewer children can realistically make their own purchases. Consequently, the NPRM misunderstands targeted advertising in the context of children’s content, which is not based on any knowledge about the users as individuals, but on the browsing and search history of the device they happen to be using.

Children under age 13, in particular, are extremely unlikely to have purchased the devices they use; to have paid for the internet access to use those devices; or to have any disposable income or means to pay for goods and services online. Thus, contrary to the NPRM’s assumptions, the actual “targets” of this advertising—even on websites or online services that host children’s content—are the children’s parents.

This NPRM continues the 2013 amendments’ mistake and will continue to greatly reduce the ability of children’s content to generate revenue through the use of relatively anonymous persistent identifiers. As we describe in the next section, the damage done by the 2013 amendments is readily apparent, and the Commission should take this opportunity to rectify the problem.

III. More Parental Consent, Less Children’s Content

As outlined above, in a world without transaction costs—or, at least, one in which such costs are sufficiently low—verifiable parental consent would not matter, because it would be extremely easy for a bargain to be struck between operators and parents. In the real world, however, transaction costs exist. In fact, despite the FTC’s best efforts under the COPPA Rule, the transaction costs associated with obtaining verifiable parental consent continue to be sufficiently high as to prevent most operators from seeking that consent for persistent identifiers. As we stated in our previous comments, the economics are simple: if content creators lose access to revenue from targeted advertising, there will be less content created from which children can benefit.

FIGURE 1: Supply Curve for Children’s Online Content

The supply curve for children’s online content shifts left as the marginal cost of monetizing it increases. The marginal cost of monetizing such content is driven upward by the higher compliance costs of obtaining verifiable parental consent before serving targeted advertising. This supply shift means that less online content will be created for children.
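The mechanism can be stated in a simple, purely illustrative form (the notation is ours, not the source’s). If the inverse supply of children’s content is $P = MC(Q) + c$, where $c$ captures the per-unit cost of complying with the consent requirement (or the targeted-ad revenue forgone by not complying), and demand is downward-sloping with $P = D(Q)$, then the equilibrium quantity $Q^{*}$ satisfies

$$D(Q^{*}) = MC(Q^{*}) + c \quad\Longrightarrow\quad \frac{dQ^{*}}{dc} \;=\; \frac{1}{D'(Q^{*}) - MC'(Q^{*})} \;<\; 0,$$

since $D' < 0$ and $MC' \geq 0$. Any increase in compliance costs thus reduces the equilibrium quantity of children’s content supplied.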

These results are not speculative at this point. Scholars who have studied the issue have found that the YouTube settlement, made pursuant to the 2013 amendments, has resulted in less child-directed online content, due to creators’ inability to monetize that content through targeted advertising. In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[30] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[31]

By requiring verifiable parental consent, the rule change and settlement increased the transaction costs imposed on online platforms that host content created by others. YouTube’s economically rational response was to restrict content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The result was less content created for children, including by driving out less-profitable content creators:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[32]

This is not the only finding regarding COPPA’s role in reducing the production of content for children. Morgan Reed—president of the App Association, a global trade association for small and medium-sized technology companies—presented extensively at the FTC’s 2019 COPPA Workshop.[33] Reed’s testimony detailed that the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children.

It is worth highlighting, in particular, Reed’s repeated use of the words “friction,” “restriction,” and “cost” to describe how COPPA’s institutional features affect the behavior of social-media platforms, parents, and children. While noting that general audience content is “unfettered, meaning that you do not feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” Reed said that COPPA-regulated apps and content are, by contrast, all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[34]

Reed’s use of the word “friction” is particularly enlightening. The economist Mike Munger of Duke University has often described transaction costs as frictions—explaining that, to consumers, all costs are transaction costs.[35] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents receive fewer, and lower-quality, children’s apps and content.

Thus, when the NPRM states that “the Commission [doesn’t] find compelling the argument that the 2013 persistent identifier modification has caused harm by hindering the ability of operators to monetize online content through targeted advertising,”[36] in part because “the 2013 Amendments permit monetization… through providing notice and seeking parental consent for the use of personal information for targeted advertising,”[37] it misses how transaction costs prevent this outcome. The FTC should not ignore the data provided by scholars who have researched the question, nor the direct testimony of app developers.

IV. Lower-Cost Ways to Avoid Harms to Children

Widely available practical and technological means are a lower-cost way to avoid the negative externalities associated with internet use, relative to verifiable-parental-consent laws. As NetChoice put it in the complaint the group filed against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[38]

NetChoice’s complaint recognized the subjective nature of negative externalities, stating:

Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[39]

They proceeded to list all the ways that parents can take control and help their children avoid online harms, including with respect to the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[40] Parents can also choose to use tools offered by cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[41]

NetChoice also pointed to wireless routers that allow parents to filter and monitor online content;[42] parental controls at the device level;[43] third-party filtering applications;[44] and numerous tools offered by NetChoice members that offer relatively low-cost monitoring and control by parents, or even by teen users acting on their own behalf.[45] Finally, they noted that, in response to market demand,[46] NetChoice members expend significant resources curating content to ensure that it is appropriate.[47]

Similarly, parents can protect their children’s privacy simply by taking control of the devices they allow their children to use. Tech-savvy parents can, if they so choose, install software or use ad-blockers to prevent collection of persistent identifiers.[48] Even less tech-savvy parents can make sure that their children are not subject to ads and tracking simply by monitoring their device usage and ensuring they only use YouTube Kids or other platforms created explicitly for children. In fact, most devices and operating systems now have built-in, easy-to-use controls that enable both monitoring and blocking of children’s access to specific apps and websites.[49]

This litany of less-restrictive means to accomplish the goal of protecting children online bears repeating, because even children have some First Amendment interests in receiving online speech.[50] If a court were to examine the COPPA Rule as a speech regulation that forecloses children’s access to online content, it would be subject to strict scrutiny. This means the rules would need to employ the least-restrictive means possible to fulfill the statute’s purpose. Educating parents and children on the available practical and technological means to avoid harms associated with internet use, including the collection of data for targeted advertising, would clearly be a less-restrictive alternative to a de facto ban on targeted advertising.

A less-restrictive COPPA rule could still enhance parental involvement and protect children from predators without significantly impairing the marketplace for children’s online content. Parents already have the ability to review their children’s content-viewing habits on devices they buy for them. A COPPA rule that enhances parental control by requiring verifiable parental consent before children share personal information—like first and last name, address, phone number, email address, or Social Security number—obviously makes sense, along with additions like geolocation data. But it is equally obvious that the relatively anonymized collection of persistent identifiers used to support targeted ads can be avoided, at lower cost, through practical and technological means, without requiring costly verifiable parental consent.

V. Perils of Bringing More Entities Under the COPPA Rule

The costs of the COPPA Rule would be further exacerbated by the NPRM’s proposal to modify the criteria for determining whether a site or service is directed toward children.[51] These proposed changes, particularly the reliance on third-party services and comparisons with “similar websites or online services,” raise significant concerns about both their practical implementation and potential unintended consequences. The latter could include further losses of online content for both children and adults, as content creators drawn into COPPA’s orbit lose access to revenue from targeted advertising.

The FTC’s current practice employs a multi-factor test to ascertain whether a site or service is directed at children under 13. This comprehensive approach considers various elements, including subject matter, visual and audio content, and empirical evidence regarding audience composition.[52] The proposed amendments aim to expand this test by introducing such factors as marketing materials, representations to third parties and, notably, reviews by users or third parties and comparisons with similar websites or services.[53]

The inclusion of third-party reviews and comparisons with similar services as factors in determining a site’s target audience introduces a level of ambiguity and unreliability that would be counterproductive to COPPA’s goals. Without clear standards to evaluate their competence or authority, relying on third-party reviews would leave operators without a solid foundation upon which to assess compliance. This ambiguity could lead to overcompliance. In particular, online platforms that carry third-party content may err on the side of caution in order to align with the spirit of the rule. This threatens to stifle innovation and free expression by restricting creators’ ability to monetize content that has any chance to be considered “directed to children.” Moreover, to avoid this loss of revenue, content creators could shift their focus exclusively to content clearly aimed only at adults, rather than that which could be interesting to adults and children alike.

Similarly, the proposal to compare operators with “similar websites or online services” is fraught with challenges. The lack of guidance on how to evaluate similarity or to determine which service sets the standard for compliance would increase burdens on operators, with little evidence of tangible benefits. It is also unclear who would make these determinations and how disputes would be resolved, leading to further compliance costs and potential litigation. Moreover, operators may be left in a position where it is impractical to accurately assess the audience of similar services, thereby further complicating compliance efforts.

Given these considerations, the FTC should not include reliance on third-party services or comparisons with similar websites or online services in its criteria for determining whether content is directed at children under 13. These approaches introduce a level of uncertainty and unreliability that could lead to overcompliance, increased costs, and unintended negative impacts on online content and services, including further restrictions on content creators who produce content interesting to both adults and children. Instead, the FTC should focus on providing clear, direct guidelines that allow operators to assess their compliance with COPPA confidently, without the need to rely on potentially biased or manipulable third-party assessments. This approach will better serve the FTC’s goal of protecting children’s online privacy, while ensuring a healthy, innovative online ecosystem.

Conclusion

The FTC should reconsider the inclusion of standalone persistent identifiers in the definition of “personal information.” The NPRM continues to enshrine the primary mistake of the 2013 amendments. This change was inconsistent with the purposes and text of the COPPA statute. It already has reduced, and will continue to reduce, the availability of children’s online content.

[1] ICLE has received financial support from numerous companies, organizations, and individuals, including firms with interests both supportive of and in opposition to the ideas expressed in this and other ICLE-supported works. Unless otherwise noted, all ICLE support is in the form of unrestricted, general support. The ideas expressed here are the authors’ own and do not necessarily reflect the views of ICLE’s advisors, affiliates, or supporters.

[2] Portions of these comments are adapted from ICLE’s 2019 COPPA Rule Review Comments, available at https://laweconcenter.org/wp-content/uploads/2019/12/COPPA-Comments-2019.pdf; Ben Sperry, A Law & Economics Approach to Social-Media Regulation, CPI TechREG Chronicle (Feb. 29, 2022), https://laweconcenter.org/resources/a-law-economics-approach-to-social-media-regulation; Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes (ICLE Issue Brief, Nov. 9, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[3] 144 Cong. Rec. 11657 (1998) (Statement of Sen. Richard Bryan), available at https://www.congress.gov/crec/1998/10/07/CREC-1998-10-07.pdf#page=303.

[4] 15 U.S.C. § 6502(b)(1)(A).

[5] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[6] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[7] For instance, many nightclubs hold “ladies’ night” events in which female patrons receive free admission or discounted drinks in order to attract more men, who pay full fare for both.

[8] See, e.g., Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 16, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.

[9] An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action.

[10] See Ronald H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[11] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[12] See Coase, supra note 10, at 8-10.

[13] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[14] See Part III below.

[15] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[16] See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[17] See Statement of Commissioner Alvaro M. Bedoya On the Issuance of the Notice of Proposed Rulemaking to Update the Children’s Online Privacy Protection Rule (COPPA Rule), at 3-4 (Dec. 20, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/BedoyaStatementonCOPPARuleNPRMFINAL12.20.23.pdf (listing examples of these types of enforcement actions).

[18] 15 U.S.C. § 6502(b)(1)(A)(ii).

[19] 15 U.S.C. § 6501(8).

[20] 15 U.S.C. § 6501(8)(A)-(E).

[21] 15 U.S.C. § 6501(8)(F).

[22] 15 U.S.C. § 6501(8)(G).

[23] 16 CFR § 312.2 (Personal information)(7).

[24] See Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U. S. 837, 843 n.9 (1984) (“If a court, employing traditional tools of statutory construction, ascertains that Congress had an intention on the precise question at issue, that intention is the law and must be given effect.”).

[25] What is EJUSDEM GENERIS?, The Law Dictionary: Featuring Black’s Law Dictionary Free Online Legal Dictionary 2nd Ed. (last accessed Dec. 9, 2019), https://thelawdictionary.org/ejusdem-generis.

[26] NPRM at 2043.

[27] Id.

[28] See, e.g., Children and Teens’ Online Privacy Protection Act, S. 1418, § 2(a)(3), 118th Cong. (2024).

[29] See FDA v. Brown & Williamson, 529 U.S. 120, 148-50 (2000).

[30] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[31] Id. at 6-7 (emphasis added).

[32] Id. at 1.

[33] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.

[34] Id. at 6 (emphasis added).

[35] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.

[36] NPRM at 2043.

[37] Id. at 2034, n. 121.

[38] See NetChoice Complaint, NetChoice LLC v. Griffin, NO. 5:23-CV-05105, 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), available at https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.

[39] Id. at para. 13.

[40] See id. at para. 14.

[41] See id.

[42] See id. at para. 15.

[43] See id. at para. 16.

[44] See id.

[45] See id. at paras. 17, 19-21.

[46] Sperry, supra note 8.

[47] See NetChoice Complaint, supra note 38, at para. 18.

[48] See, e.g., Mary James & Catherine McNally, The Best Ad Blockers 2024, all about cookies (last updated Feb. 29, 2024), https://allaboutcookies.org/best-ad-blockers.

[49] See, e.g., Parental Controls for Apple, Android, and Other Devices, internet matters (last accessed Mar. 7, 2024), https://www.internetmatters.org/parental-controls/smartphones-and-other-devices.

[50] See, e.g., Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794-95 (2011); NetChoice, LLC v. Griffin, 2023 WL 5660155, at *17 (W.D. Ark. Aug. 31, 2023) (finding Arkansas’s Act 689 “obviously burdens minors’ First Amendment rights” by “bar[ring] minors from opening accounts on a variety of social media platforms.”).

[51] See NPRM at 2047.

[52] See id. at 2046-47.

[53] Id. at 2047 (“Additionally, the Commission believes that other factors can help elucidate the intended or actual audience of a site or service, including user or third-party reviews and the age of users on similar websites or services.”).

ICLE Comments to European Commission on Competition in Virtual Worlds

Executive Summary We welcome the opportunity to comment on the European Commission’s call for contributions on competition in “Virtual Worlds”.[1] The International Center for Law . . .

Executive Summary

We welcome the opportunity to comment on the European Commission’s call for contributions on competition in “Virtual Worlds”.[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

The metaverse is an exciting and rapidly evolving set of virtual worlds. As with any new technology, concerns about the potential risks and negative consequences that the metaverse may bring have moved policymakers to explore how best to regulate this new space.

From the outset, it is important to recognize that the fact that the metaverse is new does not mean that competition in this space is unregulated or somehow ineffective. Existing regulations may not explicitly or exclusively target metaverse ecosystems, but a vast regulatory apparatus already covers most aspects of business in virtual worlds. This includes European competition law, the Digital Markets Act (“DMA”), the General Data Protection Regulation (“GDPR”), the Digital Services Act (“DSA”), and many more. Before it intervenes in this space, the commission should carefully consider whether there are any metaverse-specific problems not already addressed by these legal provisions.

This sense that competition intervention would be premature is reinforced by three important factors.

The first is that competition appears particularly intense in this space (Section I). There are currently multiple firms vying to offer compelling virtual worlds. At the time of writing, however, none appears close to dominating the market. In turn, this intense competition will encourage platforms to design services that meet consumers’ demands, notably in terms of safety and privacy. Nor does the market appear likely to fall into the hands of one of the big tech firms that command a sizeable share of more traditional internet services. Meta notoriously has poured more than $3.99 billion into its metaverse offerings during the first quarter of 2023, in addition to $13.72 billion the previous calendar year.[2] Despite these vast investments and a strategic focus on metaverse services, the company has, thus far, struggled to achieve meaningful traction in the space.[3]

Second, the commission’s primary concern appears to be that metaverses will become insufficiently “open and interoperable”.[4] But to the extent that these ecosystems do, indeed, become closed and proprietary, there is no reason to believe this to be a problem. Closed and proprietary ecosystems have several features that may be attractive to consumers and developers (Section II). These include improved product safety, performance, and ease of development. This is certainly not to say that closed ecosystems are always better than more open ones, but rather that it would be wrong to assume that one model or the other is optimal. Instead, the proper balance depends on tradeoffs that markets are better placed to decide.

Finally, timing is of the essence (Section III). Intervening so early in a fledgling industry’s life cycle is like shooting a moving target from a mile away. New rules or competition interventions might end up being irrelevant. Worse, by signaling that metaverses will be subject to heightened regulatory scrutiny for the foreseeable future, the commission may chill investment from the very firms it purports to support. In short, the commission should resist the urge to intervene so long as the industry is not fully mature.

I. Competing for Consumer Trust

The Commission is right to assume, in its call for contributions, that the extent to which metaverse services compete with each other (and continue to do so in the future) will largely determine whether they fulfill consumers’ expectations and meet the safety and trustworthiness requirements to which the commission aspires. As even the left-leaning Lawrence Lessig put it:

Markets regulate behavior in cyberspace too. Pricing structures often constrain access, and if they do not, then busy signals do. (America Online (AOL) learned this lesson when it shifted from an hourly to a flat-rate pricing plan.) Some sites on the web charge for access, as on-line services like AOL have for some time. Advertisers reward popular sites; online services drop unpopular forums. These behaviors are all a function of market constraints and market opportunity, and they all reflect the regulatory role of the market.[5]

Indeed, in a previous call for contributions, the Commission implicitly recognized the important role that competition plays, although it frames the subject primarily in terms of the problems that would arise if competition ceased to operate:

There is a risk of having a small number of big players becoming future gatekeepers of virtual worlds, creating market entry barriers and shutting out EU start-ups and SMEs from this emerging market. Such a closed ecosystem with the prevalence of proprietary systems can negatively affect the protection of personal information and data, the cybersecurity and the freedom and openness of virtual worlds at the same time.[6]

It is thus necessary to ask whether there is robust competition in the market for metaverse services. The short answer is a resounding yes.

A. Competition Without Tipping

While there is no precise definition of what constitutes a metaverse—much less a precise definition of the relevant market—available data suggests the space is highly competitive. This is evident in the fact that even a major global firm like Meta—having invested billions of dollars in its metaverse branch (and having rebranded the company accordingly)—has struggled to gain traction.[7]

Other major players in the space include the likes of Roblox, Fortnite, and Minecraft, which all have somewhere between 70 and 200 million active users.[8] This likely explains why Meta’s much-anticipated virtual world struggled to gain meaningful traction with consumers, stalling at around 300,000 active users.[9] Alongside these traditional players, there are also several decentralized platforms that are underpinned by blockchain technology. While these platforms have attracted massive investments, they are largely peripheral in terms of active users, with numbers often only in the low thousands.[10]

There are several inferences that can be drawn from these limited datasets. For one, it is clear that the metaverse industry is not yet fully mature. There are still multiple paradigms competing for consumer attention: game-based platforms versus social-network platforms; traditional platforms versus blockchain platforms, etc. In the terminology developed by David Teece, the metaverse industry has not yet reached a “paradigmatic” stage. It is fair to assume there is still significant scope for the entry of differentiated firms.[11]

It is also worth noting that metaverse competition does not appear to exhibit the same sort of network effects and tipping that is sometimes associated with more traditional social networks.[12] Despite competing for nearly a decade, no single metaverse project appears to be running away with the market.[13] This lack of tipping might be because these projects are highly differentiated.[14] It may also be due to the ease of multi-homing among them.[15]

More broadly, it is far from clear that competition will lead to a single metaverse for all uses. Different types of metaverse services may benefit from different user interfaces, graphics, and physics engines. This cuts in favor of multiple metaverses coexisting, rather than all services coordinating within a single ecosystem. Competition therefore appears likely to lead to the emergence of multiple differentiated metaverses, rather than a single winner.

Ultimately, competition in the metaverse industry is strong, and there is little sense that these markets are about to tip toward a single firm in the near future.

B. Competing for Consumer Trust

As alluded to in the previous subsection, the world’s largest and most successful metaverse entrants to date are traditional videogaming platforms that have various marketplaces and currencies attached.[16] In other words, decentralized virtual worlds built upon blockchain technology remain marginal.

This has important policy implications. The primary legal issues raised by metaverses are the same as those encountered on other digital marketplaces. This includes issues like minor fraud, scams, and children buying content without their parents’ authorization.[17] To the extent these harms are not adequately deterred by existing laws, metaverse platforms themselves have important incentives to police them. In turn, these incentives may be compounded by strong competition among platforms.

Metaverses are generally multi-sided platforms that bring together distinct groups of users, including consumers and content creators. In order to maximize the value of their ecosystems, platforms have an incentive to balance the interests of these distinct groups.[18] In practice, this will often mean offering consumers various forms of protection against fraud and scams and actively policing platforms’ marketplaces. As David Evans puts it:

But as with any community, there are numerous opportunities for people and businesses to create negative externalities, or engage in other bad behavior, that can reduce economic efficiency and, in the extreme, lead to the tragedy of the commons. Multi-sided platforms, acting selfishly to maximize their own profits, often develop governance mechanisms to reduce harmful behavior. They also develop rules to manage many of the same kinds of problems that beset communities subject to public laws and regulations. They enforce these rules through the exercise of property rights and, most importantly, through the “Bouncer’s Right” to exclude agents from some quantum of the platform, including prohibiting some agents from the platform entirely…[19]

While there is little economic research to suggest that competition directly increases hosts’ incentive to police their platforms, it stands to reason that doing so effectively can help platforms expand the appeal of their ecosystems. This is particularly important for metaverse services, whose userbases remain just a fraction of the size they could ultimately reach. While 100 or 200 million users already constitutes a vast ecosystem, it pales in comparison to the billions of users that “traditional” online platforms sometimes attract.

The bottom line is that the market for metaverses is growing. This likely compounds platforms’ incentives to weed out undesirable behavior, thereby complementing government efforts to achieve the same goal.

II. Opening Platforms or Opening Pandora’s Box?

In its call for contributions, the commission seems concerned that metaverse competition may lead to closed ecosystems that are less beneficial to consumers than more open ones. But if this is indeed the commission’s fear, it is largely unfounded.

There are many benefits to closed ecosystems. Choosing the optimal degree of openness entails tradeoffs. At the very least, this suggests that policymakers should be careful not to assume that opening platforms up will systematically provide net benefits to consumers.

A. Antitrust Enforcement and Regulatory Initiatives

To understand why open (and weakly propertized) platforms are not always better for consumers, it is worth looking at past competition enforcement in the online space. Recent interventions by competition authorities have generally attempted (or are attempting) to move platforms toward more openness and less propertization. For their part, these platforms are already tremendously open (as the “platform” terminology implies) and attempt to achieve a delicate balance between centralization and decentralization.

Figure I: Directional Movement of Antitrust Intervention

The Microsoft cases and the Apple investigation both sought or seek to bring more openness and less propertization to those respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and to open its platform to rival media players and web browsers (more openness).[20] The same applies to Apple. Plaintiffs in private antitrust litigation brought in the United States[21] and government enforcement actions in Europe[22] are seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as to ensure that it cannot exclude rival mobile-payments solutions from its platform (more openness).

The various cases that were brought by EU and U.S. authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property.[23] The European Union’s Amazon investigation centers on the ways in which the company uses data from third-party sellers (and, ultimately, the distribution of revenue between those sellers and Amazon).[24] In both cases, authorities are ultimately trying to limit the extent to which firms can propertize their assets.

Finally, both of the EU’s Google cases sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals.[25] The separate Android decision sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing litigation brought by state attorneys general in the United States.[26]

Much of the same can be said of the numerous regulatory initiatives pertaining to digital markets. Indeed, draft regulations being contemplated around the globe mimic the features of the antitrust/competition interventions discussed above. For instance, it is widely accepted that Europe’s DMA effectively transposes and streamlines the enforcement of the theories of harm described above.[27] Similarly, several scholars have argued that the proposed American Innovation and Choice Online Act (“AICOA”) in the United States largely mimics European competition policy.[28] The legislation would ultimately require firms to open up their platforms, most notably by forcing them to treat rival services as they would their own and to make their services more interoperable with those rivals.[29]

What is striking about these decisions and investigations is the extent to which authorities are pushing back against the very features that distinguish the platforms they are investigating. Closed (or relatively closed) platforms are forced to open up, and firms with highly propertized assets are made to share them (or, at the very least, monetize them less aggressively).

B. The Empty Quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be vanishingly few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems in both the mobile and desktop segments. Most have ended in failure. Ubuntu and other flavors of the Linux operating system remain fringe products. There have been attempts to create open-source search engines, but they have not met with success.[30] The picture is similar in the online retail space. Amazon appears to have beaten eBay, despite the latter being more open and less propertized. Indeed, Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the ways in which they may sell their goods.[31]

This theme is repeated in the standardization space. There have been innumerable attempts to impose open, royalty-free standards. At least in the mobile-internet industry, few (if any) of these have taken off. Instead, proprietary standards such as 5G and WiFi have been far more successful. That pattern is repeated in other highly standardized industries, like digital-video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.[32]

Figure II: Open and Shared Platforms

This is not to say that there haven’t been any successful examples of open, royalty-free standards. Internet protocols, blockchain, and Wikipedia all come to mind. Nor does it mean that we will not see more decentralized goods in the future. But by and large, firms and consumers have not yet taken to the idea of fully open and shared platforms. Or, at least, those platforms have not yet achieved widespread success in the marketplace (potentially due to supply-side considerations, such as the difficulty of managing open platforms or the potentially lower returns to innovation in weakly propertized ones).[33] And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase in the blockchain space, or Android’s use of Linux).

C. Potential Explanations

The preceding section posited a recurring reality: the digital platforms that competition authorities wish to bring into existence are fundamentally different from those that emerge organically. But why have authorities’ ideal platforms, so far, failed to achieve truly meaningful success?

Three potential explanations come to mind. First, “closed” and “propertized” platforms might systematically—and perhaps anticompetitively—thwart their “open” and “shared” rivals. Second, shared platforms might fail to persist (or grow pervasive) because they are much harder to monetize, and there is thus less incentive to invest in them. This is essentially a supply-side explanation. Finally, consumers might opt for relatively closed systems precisely because they prefer these platforms to marginally more open ones—i.e., a demand-side explanation.

In evaluating the first conjecture, the key question is whether successful “closed” and “propertized” platforms overcame their rivals before or after they achieved some measure of market dominance. If success preceded dominance, then anticompetitive foreclosure alone cannot explain the proliferation of the “closed” and “propertized” model.[34]

Many of today’s dominant platforms, however, overcame open/shared rivals well before they achieved their current size. It is thus difficult to make the case that the early success of their business models was due to anticompetitive behavior. This is not to say these business models cannot raise antitrust issues, but rather that anticompetitive behavior is not a good explanation for their emergence.

Both the second and the third conjectures essentially ask whether “closed” and “propertized” platforms might be better adapted to their environment than their “open” and “shared” rivals.

In that respect, it is not unreasonable to surmise that highly propertized platforms would generally be easier to monetize than shared ones. For example, monetizing open-source platforms often requires relying on complementarities, which tend to be vulnerable to outside competition and free-riding.[35] There is thus a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits a platform’s ability to propertize its assets may harm innovation.

Similarly, authorities should reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design. The European Commission, for example, has a long track record of seeking to open up digital platforms, notably by limiting platform owners’ ability to preinstall or favor their own web browsers (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business model that the commission reprimanded, rather than the “pro-consumer” model it sought to impose on the industry. For example, Apple tied the Safari browser to its iPhones; Google went to great lengths to ensure that Chrome was preloaded on devices; and Samsung phones ship with Samsung Internet as the default browser.[36] Yet none of this has noticeably steered consumers away from those platforms.

Along similar lines, a sizable share of consumers opt for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s MacOS). In other words, it is hard to claim that opening platforms is inherently good for consumers when those same consumers routinely opt for platforms with the very features that policymakers are trying to eliminate.

Finally, it is worth noting that the remedies imposed by competition authorities have been anything but successful. Windows XP N (the version of Windows that came without Windows Media Player) was an unmitigated flop, selling a paltry 1,787 copies.[37] Likewise, the internet-browser “ballot box” imposed by the commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the commission’s decision.[38]

One potential inference is that consumers do not value competition interventions that make dominant ecosystems marginally more open and less propertized. There are also many reasons why consumers might prefer “closed” systems (at least, relative to the model favored by many policymakers), even when they must pay a premium for them.

Take the example of app stores. Maintaining some control over the apps that can access the store enables platforms to easily weed out bad actors. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. Indeed, it may be that a measure of control facilitates the very innovations that consumers demand. Therefore, “authorities and courts should not underestimate the indispensable role control plays in achieving coordination and coherence in the context of systemic efficiencies. Without it, the attempted novelties and strategies might collapse under their own complexity.”[39]

Relatively centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers.[40] This is especially true when consumers tend to attribute dips in performance to the overall platform, rather than to a particular app.[41] At the same time, such platforms can harness positive externalities to improve the quality of the overall platform.

And it is surely the case that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Absent false information at the time of the initial platform decision, this decision will effectively incorporate expectations about subsequent constraints.[42]

Furthermore, forcing users to make too many “within-platform” choices may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different.[43] In short, contrary to what antitrust authorities appear to believe, closed platforms might give most users exactly what they desire.

All of this suggests that consumers and firms often gravitate spontaneously toward both closed and highly propertized platforms, the opposite of what the commission and other competition authorities tend to favor. The reasons for this trend remain poorly understood and largely ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. Instead, what some regard as “market failures” may in fact be features that explain the rapid emergence of the digital economy.

When considering potential policy reforms targeting the metaverse, policymakers would be wrong to assume that openness (notably, in the form of interoperability) and weak propertization are always objectively superior. Instead, these platform designs entail important tradeoffs. Closed metaverse ecosystems may offer greater consumer safety and better performance, while interoperable systems may reduce the frictions consumers face when moving from one service to another. There is little reason to believe policymakers are in a better position to weigh these tradeoffs than consumers, who vote with their virtual feet.

III. Conclusion: Competition Intervention Would Be Premature

A final important argument against intervening today is that the metaverse industry is nowhere near mature. Tomorrow’s competition-related challenges and market failures might not be the same as today’s. This makes it exceedingly difficult for policymakers to design appropriate remedies and increases the risk that intervention might harm innovation.

As of 2023, the entire metaverse industry (both hardware and software) is estimated to be worth somewhere in the vicinity of $80 billion, and projections suggest this could grow by a factor of 10 by 2030.[44] Growth projections of this sort are notoriously unreliable. But in this case, they do suggest there is some consensus that the industry is not fully fledged.

Along similar lines, it remains unclear what types of metaverse services will gain the most traction with consumers, what sorts of hardware consumers will use to access these services, and what technologies will underpin the most successful metaverse platforms. In fact, it is still an open question whether the metaverse industry will foster any services that achieve widespread consumer adoption in the foreseeable future.[45] In other words, it is not exactly clear what metaverse products and services the Commission should focus on in the first place.

Given these uncertainties, competition intervention in the metaverse appears premature. Intervening so early in the industry’s life cycle is like aiming at a moving target. Ensuing remedies might end up being irrelevant before they have any influence on the products that firms develop. More worryingly, acting now signals that the metaverse industry will be subject to heightened regulatory scrutiny for the foreseeable future. In turn, this may deter large platforms from investing in the European market, and it may divert venture-capital investment away from the European continent.

Competition intervention in burgeoning industries is no free lunch. The best evidence concerning these potential costs comes from the GDPR. While privacy regulation is obviously not the same as competition law, the evidence concerning the GDPR suggests that heavy-handed intervention may, at least in some instances, slow down innovation and reduce competition.

The most-cited empirical evidence concerning the effects of the GDPR comes from a paper by Garrett Johnson and co-authors, who link the GDPR to widespread increases in market concentration, particularly in the short term:

We show that websites’ vendor use falls after the European Union’s (EU’s) General Data Protection Regulation (GDPR), but that market concentration also increases among technology vendors that provide support services to websites…. The week after the GDPR’s enforcement, website use of web technology vendors falls by 15% for EU residents. Websites are relatively more likely to retain top vendors, which increases the concentration of the vendor market by 17%. Increased concentration predominantly arises among vendors that use personal data, such as cookies, and from the increased relative shares of Facebook and Google-owned vendors, but not from website consent requests. Although the aggregate changes in vendor use and vendor concentration dissipate by the end of 2018, we find that the GDPR impact persists in the advertising vendor category most scrutinized by regulators.[46]

Along similar lines, an NBER working paper by Jian Jia and co-authors finds that enactment of the GDPR markedly reduced venture-capital investments in Europe:

Our findings indicate a negative differential effect on EU ventures after the rollout of GDPR relative to their US counterparts. These negative effects manifest in the overall number of financing rounds, the overall dollar amount raised across rounds, and in the dollar amount raised per individual round. Specifically, our findings suggest a $3.38 million decrease in the aggregate dollars raised by EU ventures per state per crude industry category per week, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised in an average deal following the rollout of GDPR.[47]

In another paper, Samuel Goldberg and co-authors find that the GDPR led to a roughly 12% reduction in website pageviews and e-commerce revenue in Europe.[48] Finally, Rebecca Janssen and her co-authors show that the GDPR decreased the number of apps offered on Google’s Play Store between 2016 and 2019:

Using data on 4.1 million apps at the Google Play Store from 2016 to 2019, we document that GDPR induced the exit of about a third of available apps; and in the quarters following implementation, entry of new apps fell by half.[49]

Of course, the body of evidence concerning the GDPR’s effects is not entirely clear-cut. For example, Rajkumar Venkatesan and co-authors find that the GDPR had mixed effects on the returns of different types of firms.[50] Other papers likewise report mixed effects.[51]

Ultimately, the empirical literature concerning the effects of the GDPR shows that regulation—in this case, privacy protection—is no free lunch. Of course, this does not mean that competition intervention targeting the metaverse would necessarily have these same effects. But in the absence of a clear market failure to solve, it is unclear why policymakers should run such a risk in the first place.

In the end, competition intervention in the metaverse is unlikely to be costless. The metaverse is still in its infancy, regulation could deter essential innovation, and the commission has thus far failed to identify any serious market failures that warrant public intervention. The result is that the commission’s call for contributions appears premature or, in other words, that the commission is putting the meta-cart before the meta-horse.

 

[1] Competition in Virtual Worlds and Generative AI – Calls for contributions, European Commission (Jan. 9, 2024) https://competition-policy.ec.europa.eu/document/download/e727c66a-af77-4014-962a-7c9a36800e2f_en?filename=20240109_call-for-contributions_virtual-worlds_and_generative-AI.pdf (hereafter, “Call for Contributions”).

[2] Jonathan Vanian, Meta’s Reality Labs Records $3.99 Billion Quarterly Loss as Zuckerberg Pumps More Cash into Metaverse, CNBC (Apr. 26, 2023), https://www.cnbc.com/2023/04/26/metas-reality-labs-unit-records-3point99-billion-first-quarter-loss-.html.

[3] Alan Truly, Horizon Worlds Leak: Only 1 in 10 Users Return & Web Launch Is Coming, Mixed News (Mar. 3, 2023), https://mixed-news.com/en/horizon-worlds-leak-only-1-in-10-users-return-web-launch-coming; Kevin Hurler, Hey Fellow Kids: Meta Is Revamping Horizon Worlds to Attract More Teen Users, Gizmodo (Feb. 7, 2023), https://gizmodo.com/meta-metaverse-facebook-horizon-worlds-vr-1850082068; Emma Roth, Meta’s Horizon Worlds VR Platform Is Reportedly Struggling to Keep Users, The Verge (Oct. 15, 2022),
https://www.theverge.com/2022/10/15/23405811/meta-horizon-worlds-losing-users-report; Paul Tassi, Meta’s ‘Horizon Worlds’ Has Somehow Lost 100,000 Players in Eight Months, Forbes, (Oct. 17, 2022), https://www.forbes.com/sites/paultassi/2022/10/17/metas-horizon-worlds-has-somehow-lost-100000-players-in-eight-months/?sh=57242b862a1b.

[4] Call for Contributions, supra note 1. (“6) Do you expect the technology incorporated into Virtual World platforms, enabling technologies of Virtual Worlds and services based on Virtual Worlds to be based mostly on open standards and/or protocols agreed through standard-setting organisations, industry associations or groups of companies, or rather the use of proprietary technology?”).

[5] Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 508 (1999).

[6] Virtual Worlds (Metaverses) – A Vision for Openness, Safety and Respect, European Commission, https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13757-Virtual-worlds-metaverses-a-vision-for-openness-safety-and-respect/feedback_en?p_id=31962299H.

[7] Catherine Thorbecke, What Metaverse? Meta Says Its Single Largest Investment Is Now in ‘Advancing AI’, CNN Business (Mar. 15, 2023), https://www.cnn.com/2023/03/15/tech/meta-ai-investment-priority/index.html; Ben Marlow, Mark Zuckerberg’s Metaverse Is Shattering into a Million Pieces, The Telegraph (Apr. 23, 2023), https://www.telegraph.co.uk/business/2023/04/21/mark-zuckerbergs-metaverse-shattering-million-pieces; Will Gendron, Meta Has Reportedly Stopped Pitching Advertisers on the Metaverse, BusinessInsider (Apr. 18, 2023), https://www.businessinsider.com/meta-zuckerberg-stopped-pitching-advertisers-metaverse-focus-reels-ai-report-2023-4.

[8] Mansoor Iqbal, Fortnite Usage and Revenue Statistics, Business of Apps (Jan. 9, 2023), https://www.businessofapps.com/data/fortnite-statistics; Matija Ferjan, 76 Little-Known Metaverse Statistics & Facts (2023 Data), Headphones Addict (Feb. 13, 2023), https://headphonesaddict.com/metaverse-statistics.

[9] James Batchelor, Meta’s Flagship Metaverse Horizon Worlds Struggling to Attract and Retain Users, Games Industry (Oct. 17, 2022), https://www.gamesindustry.biz/metas-flagship-metaverse-horizon-worlds-struggling-to-attract-and-retain-users; Ferjan, id.

[10] Richard Lawler, Decentraland’s Billion-Dollar ‘Metaverse’ Reportedly Had 38 Active Users in One Day, The Verge (Oct. 13, 2022), https://www.theverge.com/2022/10/13/23402418/decentraland-metaverse-empty-38-users-dappradar-wallet-data; The Sandbox, DappRadar, https://dappradar.com/multichain/games/the-sandbox (last visited May 3, 2023); Decentraland, DappRadar, https://dappradar.com/multichain/social/decentraland (last visited May 3, 2023).

[11] David J. Teece, Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy, 15 Research Policy 285-305 (1986), https://www.sciencedirect.com/science/article/abs/pii/0048733386900272.

[12] Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1279 (2021).

[13] Roblox, Wikipedia, https://en.wikipedia.org/wiki/Roblox (last visited May 3, 2023); Minecraft, Wikipedia, https://en.wikipedia.org/wiki/Minecraft (last visited May 3, 2023); Fortnite, Wikipedia, https://en.wikipedia.org/wiki/Fortnite (last visited May 3, 2023); see Fiza Chowdhury, Minecraft vs Roblox vs Fortnite: Which Is Better?, Metagreats (Feb. 20, 2023), https://www.metagreats.com/minecraft-vs-roblox-vs-fortnite.

[14] Marc Rysman, The Economics of Two-Sided Markets, 23 J. Econ. Perspectives 125, 134 (2009) (“First, if standards can differentiate from each other, they may be able to successfully coexist (Chou and Shy, 1990; Church and Gandal, 1992). Arguably, Apple and Microsoft operating systems have both survived by specializing in different markets: Microsoft in business and Apple in graphics and education. Magazines are an obvious example of platforms that differentiate in many dimensions and hence coexist.”).

[15] Id. at 134 (“Second, tipping is less likely if agents can easily use multiple standards. Corts and Lederman (forthcoming) show that the fixed cost of producing a video game for one more standard have reduced over time relative to the overall fixed costs of producing a game, which has led to increased distribution of games across multiple game systems (for example, PlayStation, Nintendo, and Xbox) and a less-concentrated game system market.”).

[16] What Are Fortnite, Roblox, Minecraft and Among Us? A Parent’s Guide to the Most Popular Online Games Kids Are Playing, FTC Business (Oct. 5, 2021), https://www.ftc.net/blog/what-are-fortnite-roblox-minecraft-and-among-us-a-parents-guide-to-the-most-popular-online-games-kids-are-playing; Jay Peters, Epic Is Merging Its Digital Asset Stores into One Huge Marketplace, The Verge (Mar. 22, 2023), https://www.theverge.com/2023/3/22/23645601/epic-games-fab-asset-marketplace-state-of-unreal-2023-gdc.

[17] Luke Winkie, Inside Roblox’s Criminal Underworld, Where Kids Are Scamming Kids, IGN (Jan. 2, 2023), https://www.ign.com/articles/inside-robloxs-criminal-underworld-where-kids-are-scamming-kids; Fake Minecraft Updates Pose Threat to Users, Tribune (Sept. 11, 2022), https://tribune.com.pk/story/2376087/fake-minecraft-updates-pose-threat-to-users; Ana Diaz, Roblox and the Wild West of Teenage Scammers, Polygon (Aug. 24, 2019) https://www.polygon.com/2019/8/24/20812218/roblox-teenage-developers-controversy-scammers-prison-roleplay; Rebecca Alter, Fortnite Tries Not to Scam Children and Face $520 Million in FTC Fines Challenge, Vulture (Dec. 19, 2022), https://www.vulture.com/2022/12/fortnite-epic-games-ftc-fines-privacy.html; Leonid Grustniy, Swindle Royale: Fortnite Scammers Get Busy, Kaspersky Daily (Dec. 3, 2020), https://www.kaspersky.com/blog/top-four-fortnite-scams/37896.

[18] See, generally, David Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms (Harvard Business Review Press, 2016).

[19] David S. Evans, Governing Bad Behaviour by Users of Multi-Sided Platforms, 27 Berkeley Tech. L.J. 1201 (2012).

[20] See Case COMP/C-3/37.792, Microsoft, OJ L 32 (May 24, 2004). See also, Case COMP/39.530, Microsoft (Tying), OJ C 120 (Apr. 26, 2013).

[21] See Complaint, Epic Games, Inc. v. Apple Inc., 493 F. Supp. 3d 817 (N.D. Cal. 2020) (4:20-cv-05640-YGR).

[22] See European Commission Press Release IP/20/1073, Antitrust: Commission Opens Investigations into Apple’s App Store Rules (Jun. 16, 2020); European Commission Press Release IP/20/1075, Antitrust: Commission Opens Investigation into Apple Practices Regarding Apple Pay (Jun. 16, 2020).

[23] See European Commission Press Release IP/18/421, Antitrust: Commission Fines Qualcomm €997 Million for Abuse of Dominant Market Position (Jan. 24, 2018); Federal Trade Commission v. Qualcomm Inc., 969 F.3d 974 (9th Cir. 2020).

[24] See European Commission Press Release IP/19/4291, Antitrust: Commission Opens Investigation into Possible Anti-Competitive Conduct of Amazon (Jul. 17, 2019).

[25] See Case AT.39740, Google Search (Shopping), 2017 E.R.C. I-379. See also, Case AT.40099 (Google Android), 2018 E.R.C.

[26] See Complaint, United States v. Google, LLC, (2020), https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws; see also, Complaint, Colorado et al. v. Google, LLC, (2020), available at https://coag.gov/app/uploads/2020/12/Colorado-et-al.-v.-Google-PUBLIC-REDACTED-Complaint.pdf.

[27] See, e.g., Giorgio Monti, The Digital Markets Act: Institutional Design and Suggestions for Improvement, Tilburg L. & Econ. Ctr., Discussion Paper No. 2021-04 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3797730 (“In sum, the DMA is more than an enhanced and simplified application of Article 102 TFEU: while the obligations may be criticised as being based on existing competition concerns, they are forward-looking in trying to create a regulatory environment where gatekeeper power is contained and perhaps even reduced.”) (emphasis added).

[28] See, e.g., Aurelien Portuese, “Please, Help Yourself”: Toward a Taxonomy of Self-Preferencing, Information Technology & Innovation Foundation (Oct. 25, 2021), available at https://itif.org/sites/default/files/2021-self-preferencing-taxonomy.pdf. (“The latest example of such weaponization of self-preferencing by antitrust populists is provided by Sens. Amy Klobuchar (D-MN) and Chuck Grassley (R-IA). They introduced legislation in October 2021 aimed at prohibiting the practice.2 However, the legislation would ban self-preferencing only for a handful of designated companies—the so-called “covered platforms,” not the thousands of brick-and-mortar sellers that daily self-preference for the benefit of consumers. Mimicking the European Commission’s Digital Markets Act prohibiting self-preferencing, Senate and the House bills would degrade consumers’ experience and undermine competition, since self-preferencing often benefits consumers and constitutes an integral part, rather than an abnormality, of the process of competition.”).

[29] Efforts to saddle platforms with “non-discrimination” constraints are tantamount to mandating openness. See Geoffrey A. Manne, Against the Vertical Discrimination Presumption, Foreword, Concurrences No. 2-2020 (2020) at 2 (“The notion that platforms should be forced to allow complementors to compete on their own terms, free of constraints or competition from platforms is a species of the idea that platforms are most socially valuable when they are most ‘open.’ But mandating openness is not without costs, most importantly in terms of the effective operation of the platform and its own incentives for innovation.”).

[30] See, e.g., Klint Finley, Your Own Private Google: The Quest for an Open Source Search Engine, Wired (Jul. 12, 2021), https://www.wired.com/2012/12/solar-elasticsearch-google.

[31] See Brian Connolly, Selling on Amazon vs. eBay in 2021: Which Is Better?, JungleScout (Jan. 12, 2021), https://www.junglescout.com/blog/amazon-vs-ebay; Crucial Differences Between Amazon and eBay, SaleHOO, https://www.salehoo.com/educate/selling-on-amazon/crucial-differences-between-amazon-and-ebay (last visited Feb. 8, 2021).

[32] See, e.g., Dolby Vision Is Winning the War Against HDR10 +, It Requires a Single Standard, Tech Smart, https://voonze.com/dolby-vision-is-winning-the-war-against-hdr10-it-requires-a-single-standard (last visited June 6, 2022).

[33] On the importance of managers, see, e.g., Nicolai J Foss & Peter G Klein, Why Managers Still Matter, 56 MIT Sloan Mgmt. Rev., 73 (2014) (“In today’s knowledge-based economy, managerial authority is supposedly in decline. But there is still a strong need for someone to define and implement the organizational rules of the game.”).

[34] It is generally agreed upon that anticompetitive foreclosure is possible only when a firm enjoys some degree of market power. Frank H. Easterbrook, Limits of Antitrust, 63 Tex. L. Rev. 1, 20 (1984) (“Firms that lack power cannot injure competition no matter how hard they try. They may injure a few consumers, or a few rivals, or themselves (see (2) below) by selecting ‘anticompetitive’ tactics. When the firms lack market power, though, they cannot persist in deleterious practices. Rival firms will offer the consumers better deals. Rivals’ better offers will stamp out bad practices faster than the judicial process can. For these and other reasons many lower courts have held that proof of market power is an indispensable first step in any case under the Rule of Reason. The Supreme Court has established a market power hurdle in tying cases, despite the nominally per se character of the tying offense, on the same ground offered here: if the defendant lacks market power, other firms can offer the customer a better deal, and there is no need for judicial intervention.”).

[35] See, e.g., Josh Lerner & Jean Tirole, Some Simple Economics of Open Source, 50 J. Indus. Econ. 197 (2002).

[36] See Matthew Miller, Thanks, Samsung: Android’s Best Mobile Browser Now Available to All, ZDNet (Aug. 11, 2017), https://www.zdnet.com/article/thanks-samsung-androids-best-mobile-browser-now-available-to-all.

[37] FACT SHEET: Windows XP N Sales, RegMedia (Jun. 12, 2009), available at https://regmedia.co.uk/2009/06/12/microsoft_windows_xp_n_fact_sheet.pdf.

[38] See Case COMP/39.530, Microsoft (Tying), OJ C 120 (Apr. 26, 2013).

[39] Konstantinos Stylianou, Systemic Efficiencies in Competition Law: Evidence from the ICT Industry, 12 J. Competition L. & Econ. 557 (2016).

[40] See, e.g., Steven Sinofsky, The App Store Debate: A Story of Ecosystems, Medium (Jun. 21, 2020), https://medium.learningbyshipping.com/the-app-store-debate-a-story-of-ecosystems-938424eeef74.

[41] Id.

[42] See, e.g., Benjamin Klein, Market Power in Aftermarkets, 17 Managerial & Decision Econ. 143 (1996).

[43] See, e.g., Simon Hill, What Is Android Fragmentation, and Can Google Ever Fix It?, DigitalTrends (Oct. 31, 2018), https://www.digitaltrends.com/mobile/what-is-android-fragmentation-and-can-google-ever-fix-it.

[44] Metaverse Market Revenue Worldwide from 2022 to 2030, Statista, https://www.statista.com/statistics/1295784/metaverse-market-size (last visited May 3, 2023); Metaverse Market by Component (Hardware, Software (Extended Reality Software, Gaming Engine, 3D Mapping, Modeling & Reconstruction, Metaverse Platform, Financial Platform), and Professional Services), Vertical and Region – Global Forecast to 2027, Markets and Markets (Apr. 27, 2023), https://www.marketsandmarkets.com/Market-Reports/metaverse-market-166893905.html; see also, Press Release, Metaverse Market Size Worth $ 824.53 Billion, Globally, by 2030 at 39.1% CAGR, Verified Market Research (Jul. 13, 2022), https://www.prnewswire.com/news-releases/metaverse-market-size-worth–824-53-billion-globally-by-2030-at-39-1-cagr-verified-market-research-301585725.html.

[45] See, e.g., Megan Farokhmanesh, Will the Metaverse Live Up to the Hype? Game Developers Aren’t Impressed, Wired (Jan. 19, 2023), https://www.wired.com/story/metaverse-video-games-fortnite-zuckerberg; see also Mitch Wagner, The Metaverse Hype Bubble Has Popped. What Now?, Fierce Electronics (Feb. 24, 2023), https://www.fierceelectronics.com/embedded/metaverse-hype-bubble-has-popped-what-now.

[46] Garrett A. Johnson, et al., Privacy and Market Concentration: Intended and Unintended Consequences of the GDPR, Management Science 1 (forthcoming 2023).

[47] Jian Jia, et al., The Short-Run Effects of GDPR on Technology Venture Investment, NBER Working Paper 25248, 4 (2018), available at https://www.nber.org/system/files/working_papers/w25248/w25248.pdf.

[48] Samuel G. Goldberg, Garrett A. Johnson, & Scott K. Shriver, Regulating Privacy Online: An Economic Evaluation of GDPR (2021), available at https://www.ftc.gov/system/files/documents/public_events/1588356/johnsongoldbergshriver.pdf.

[49] Rebecca Janßen, Reinhold Kesler, Michael Kummer, & Joel Waldfogel, GDPR and the Lost Generation of Innovative Apps, NBER Working Paper 30028, 2 (2022), available at https://www.nber.org/system/files/working_papers/w30028/w30028.pdf.

[50] Rajkumar Venkatesan, S. Arunachalam & Kiran Pedada, Short Run Effects of Generalized Data Protection Act on Returns from AI Acquisitions, University of Virginia Working Paper 6 (2022), available at: https://conference.nber.org/conf_papers/f161612.pdf. (“On average, GDPR exposure reduces the ROA of firms. We also find that GDPR exposure increases the ROA of firms that make AI acquisitions for improving customer experience, and cybersecurity. Returns on AI investments in innovation and operational efficiencies are unaffected by GDPR.”)

[51] For a detailed discussion of the empirical literature concerning the GDPR, see Garrett Johnson, Economic Research on Privacy Regulation: Lessons From the GDPR And Beyond, NBER Working Paper 30705 (2022), available at https://www.nber.org/system/files/working_papers/w30705/w30705.pdf.

ICLE Comments to European Commission on AI Competition

Executive Summary We thank the European Commission for launching this consultation on competition in generative AI. The International Center for Law & Economics (“ICLE”) is . . .

Executive Summary

We thank the European Commission for launching this consultation on competition in generative AI. The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

In our comments, we express concern that policymakers may equate the rapid rise of generative AI services with a need to intervene in these markets when, in fact, the opposite is true. As we explain, the rapid growth of AI markets, as well as the fact that new market players are thriving, suggests competition is intense. If incumbent firms could easily leverage their dominance into burgeoning generative AI markets, we would not have seen the growth of generative AI unicorns such as OpenAI, Midjourney, and Anthropic, to name but a few.

Of course, this is not to say that generative AI markets are not important—quite the opposite. Generative AI is already changing the ways that many firms do business and improving employee productivity in many industries.[1] The technology is also increasingly useful in the field of scientific research, where it has enabled the creation of complex models that expand scientists’ reach.[2] Against this backdrop, Commissioner Margrethe Vestager was right to point out that it “is fundamental that these new markets stay competitive, and that nothing stands in the way of businesses growing and providing the best and most innovative products to consumers.”[3]

But while sensible enforcement is of vital importance to maintain competition and consumer welfare, knee-jerk reactions may yield the opposite outcomes. As our comments explain, overenforcement in the field of generative AI could cause the very harms that policymakers seek to avert. For instance, preventing so-called “big tech” firms from competing in these markets (for example, by threatening competition intervention as soon as they embed generative AI services in their ecosystems or seek to build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading generative-AI firms in check. In short, competition in AI markets is important, but trying naïvely to hold incumbent tech firms back out of misguided fears they will come to dominate this space is likely to do more harm than good.

Our comment proceeds as follows. Section I summarizes recent calls for competition intervention in generative AI markets. Section II argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”). Section III explains why these effects are unlikely to play a meaningful role in generative-AI markets. Section IV concludes by offering five key takeaways to help policymakers (including the Commission) better weigh the tradeoffs inherent to competition intervention in generative-AI markets.

I. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position.”[4] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[5]

While some of these claims continue even today (for example, “big data” is a key component of the U.S. Justice Department’s (DOJ) Google Search and adtech antitrust suits),[6] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022 and the advent of AI image-generation services like Midjourney and Dall-E have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain in the early stages of mainstream adoption and are still in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that were purportedly made during the formative years of Web 2.0.[7] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[8] As Lina Khan, chair of the U.S. Federal Trade Commission (FTC), put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI”.[9]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Rather than assessing their prior assumptions based on the current technological moment, enforcers’ top priority appears to be figuring out how to rapidly and almost reflexively deploy existing competition tools to address the presumed competitive failures presented by generative AI.[10]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[11]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in an ongoing consultation, the European Commission asks: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[12] Unsurprisingly, the FTC has likewise been vocal about the risks posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[13]

Certainly, it stands to reason that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for generative-AI services. After all, it is widely recognized that data is an essential input for generative AI.[14] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind (with systems such as AlphaGo) and Meta’s AI research teams have routinely made headlines.[15] Apple and Amazon also have vast experience with AI assistants, and all of these firms use AI technology throughout their platforms.[16]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast data troves to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[17] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

In these comments, we suggest that there are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have largely focused on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

II. Data-Network Effects Theory and Enforcement

Proponents of tougher interventions by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[18] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[19] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, for example, that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[20]

Right off the bat, it is important to note the conceptual problem these claims face. Because data can be used to improve the quality of products and/or to subsidize their use, the idea of data as an entry barrier suggests that any product improvement or price reduction made by an incumbent could be a problematic entry barrier to any new entrant. This is tantamount to an argument that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if competition were treated as a problem, as it would imply that firms should under-compete—i.e., should forgo consumer-welfare enhancements—in order to foster a greater number of firms in a given market simply for its own sake.[21]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[22] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[23] The authors ultimately conclude that data-network effects can be of different magnitudes and have varying effects on firms’ incumbency advantage.[24] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[25]

This is echoed by other economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[26] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[27]

This possibility is also implicit in Hagiu and Wright’s paper.[28] Indeed, the authors’ theoretical model rests on an important distinction between within-user data advantages (that is, having access to more data about a given user) and across-user data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have nonetheless been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[29]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents,” the “Furman Report” created for the UK government cited only two empirical economic studies, and they offer directly contradictory conclusions with respect to the strength of data advantages.[30] Nevertheless, the Furman Report concludes that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[31] and adopts without reservation “convincing” evidence from non-economists that has no apparent empirical basis.[32]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[33]

As a result, the Commission cleared the merger on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[34] The Commission will likely focus on similar issues during its ongoing investigation of Microsoft’s investment into OpenAI.[35]

Along similar lines, the FTC’s complaint to enjoin Meta’s purchase of a virtual-reality (VR) fitness app called “Within” relied, among other things, on the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[36]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s adtech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[37] Similarly, in its search complaint, the agency argues that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[38]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s newly published guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[39] Likewise, the UK Competition and Markets Authority (CMA) warns against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[40]

In short, competition authorities around the globe have been taking an increasingly aggressive stance on data-network effects. Among the ways this has manifested is in basing enforcement decisions on fears that data collected by one platform might confer a decisive competitive advantage in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

III. Data-Incumbency Advantages in Generative-AI Markets

Given the assertions canvassed in the previous section, it would be reasonable to assume that firms such as Google, Meta, and Amazon should be in pole position to dominate the burgeoning market for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data, the likes of which their rivals could only dream of when they launched their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[41]

To date, however, this is not how things have unfolded—although it bears noting these markets remain in flux and the competitive landscape is susceptible to change. The first significantly successful generative-AI service arguably came neither from Meta—which had been working on chatbots for years and had access to what may be the world’s largest database of actual chats—nor from Google. Instead, the breakthrough came from a relatively unknown firm called OpenAI.

OpenAI’s ChatGPT service currently holds an estimated 60% of the market (though reliable numbers are somewhat elusive).[42] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than the previous record holder, TikTok.[43] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[44] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[45] In short, at the time of writing, ChatGPT appears to be the most popular chatbot, and the entry of large players such as Google Bard or Meta AI appears to have had little effect thus far on its market position.[46]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[47] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[48]

This raises several crucial questions: how have these AI upstarts managed to be so successful, and is their success just a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either of these questions dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[49]
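To illustrate the intuition, consider a purely stylized example (an illustrative assumption on our part, not a result reported in the studies cited here): suppose a service’s quality rises only logarithmically with the volume of training data, $q(n) = a \log(1 + n)$. The marginal benefit of one more observation is then $q'(n) = a / (1 + n)$, which shrinks toward zero as $n$ grows. On such a curve, a tenfold data advantage yields only a fixed additive quality increment, one that may matter little once both firms have passed the “enough data” threshold discussed below.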

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[50]

In other words, being the firm with the most data appears to be far less important than having enough data. This lower bar may be accessible to far more firms than one might initially think possible. And obtaining enough data could become even easier—that is, the volume of required data could become even smaller—with technological progress. For instance, synthetic data may provide an adequate substitute for real-world data,[51] or may even outperform real-world data.[52] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[53]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, whereas other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[54]

Consider, for instance, a user who wants to generate an image of a basketball. A model trained indiscriminately on a vast number of public photos in which a basketball appears amid copious other image data may produce an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images could readily yield far superior results.[55] In one important example:

[t]he model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[56]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than on maximizing training datasets.[57] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[58] Second, the real challenge in creating cutting-edge AI lies not so much in collecting data as in designing innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[59]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be the data held by large incumbent platforms in other markets, but rather data specific to the market in which the startup is active or, even better, to the particular problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[60]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data may often confer marginal benefits, there is little sense these are ultimately decisive.[61] As a result, incumbent platforms’ access to vast numbers of users and data in their primary markets might only marginally affect their AI competitiveness.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[62] Examples of this abound in digital markets. Google overthrew Yahoo, despite initially having access to far fewer users and far less data; Google and Apple overcame Microsoft in the smartphone OS market despite having comparatively tiny ecosystems (at the time) to leverage; and TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger user bases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[63] and TikTok’s clever algorithm) appear to have played a far more significant role than initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than with what data it did (or did not) own. Going forward, the ability of OpenAI and its rivals to offer and monetize compelling stores for custom versions of their generative-AI technology will arguably play a much larger role than (and contribute to) their ownership of data.[64] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data).

For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[65] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to extract and organize the right information is more important than simply owning vast troves of data.[66] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform.

Indeed, if data ownership consistently conferred a significant competitive advantage, the newer entrants discussed above would not be where they are today. This does not mean that data is worthless, of course. Rather, it means that competition authorities should not assume that the mere possession of data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm is premature.

IV. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data (and data network effects) are not the source of barriers to entry that they are sometimes made out to be. The picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is even truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[67]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five main lessons emerge:

  1. Data can be (very) valuable, but beyond a certain threshold, those benefits tend to diminish. In other words, having the most data is less important than having enough (see the illustrative sketch following this list);
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps effects stemming from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.
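
The first of these lessons can be made concrete with a deliberately stylized sketch. The logarithmic quality curve below is an assumption chosen purely to illustrate diminishing returns; it is not taken from any of the studies cited in this paper:

# Stylized illustration of diminishing returns to training data (lesson 1).
# ASSUMPTION: model quality grows with the log of the number of training
# examples; this functional form is invented solely for illustration.
import math

def quality(num_examples: int) -> float:
    """Toy quality score; only its shape (diminishing returns) matters."""
    return math.log10(num_examples)

scarce, enough, most = 1_000, 1_000_000, 10_000_000
print(quality(enough) - quality(scarce))  # 3.0: large gain from acquiring "enough" data
print(quality(most) - quality(enough))    # 1.0: modest gain from holding ten times more

On this toy curve, moving from scarce data to "enough" data yields three times the quality gain of a further tenfold increase, which is the intuition behind the first lesson.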

These lessons have important ramifications for competition-policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, Dall-E, Midjourney, etc.

This failure suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma,”[68] something about the incumbent platforms’ existing services and capabilities was holding them back in those markets. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI market starts addressing issues of monetization and scale.[69] But it does mean that assumptions about a firm’s market power based on its possession of data are off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that the most significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms—although nothing is certain at this stage. Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), it is unclear that the remedies being contemplated would do more good than harm. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative-AI markets.[70] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[71]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that leads competition authorities to believe that data-incumbency advantages are likely to harm competition in generative-AI markets—or even in the data-intensive Web 2.0 markets that preceded them. Indeed, competition-enforcement or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

 

[1] See, e.g., Michael Chui, et al., The Economic Potential of Generative AI: The Next Productivity Frontier, McKinsey (Jun. 14, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-AI-the-next-productivity-frontier.

[2] See, e.g., Zhuoran Qiao, Weili Nie, Arash Vahdat, Thomas F. Miller III, & Animashree Anandkumar, State-Specific Protein–Ligand Complex Structure Prediction with a Multiscale Deep Generative Model, 6 Nature Machine Intelligence, 195-208 (2024); see also, Jaemin Seo, Sang Kyeun Kim, Azarakhsh Jalalvand, Rory Conlin, Andrew Rothstein, Joseph Abbate, Keith Erickson, Josiah Wai, Ricardo Shousha, & Egemen Kolemen, Avoiding Fusion Plasma Tearing Instability with Deep Reinforcement Learning, 626 Nature, 746-751 (2024).

[3] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (Jan. 9, 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85.

[4] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[5] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23 (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors.”).

[6] See, generally, Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010 (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[7] See, e.g., Press Release, European Commission, supra note 3; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (Nov. 3, 2023), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[8] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”);
Bertin Martins, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[9] See Rana Foroohar, The Great US-Europe Antitrust Divide, FT (Feb. 5, 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14.

[10] See, e.g., Press Release, European Commission, supra note 3.

[11] See infra, Section II. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has LLaMa. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[12] Press Release, European Commission, supra note 3.

[13] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023), at 4, https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[14] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (Sep. 15, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[15] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; see, e.g., Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (Dec. 14, 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set; see also, A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research; see also, 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation (last visited Jan. 18, 2023).

[16] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (Oct. 4, 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (Nov. 20, 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; see also, Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (July 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon.

[17] See infra Section III.

[18] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[19] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233, https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, http://wrap.warwick.ac.uk/134220 (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[20] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[21] See also Yun, supra note 19 at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[22] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects), see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1281, 1344 (2021).

[23] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[24] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms…”.

[25] Id.

[26] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 10080, 101031 (2021).

[27] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Pol’y 143, 167 (2020).

[28] See Hagiu & Wright, supra note 23.

[29] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf; see also Manne & Auer, supra note 22, at 1330.

[30] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[31] Id. at 34.

[32] Id. at 35. To its credit, it should be noted, the Furman Report does counsel caution before mandating access to data as a remedy to promote competition. See id. at 75. That said, the Furman Report does maintain that such a remedy should certainly be on the table because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. In fact, the evidence does not show this.

[33] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf at 455.

[34] Id. at 896.

[35] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules.

[36] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[37] Amended Complaint (D.D.C), supra note 6 at ¶37.

[38] Amended Complaint (E.D. Va), supra note 6 at ¶8.

[39] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[40] Merger Assessment Guidelines, Competition and Mkts. Auth. (2021) at ¶7.19(e), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[41] Furman Report, supra note 30, at ¶4.

[42] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c.

[43] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[44] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited, Jan. 12, 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[45] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (Jun. 5, 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard.

[46] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (Sep. 27, 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023), https://blog.google/technology/ai/bard-google-ai-search-updates.

[47] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023), https://yon.fun/midjourney-statistics; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023), https://approachableai.com/midjourney-statistics.

[48] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023), https://blog.google/products/search/google-search-generative-ai-october-update; Imagine with Meta AI, Meta (last visited Jan. 12, 2024), https://imagine.meta.com.

[49] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[50] Manne & Auer, supra note 22, at 1345.

[51] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[52] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[53] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[54] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan. 15, 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[55] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023), https://arxiv.org/abs/2309.16671.

[56] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[57] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project.

[58] Id.; see also GSM8K, Papers with Code (last visited Jan. 18, 2023), available at https://paperswithcode.com/dataset/gsm8k; MATH Dataset, GitHub (last visited Jan. 18, 2024), available at https://github.com/hendrycks/math.

[59] Lee, supra note 57.

[60] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780.).

[61] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[62] Or, as John Yun puts it, data is only a small component of digital firms’ production function. See Yun, supra note 19, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[63] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History–Computer (Aug. 8, 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly.

[64] Introducing the GPT Store, OpenAI (Jan. 10, 2024), https://openai.com/blog/introducing-the-gpt-store.

[65] See Michael Schade, How ChatGPT and Our Language Models are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (Jan. 1, 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021; Danny Hernandez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020), https://arxiv.org/abs/2005.04305.

[66] See Yun, supra note 19 at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[67] Lerner, supra note 60, at 4-5 (emphasis added).

[68] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[69] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[70] See Hagiu & Wright, supra note 23, at 23 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”); see also Lerner, supra note 60.

[71] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).

RE: Proposed Amendments to 16 CFR Parts 801–803—Hart-Scott-Rodino Coverage, Exemption, and Transmittal Rules, Project No. P239300

Dear Chair Khan, Commissioners Slaughter and Bedoya, and General Counsel Dasgupta, The International Center for Law & Economics (ICLE) respectfully submits this letter in response . . .

Dear Chair Khan, Commissioners Slaughter and Bedoya, and General Counsel Dasgupta,

The International Center for Law & Economics (ICLE) respectfully submits this letter in response to your June 29, 2023, NPRM regarding amendments to the premerger notification rules that implement the Hart-Scott-Rodino Antitrust Improvements Act (HSR Act) and to the Premerger Notification and Report Form and Instructions.

ICLE is a nonprofit, nonpartisan research center working to promote the use of law & economics methodologies to inform public policy debate. We have a long history of participation in regulatory proceedings relating to competition and antitrust law, including recent revisions to the merger guidelines[1] and the proposed revisions to the HSR premerger notification process.[2] We are consistently grateful for the opportunity to participate in proceedings such as these.

We write to express our concern about an important omission in the FTC’s proposed changes to the premerger notification form: its failure to address the requirements of the Regulatory Flexibility Act (RFA).[3] We appreciate your interest in this matter and the opportunity to share our concern with your offices.

This concern involves two legislative frameworks: the HSR premerger notification process and the requirements of the RFA. Under the HSR Act’s amendments to the Clayton Act, firms engaging in mergers above a statutorily defined minimum value[4]—including many that would involve smaller businesses to which the RFA applies—are required to provide information about a proposed merger to the FTC and the Department of Justice (DOJ) before the transaction can close. To bolster information gathering in merger enforcement, the FTC (with concurrence from the DOJ) proposed an extensive set of amendments to the filing process outlined by the HSR Act.[5] The proposed changes to the HSR process would dramatically expand the disclosure obligations for merging companies that meet the minimum valuation threshold.[6]

Under the RFA, federal agencies “shall prepare and make available for public comment an initial regulatory flexibility analysis. . . describ[ing] the impact of the proposed rule on small entities,”[7] except where “the head of the agency certifies that the rule will not, if promulgated, have a significant economic impact on a substantial number of small entities.”[8] If the agency believes the amendment will not have a substantial impact on small businesses, then it must provide “the factual basis” for its conclusion.[9]

The statement certifying that the proposed HSR changes in the NPRM will not affect small businesses reads, in full:

Because of the size of the transactions necessary to invoke an HSR Filing, the premerger notification rules rarely, if ever, affect small entities. The 2000 amendments to the Act exempted all transactions valued at $50 million or less, with subsequent automatic adjustments to take account of changes in Gross National Product resulting in a current threshold of $111 million. Further, none of the proposed amendments expands the coverage of the premerger notification rules in a way that would affect small entities. Accordingly, the Commission certifies that these proposed amendments will not have a significant economic impact on a substantial number of small entities.[10]

Unfortunately, this is insufficient to satisfy the requirements of the RFA. Although the FTC stresses the $111 million HSR threshold to assert that small entities will not be affected, the Small Business Administration (SBA) “generally defines a small business as an independent business having fewer than 500 employees.”[11] The SBA also offers more detailed, industry-specific identification of small businesses.[12] Indeed, the NPRM cites to the SBA’s own standards, but these, too, do not align with the FTC’s “factual basis” statement, and it is not evident that the Commission sufficiently delved into those standards to understand their relevance for the size thresholds under the HSR Act.

Even a quick review of the SBA’s “Small Business Size Standards by NAICS Industry” table reveals that the SBA classifies the size of a firm based on either annual receipts or number of employees,[13] depending upon the characteristics of their industry.[14] Neither of these, it should go without saying, is the same as a “size-of-transaction” threshold under the HSR Act. Nor does the NPRM’s “factual basis” statement contain information sufficient to determine that there is any correlation between the SBA’s size thresholds and the size of a transaction (which typically represents something between the discounted present value of a firm’s expected returns under new ownership and current ownership over an indefinite time period).

Despite the FTC’s claim that the $111 million deal threshold will ensure that small businesses are not substantially affected, the agency’s own data from 2022 shows that nearly a quarter of all HSR filings covered transactions involving firms with sales of $50 million or less.[15] The same data shows that, out of the 3029 reported transactions in 2022, 513 involved firms with between $50 and $100 million in sales and 305 with between $100 and $150 million in sales. Here, again, the SBA’s metrics for identifying small businesses bear emphasis: where the SBA relies on dollar values instead of employee headcounts to define small businesses at all, it does so based on annual average receipts, not on the overall value of the firm.

This distinction underscores a point made in a letter filed by the App Association, a trade group representing small technology firms, that it is important not to conflate valuation with size.[16] A company, such as an innovative tech startup, can have a small number of employees but a high value based on projected sales, intellectual property, and forthcoming products. Indeed, the App Association notes that a number of its members are already subject to HSR disclosures and that that number can only increase under the proposed amendments.[17]

To provide further context regarding whether many of these deals involve small businesses, a 2013 CrunchBase dataset showed that the average successful American startup sold for $242.9 million.[18] Furthermore, the FTC’s 2022 HSR report highlights at least one challenged transaction involving a small business: Meta/Within.[19] Within, a virtual-reality startup with 58 employees, was acquired by Meta for $400 million.[20] Not only was there an HSR filing, but the FTC attempted to challenge the transaction—and lost in district court.[21]

Small businesses are clearly burdened by the HSR premerger notification requirements—and this burden would only increase under the proposed changes. By the FTC’s own estimate, the new requirements would quadruple the hours required to prepare an HSR filing and raise costs by $350 million. By other, more realistic estimates, that increase in work hours would entail a cost of more than $1.6 billion[22]—or, indeed, considerably more.[23] There is no question that drastically increasing the cost of merger filings will make it much harder for small businesses to merge or be acquired, which is a primary form of success for small businesses.
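
To make these figures concrete, the back-of-the-envelope arithmetic below simply recombines the per-filing averages and the projected filing volume reported in notes 22 and 23; the inputs are the Chamber of Commerce survey's figures and the FTC's own filing projection, not our own estimates:

# Illustrative check of the compliance-cost totals cited in notes 22-23.
# All inputs are taken from the Chamber survey and the FTC's FY 2023 projection.
projected_filings = 7_096            # FTC-projected HSR filings for FY 2023
cost_current_per_filing = 79_569     # average survey estimate, current rules ($)
cost_proposed_per_filing = 313_828   # average survey estimate, proposed rules ($)

total_current = projected_filings * cost_current_per_filing    # ~ $565 million
total_proposed = projected_filings * cost_proposed_per_filing  # ~ $2.23 billion
increase = total_proposed - total_current                      # ~ $1.66 billion
print(total_current, total_proposed, increase)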

Indeed, the NPRM’s proposed changes are, in part, specifically designed to affect small businesses. “Acquisitions of small companies can cause harm, including in sectors where competition occurs on a local level. . . . Thus, the Commission proposes several changes to expand the requirements for information related to prior acquisitions beyond what is currently required by Item 8.”[24] Furthermore, “given the difficulties in determining the value of small or nascent companies, the Commission believes it would be less burdensome for filers to report all acquisitions. . . .”[25]  Indeed, the FTC is aware of the potential burden on small businesses that such an approach would entail, but nevertheless aims to ensure that its proposed rules “still captur[e] acquisitions of entities worth less than $10 million.”[26]

And there is yet a further problem: These concerns take into account only the direct costs the NPRM would impose on small businesses. But, as the National Federation of Independent Business highlighted in 2023, several agencies have arguably failed to comply with the RFA by failing to consider indirect effects on small businesses.[27] No such analysis of indirect effects is provided in the NPRM—and, indeed, as noted above, a clear intent of the NPRM is to reduce the likelihood of small-business acquisition by reducing the incentive for firms to serially acquire small businesses. Doing so, of course, reduces funders’ incentives to invest in startups and small businesses and raises these companies’ cost of capital. Arguably, that increase is itself a direct cost, but its indirect effect is certainly significant to the health of small businesses in the United States.

The dramatic changes to the HSR premerger notification requirements proposed by the FTC have already created substantial uncertainty within the antitrust bar. Procedural defects such as failing to comply with the requirements of the RFA increase the likelihood that any rules adopted by the FTC will be challenged in court. This would increase the uncertainty (and thus the cost) surrounding the HSR process. This would be an unfortunate outcome. Fortunately, it is one that can be avoided if the FTC addresses these issues prior to finalizing its proposed rules.

[1] Geoffrey A. Manne, Dirk Auer, Brian Albrecht, Eric Fruits, Daniel J. Gilman, & Lazar Radic, Comments of the International Center for Law and Economics on the FTC & DOJ Draft Merger Guidelines, International Center for Law and Economics (Sept. 18, 2023), https://laweconcenter.org/resources/comments-of-the-international-center-for-law-and-economics-on-the-ftc-doj-draft-merger-guidelines/.

[2] Brian Albrecht, Dirk Auer, Daniel J. Gilman, Gus Hurwitz, & Geoffrey A. Manne, Comments of the International Center for Law & Economics on Proposed Changes to the Premerger Notification Rules, International Center for Law and Economics (Sept. 27, 2023), https://laweconcenter.org/resources/comments-of-the-international-center-for-law-economics-on-proposed-changes-to-the-premerger-notification-rules/.

[3] 5 U.S.C. §§ 601-612 (2018).

[4] 15 U.S.C. § 18a(a)(2) (2018).

[5] NPRM, 88 FR 42178 (Jun. 29, 2023).

[6] See id. at 42208 (estimating the hours and expenses required to comply with the new rules). According to antitrust practitioners, however, the NPRM’s estimate likely substantially underestimates the true burden and cost of the proposed rules. See, e.g., Sean Heather, Antitrust Experts Reject FTC/DOJ Changes to Merger Process, Chamber of Commerce (Sept. 19, 2023), https://www.uschamber.com/finance/antitrust/antitrust-experts-reject-ftc-doj-changes-to-merger-process.

[7] 5 U.S.C. § 603(a) (2018).

[8] 5 U.S.C. § 605(b) (2018).

[9] Id.

[10] NPRM, 88 FR 42178, 42208 (Jun. 29, 2023).

[11] Frequently Asked Questions, U.S. Small Bus. Admin. Off. of Advoc. (2023), https://advocacy.sba.gov/wp-content/uploads/2023/03/Frequently-Asked-Questions-About-Small-Business-March-2023-508c.pdf.

[12] See 13 CFR § 121.101, et seq. (1996).

[13] 13 CFR § 121.201 (2024).

[14] Indeed, the SBA’s standards entail a review of a wide range of such characteristics. See 13 CFR § 121.102 (1996) (“SBA considers economic characteristics comprising the structure of an industry, including degree of competition, average firm size, start-up costs and entry barriers, and distribution of firms by size. It also considers technological changes, competition from other industries, growth trends, historical activity within an industry, unique factors occurring in the industry which may distinguish small firms from other firms, and the objectives of its programs and the impact on those programs of different size standard levels.”).

[15] See Fed. Trade Comm’n and Dep’t of Just., Hart-Scott-Rodino Annual Report (2022), at Table IX, available at https://www.ftc.gov/system/files/ftc_gov/pdf/FY2022HSRReport.pdf.

[16] See Letter from Morgan Reed, President of App Association, to Lina Khan, Chair of Fed. Trade Comm’n, and Members of Congress (Feb. 1, 2024), available at https://actonline.org/wp-content/uploads/App-Association-HSR-RFA-Ltr-1-Feb-2024-1.pdf.

[17] See id.

[18] See Mark Lennon, CrunchBase Reveals: The Average Successful Startup Raises $41M, Exits at $242.9M, TechCrunch (Dec. 14, 2013), https://techcrunch.com/2013/12/14/crunchbase-reveals-the-average-successful-startup-raises-41m-exits-at-242-9m.

[19] See Fed. Trade Comm’n and Dep’t of Just., Hart-Scott-Rodino Annual Report (2022), available at https://www.ftc.gov/system/files/ftc_gov/pdf/FY2022HSRReport.pdf.

[20] See, e.g., Within (Virtual Reality) Overview, Pitchbook (last visited Feb. 29, 2024), https://pitchbook.com/profiles/company/117068-59#overview.

[21] In the Matter of Meta/Zuckerberg/Within, Fed. Trade Comm’n Docket No. 9411 (Aug. 11, 2022), https://www.ftc.gov/legal-library/browse/cases-proceedings/221-0040-metazuckerbergwithin-matter.

[22] See Albrecht, et al., supra note 2, at 7 (“The U.S. Chamber of Commerce conducted ‘a survey of 70 antitrust practitioners asking them questions about the proposed revisions to the HSR merger form and the new draft merger guides.’ Based on average answers from the survey respondents, the new rules would increase compliance costs by $1.66 billion, almost five times the FTC’s $350 million estimate.”).

[23] See id. (“For the current rules, the average survey response puts the cost of compliance at $79,569. Assuming there are 7,096 filings (as the FTC projects for FY 23), the total cost under the current rules would be $565 million. Under the new rules, the average survey response estimates the expected cost of compliance to be $313,828 per transaction, for a total cost of $2.23 billion.”) (emphasis added).

[24] NPRM, 88 FR 42178, 42203 (Jun. 29, 2023).

[25] Id. at 42204 (emphasis added).

[26] Id. (emphasis added).

[27] See Rob Smith, The Regulatory Flexibility Act: Turning a Paper Tiger Into a Legitimate Constraint on One-Size-Fits-All Agency Rulemaking, NFIB Small Business Legal Center (May 2, 2023), https://strgnfibcom.blob.core.windows.net/nfibcom/NFIB-RFA-White-paper.pdf (collecting examples).

LONG FORM WRITING

A Choice-of-Law Alternative to Federal Preemption of State Privacy Law

Executive Summary A prominent theme in debates about US national privacy legislation is whether federal law should preempt state law. A federal statute could create . . .

Executive Summary

A prominent theme in debates about US national privacy legislation is whether federal law should preempt state law. A federal statute could create one standard for markets that are obviously national in scope. Another approach is to allow states to be “laboratories of democracy” that adopt different laws so they can discover the best ones.

We propose a federal statute requiring states to recognize contractual choice-of-law provisions, so companies and consumers can choose what state privacy law to adopt. Privacy would continue to be regulated at the state level. However, the federal government would provide for jurisdictional competition among states, such that companies operating nationally could comply with the privacy laws of any one state.

Our proposed approach would foster a double competition aimed at discerning and delivering on consumers’ true privacy interests: market competition to deliver privacy policies that consumers prefer and competition among states to develop the best privacy laws.

Unlike a single federal privacy law, this approach would provide 50 competing privacy regimes for national firms. The choice-of-law approach can trigger competition and innovation in privacy practices while preserving a role for meaningful state privacy regulation.

Introduction

The question of preemption of state law by the federal government has bedeviled debates about privacy regulation in the United States. A prominent theme is to propose a national privacy policy that largely preempts state policies to create one standard for markets that are obviously national. Another approach is to allow states to be “laboratories of democracy” that adopt different laws, with the hope that they will adopt the best rules over time. Both approaches have substantial costs and weaknesses.

The alternative approach we propose would foster a double competition aimed at discerning and delivering on consumers’ true privacy interests: market competition to deliver privacy policies that consumers prefer and competition among states to develop the best privacy laws. Indeed, our proposal aims to obtain the best features—and avoid the worst features—of both a federal regime and a multistate privacy law regime by allowing firms and consumers to agree on compliance with the single regime of their choosing.

Thus, we propose a federal statute requiring states to recognize contractual choice-of-law provisions, so companies and consumers can choose what state privacy law to adopt. Privacy would continue to be regulated at the state level. However, the federal government would provide for jurisdictional competition among states, and companies operating nationally could comply with the privacy laws of any one state.

Unlike a single federal privacy law, this approach would provide 50 competing privacy regimes for national firms. Protecting choice of law can trigger competition and innovation in privacy practices while preserving a role for meaningful state privacy regulation.

The Emerging Patchwork of State Privacy Statutes Is a Problem for National Businesses

A strong impetus for federal privacy legislation is the opportunity national and multinational businesses see to alleviate the expense and liability of having a patchwork of privacy statutes with which they must comply in the United States. Absent preemptive legislation, they could conceivably operate under 50 different state regimes, which would increase costs and balkanize their services and policies without coordinate gains for consumers. Along with whether a federal statute should have a private cause of action, preempting state law is a top issue when policymakers roll up their sleeves and discuss federal privacy legislation.

But while the patchwork argument is real, it may be overstated. There are unlikely ever to be 50 distinct state regimes; rather, a small number of state legislation types is likely, as jurisdictions follow each other’s leads and group together, including by promulgating model state statutes.[1] States don’t follow the worst examples from their brethren, as the lack of biometric statutes modeled on Illinois’s legislation illustrates.[2]

Along with fewer “patches,” the patchwork’s costs will tend to diminish over time as states land on relatively stable policies, allowing compliance to be somewhat routinized.

Nonetheless, the patchwork is far from ideal. It is costly to firms doing business nationally. It costs small firms more per unit of revenue, raising the bar to new entry and competition. And it may confuse consumers about what their protections are (though consumers don’t generally assess privacy policies carefully anyway).

But a Federal Privacy Statute Is Far from Ideal as Well

Federal preemption has many weaknesses and costs as well. Foremost, it may not deliver meaningful privacy to consumers. This is partially because “privacy” is a congeries of interests and values that defy capture.[3] Different people prioritize different privacy issues differently. In particular, the elites driving and influencing legislation may prioritize certain privacy values differently from consumers, so legislation may not serve most consumers’ actual interests.[4]

Those in the privacy-regulation community sometimes assume that passing privacy legislation ipso facto protects privacy, but that is not a foregone conclusion. The privacy regulations issued under the Gramm-Leach-Bliley Act (concerning financial services)[5] and the Health Insurance Portability and Accountability Act (concerning health care)[6] did not usher in eras of consumer confidence about privacy in their respective fields.

The short-term benefits of preempting state law may come with greater long-term costs. One cost is the likely drop in competition among firms around privacy. Today, as some have noted, “Privacy is actually a commercial advantage. . . . It can be a competitive advantage for you and build trust for your users.”[7] But federal privacy regulation seems almost certain to induce firms to treat compliance as the full measure of privacy to offer consumers. Efforts to outperform or ace out one another will likely diminish.[8]

Another long-term cost of preempting state law is the drop in competition among states to provide well-tuned privacy and consumer-protection legislation. Our federal system’s practical genius, which Justice Louis Brandeis articulated 90 years ago in New State Ice v. Liebmann, is that state variation allows natural experiments in what best serves society—business and consumer interests alike.[9] Because variations are allowed, states can amend their laws individually, learn from one another, adapt, and converge on good policy.

The economic theory of federalism draws heavily from the Tiebout model.[10] Charles Tiebout argued that competing local governments could, under certain conditions, produce public goods more efficiently than the national government could. Local governments act as firms in a marketplace for taxes and public goods, and consumer-citizens match their preferences to the providers. Efficient allocation requires mobile people and resources, enough jurisdictions with the freedom to set their own laws, and limited spillovers among jurisdictions (effects of one jurisdiction’s policies on others).
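
A deliberately simple sketch, not drawn from the literature cited here, illustrates the Tiebout intuition: if mobile consumer-citizens can costlessly pick whichever jurisdiction's policy sits closest to their own preferences, the average mismatch between citizens and the rules they live under shrinks as the number of competing jurisdictions grows:

# Toy illustration of Tiebout sorting. Assumptions: citizens are mobile,
# relocation is costless, and there are no spillovers across jurisdictions.
# Each citizen has a preferred policy level in [0, 1]; each jurisdiction offers
# one fixed policy level; every citizen picks the closest match.
import random

random.seed(0)
citizens = [random.random() for _ in range(10_000)]  # preferred policy levels

def avg_mismatch(num_jurisdictions: int) -> float:
    """Average distance between a citizen's preference and the nearest policy."""
    policies = [(j + 0.5) / num_jurisdictions for j in range(num_jurisdictions)]
    return sum(min(abs(c - p) for p in policies) for c in citizens) / len(citizens)

for n in (1, 5, 50):
    print(n, "jurisdictions -> average preference mismatch:", round(avg_mismatch(n), 3))
# One jurisdiction leaves the average citizen far from her preferred policy;
# adding jurisdictions lets citizens sort themselves and the mismatch falls.

The sketch omits relocation costs and spillovers, which is why the conditions listed above matter for whether real-world jurisdictional competition delivers these gains.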

A related body of literature on “market-preserving federalism” argues that strong and self-reinforcing limits on national and local power can preserve markets and incentivize economic growth and development.[11] The upshot of this literature is that when local jurisdictions can compete on law, not only do they better match citizens’ policy preferences, but the rules tend toward greater economic efficiency.

In contrast to the economic gains from decentralization, moving authority over privacy from states to the federal government may have large political costs. It may deepen Americans’ growing dissatisfaction with their democracy. Experience belies the ideal of responsive national government when consumers, acting as citizens, want to learn about or influence the legislation and regulation that governs more and more areas of their lives. The “rejectionist” strain in American politics that Donald Trump’s insurgency and presidency epitomized may illustrate deep dissatisfaction with American democracy that has been growing for decades. Managing a highly personal and cultural issue like privacy through negotiation between large businesses and anonymous federal regulators would deepen trends that probably undermine the government’s legitimacy.

To put a constitutional point on it, preempting states on privacy contradicts the original design of our system, which assigned limited powers to the federal government.[12] The federal government’s enumerated powers generally consist of national public goods—particularly defense. The interstate commerce clause, inspired by state parochialism under the Articles of Confederation, exists to make commerce among states (and with tribes) regular; it is not rightly a font of power to regulate the terms and conditions of commerce generally.[13]

Preempting state law does not necessarily lead to regulatory certainty, as is often imagined. Section 230 of the Communications Decency Act may defeat once and for all the idea that federal legislation creates certainty.[14] More than a quarter century after its passage, it is hotly debated in Congress and threatened in the courts.[15]

The Fair Credit Reporting Act (FCRA) provides a similar example.[16] Passed in 1970, it comprehensively regulated credit reporting. Since then, Congress has amended it dozens of times, and regulators have made countless alterations through interpretation and enforcement.[17] The Consumer Financial Protection Bureau recently announced a new inquiry into data brokering under the FCRA.[18] That is fine, but it illustrates that the FCRA did not solve problems and stabilize the law. It just moved the jurisdiction to Washington, DC.

Meanwhile, as regulatory theory predicts, credit reporting has become a three-horse race.[19] A few slow-to-innovate firms have captured and maintained dominance thanks partially to the costs and barriers to entry that uniform regulation creates.

Legal certainty may be a chimera while business practices and social values are in flux. Certainty develops over time as industries settle into familiar behaviors and roles.

An Alternative to Preemption: Business and Consumer Choice

One way to deal with this highly complex issue is to promote competition for laws. The late, great Larry Ribstein, with several coauthors over the years, proposed one such legal mechanism: a law market empowered by choice-of-law statutes.[20] Drawing on the notion of market competition as a discovery process,[21] Ribstein and Henry Butler explained:

In order to solve the knowledge problem and to create efficient legal technologies, the legal system can use the same competitive process that encourages innovation in the private sector—that is, competition among suppliers of law. As we will see, this entails enforcing contracts among the parties regarding the applicable law. The greater the knowledge problem the more necessary it is to unleash markets for law to solve the problem.[22]

The proposal set forth below promotes just such competition and solves the privacy-law patchwork problem without the costs of federal preemption. It does this through a simple procedural regulation requiring states to enforce choice-of-law terms in privacy contracts, rather than through a heavy-handed, substantive federal law. Inspired by Butler and Ribstein’s proposal for pluralist insurance regulation,[23] the idea is to make the choice of legal regime a locus of privacy competition.

Modeled on the US system of state incorporation law, our proposed legislation would leave firms generally free to select the state privacy law under which they do business nationally. Firms would inform consumers, as they must to form a contract, that a given state’s laws govern their policies. Federal law would ensure that states respect those choice-of-law provisions, which would be enforced like any other contract term.

This would strengthen and deepen competition around privacy. If firms believed privacy was a consumer interest, they could select highly protective state laws and advertise that choice, currying consumer favor. If their competitors chose relatively lax state law, they could advertise to the public the privacy threats behind that choice. The process would help hunt out consumers’ true interests through an ongoing argument before consumers. Businesses’ and consumers’ ongoing choices—rather than a single choice by Congress followed by blunt, episodic amendments—would shape the privacy landscape.

The way consumers choose in the modern marketplace is a broad and important topic that deserves further study and elucidation. It nevertheless seems clear—and it is rather pat to observe—that consumers do not carefully read privacy policies and balance their implications. Rather, a hive mind of actors, including competitors, advocates, journalists, regulators, and politicians, pores over company policies and practices. Consumers take in branding and advertising, reputation, news, personal recommendations, rumors, and trends to decide on the services they use and how they use them.

That detail should not be overlooked: Consumers may use services differently based on the trust they place in them to protect privacy and related values. Using an information-intensive service is not a proposition to share everything or nothing. Consumers can and do shade their use and withhold information from platforms and services depending on their perceptions of whether the privacy protections offered meet their needs.

There is reason to be dissatisfied with the modern marketplace, in which terms of service and privacy policies are offered to the individual consumer on a “take it or leave it” basis. There is a different kind of negotiation, described above, between the hive mind and large businesses. But once the hive mind and a business have settled on terms, individuals cannot negotiate bespoke policies reflecting their particular wants and needs. This collective decision-making may be why some advocates regard market processes as coercive: Markets do not offer custom choices to each individual but channel consumers into paths cut by the choices of the many.

The solution that orthodox privacy advocates offer does not respond well to this problem, because they would replace “take it or leave it” policies crafted in the crucible of the marketplace with “take it or leave it” policies crafted in a political and regulatory crucible. Their prescriptions are sometimes to require artificial notice and “choice,” such as whether to accept cookies when one visits websites. This, as experience shows, does not reach consumers when they are interested in choosing.

Choice of law in privacy competition is meant to preserve manifold choices when and where consumers actually make them, such as at the decision to transact, and then to let consumers decide how they use the services they have adopted. Let new entrants choose variegated privacy-law regimes, and consumers will choose among them. That does not fix the whole problem, but at least it does not replace consumer choice with an “expert” one-size-fits-all choice.

In parallel to business competition around privacy choice of law, states would compete with one another to provide the most felicitous environment for consumers and businesses. Some states would choose more protection, seeking the rules businesses would choose to please privacy-conscious consumers. Others might choose less protection, betting that consumers prefer goods other than information control, such as free, convenient, highly interactive, and custom services.

Importantly, this mechanism would allow companies to opt in to various privacy regimes based on the type of service they offer, enabling a degree of fine-tuning appropriate for different industries and different activities that no alternative would likely offer. This would not only result in the experimentation and competition of federalism but also enable multiple overlapping privacy-regulation regimes, avoiding the “one-size-doesn’t-fit-all” problem.

While experimentation continued, state policies would probably rationalize and converge over time. There are institutions dedicated to this, such as the Uniform Law Commission, which is at its best when it harmonizes existing laws based on states’ experience.[24]

It is well within the federal commerce power to regulate state enforcement of choice-of-law provisions, because states may refuse to enforce them in order to limit interjurisdictional competition. Controlling that is precisely what the commerce power is for. Utah’s recent Social Media Regulation Act[25] barred enforcement of choice-of-law provisions, an effort to regulate nationally from a state capital. Federally backing contractual choice-of-law selections would curtail this growing problem.

At the same time, our proposed protection for choice-of-law rules would do little more than what contracts already routinely do and courts already routinely enforce in many industries. Contracting parties often specify the governing state’s law and negotiate for the law that best suits their collective needs.

Indeed, sophisticated business contracts increasingly include choice-of-law clauses that state the law that the parties wish to govern their relationship. In addition to settling uncertainty, these clauses might enable the contracting parties to circumvent those states’ laws they deem to be undesirable.[26]

This practice is not only business-to-business. Consumers regularly enter into contracts that include choice-of-law clauses—including regarding privacy law. Credit card agreements, stock and mutual fund investment terms, consumer-product warranties, and insurance contracts, among many other legal agreements, routinely specify the relevant state law that will govern.

In these situations, the insurance company, manufacturer, or mutual fund has effectively chosen the law. The consumer participates in this choice only to the same extent that she participates in any choices related to mass-produced products and services, that is, by deciding whether to buy the product or service.[27]

Allowing contracting parties to create their own legal certainty would likely rankle states. Indeed, “we might expect governments to respond with hostility to the enforcement of choice-of-law clauses. In fact, however, the courts usually do enforce choice-of-law clauses.”[28] With some states trying to regulate nationally and some effectively doing so, the choice the states collectively face is having a role in privacy regulation or no role at all. Competition is better for them than exclusion from the field or minimization of their role through federal preemption of state privacy law. This proposal thus advocates simple federal legislation that preserves firms’ ability to make binding choice-of-law decisions and states’ ability to retain a say in the country’s privacy-governance regime.

Avoiding a Race to the Bottom

Some privacy advocates may object that state laws will not sufficiently protect consumers.[29] Indeed, there is literature arguing that federalism will produce a race to the bottom (i.e., competition leading every state to effectively adopt the weakest law possible), for example, when states offer incorporation laws that are the least burdensome to business interests in a way that arguably diverges from public or consumer interests.[30]

The race-to-the-bottom framing slants the issues and obscures ever-present trade-offs, however. Rules that give consumers high levels of privacy come at a cost in social interaction, price, and the quality of the goods they buy and services they receive. It is not inherently “down” or bad to prefer cheap or free goods and plentiful, social, commercial interaction. It is not inherently “up” or good to opt for greater privacy.

The question is what consumers want. The answers to that question—yes, plural—are the subject of constant research through market mechanisms when markets are free to experiment and are functioning well. Consumers’ demands can change over time through various mechanisms, including experience with new technologies and business models. We argue for privacy on the terms consumers want. The goal is maximizing consumer welfare, which sometimes means privacy and sometimes means sharing personal information in the interest of other goods. There is no race to the bottom in trading one good for another.

Yet the notion of a race to the bottom persists—although not without controversy. In the case of Delaware’s incorporation statutes, the issue is highly contested. Many scholars argue that the state’s rules are the most efficient—that “far from exploiting shareholders, . . . these rules actually benefit shareholders by increasing the wealth of corporations chartered in states with these rules.”[31]

As always, there are trade-offs, and the race-to-the-bottom hypothesis requires some unlikely assumptions. Principal among them, as Jonathan Macey and Geoffrey Miller discuss, is the assumption that state legislators are beholden to the interests of corporations over other constituencies vying for influence. As Macey and Miller explain, the presence of a powerful lobby of specialized and well-positioned corporate lawyers (whose interests are not the same as those of corporate managers) transforms the analysis and explains the persistence and quality of Delaware corporate law.[32]

In much the same vein, there are several reasons to think competition for privacy rules would not succumb to a race to the bottom.

First, if privacy advocates are correct, consumers put substantial pressure on companies to adopt stricter privacy policies. Simply opting in to the weakest state regime would not, as with corporate law, be a matter of substantial indifference to consumers but would (according to advocates) run contrary to their interests. If advocates are correct, firms avoiding stronger privacy laws would pay substantial costs. As a result, the impetus for states to offer weaker laws would be diminished. And, consistent with Macey and Miller’s “interest-group theory” of corporate law,[33] advocates themselves would be important constituencies vying to influence state privacy laws. Satisfying these advocates may benefit state legislators more than satisfying corporate constituencies does.

Second, “weaker” and “stronger” would not be the only dimensions on which states would compete for firms to adopt their privacy regimes. Rather, as mentioned above, privacy law is not one-size-fits-all. Different industries and services entail different implications for consumer interests. States could compete to specialize in offering privacy regimes attractive to distinct industries based on interest groups with particular importance to their economies. Minnesota (home of the Mayo Clinic) and Ohio (home of the Cleveland Clinic), for example, may specialize in health care and medical privacy, while California specializes in social media privacy.

Third, insurance companies are unlikely to be indifferent to the law that the companies they cover choose. Indeed, to the extent that insurers require covered firms to adopt specific privacy practices to control risk, those insurers would likely relish the prospect of outsourcing the oversight of these activities to state law enforcers. States could thus compete to mimic large insurers’ privacy preferences—which would by no means map onto “weaker” policies—to induce insurers to require covered firms to adopt their laws.

If a race to the bottom is truly a concern, the federal government could offer a 51st privacy alternative (that is, an optional federal regime as an alternative to the states’ various privacy laws). Assuming federal privacy regulation would be stricter (an assumption inherent in the race-to-the-bottom objection to state competition), such an approach would ensure that at least one sufficiently strong opt-in privacy regime would always be available. Among other things, this would preclude firms from claiming that no option offers a privacy regime stronger than those of the states trapped in the (alleged) race to the bottom.

Choice of law exists to a degree in the European Union, a trading bloc commonly regarded as uniformly regulated (and commonly regarded as superior on privacy because of a bias toward privacy over other goods). The General Data Protection Regulation (GDPR) gives EU member states broad authority to derogate from its provisions and create state-level exemptions. Article 23 of the GDPR allows states to exempt themselves from EU-wide law to safeguard nine listed broad governmental and public interests.[34] And Articles 85 through 91 provide for derogations, exemptions, and powers to impose additional requirements relative to the GDPR for a number of “specific data processing situations.”[35]

Finally, Article 56 establishes a “lead supervisory authority” for each business.[36] In the political, negotiated processes under the GDPR, this effectively allows companies to shade their regulatory obligations and enforcement outlook through their choices of location. For the United States’ sharper rule-of-law environment, we argue that the choice of law should be explicit and clear.

Refining the Privacy Choice-of-Law Proposal

The precise contours of a federal statute protecting choice-of-law terms in contracts will determine whether it successfully promotes interfirm and interstate competition. Language will also determine its political salability.

Questions include: What kind of notice, if any, should be required to make consumers aware that they are dealing with a firm under a law regime not their own? Consumers are notoriously unwilling to investigate privacy terms—or any other contract terms—in advance, and few would register the significance of a choice-of-law term even if they noticed it. But the competitive dynamics described earlier would probably communicate relevant information to consumers even without any required notice. As always, competitors will have an incentive to ensure consumers are appropriately well-informed when they can diminish their rivals or elevate themselves in comparison by doing so.[37]

Would there be limits on which state’s laws a firm could choose? For example, could a company choose the law of a state where neither the company nor the consumer is domiciled? States would certainly argue that a company should not be able to opt out of the law of the state where it is domiciled. The federal legislation we propose would allow unlimited choice. Such a choice is important if the true benefits of jurisdictional competition are to be realized.

A federal statute requiring states to enforce choice-of-law terms should not override state law denying enforcement of choice-of-law terms that are oppressive, unfair, or improperly bargained for. In cases such as Carnival Cruise Lines v. Shute[38] and The Bremen v. Zapata Off-Shore Co.,[39] the Supreme Court has considered whether forum-selection clauses in contracts might be invalid. The Court has generally upheld such clauses, but they can be oppressive if they require plaintiffs in Maine to litigate in Hawaii, for example, without a substantial reason why Hawaii courts are the appropriate forum. Choice-of-law terms do not impose the cost of travel to remote locations, but they could be used not to establish the law governing the parties but rather to create a strategic advantage unrelated to the law in litigation. Deception built into a contract’s choice-of-law terms should remain grounds for invalidating the contract under state law, even if the state is precluded from barring choice-of-law terms by statute.

The race-to-the-bottom argument raises the question of whether impeding states from overriding contractual choice-of-law provisions would be harmful to state interests, especially since privacy law concerns consumer rights. However, there are reasons to believe race-to-the-bottom incentives would be tempered by greater legal specialization and certainty and by state courts’ ability to refuse to enforce choice-of-law clauses in certain limited circumstances. As Erin O’Hara and Ribstein put it:

Choice-of-law clauses reduce uncertainty about the parties’ legal rights and obligations and enable firms to operate in many places without being subject to multiple states’ laws. These reduced costs may increase the number of profitable transactions and thereby increase social wealth. Also, the clauses may not change the results of many cases because courts in states that prohibit a contract term might apply the more lenient law of a state that has close connections with the parties even without a choice-of-law clause.[40]

Determining when, exactly, a state court can refuse to enforce a firm’s choice of privacy law because of excessive leniency is tricky, but the federal statute could set out a framework for when a court could apply its own state’s law. Much like the independent federal alternative discussed above, specific minimum requirements in the federal law could ensure that any race to the bottom that does occur can go only so far. Of course, it would be essential that any such substantive federal requirements be strictly limited, or else the benefits of jurisdictional competition would be lost.

The converse to the problem of a race to the bottom resulting from state competition is the “California effect”—the prospect of states adopting onerous laws from which no company (or consumer) can opt out. States can regulate nationally through one small tendril of authority: the power to prevent businesses and consumers from agreeing on the law that governs their relationships. If a state regulates in a way that it thinks will be disfavored, it will bar choice-of-law provisions in contracts so consumers and businesses cannot exercise their preference.

Utah’s Social Media Regulation Act, for example, includes mandatory age verification for all social media users,[41] which requires companies to collect proof that consumers either are of age or are not in Utah. To prevent consumers and businesses from avoiding this onerous requirement, Utah bars waivers of the law’s requirements “notwithstanding any contract or choice-of-law provision in a contract.”[42] If parties could choose their law, that would render Utah’s law irrelevant, so Utah cuts off that avenue. This demonstrates the value of a proposal like the one contemplated here.

Proposed Legislation

Creating a federal policy that stops national regulation emanating from state capitols while preserving competition among states and firms is an unusual task. Congress usually creates its own policy and preempts states in that area to varying degrees. There is a well-developed body of law around this type of preemption, which is sometimes implied and sometimes expressed in statute.[43] Our proposal does not operate that way. It merely withdraws state authority to prevent parties from freely contracting about the law that applies to them.

A second, minor challenge concerns the subject matter about which states may not regulate choice of law. Barring states from regulating choice of law entirely is an option, but if the focus is on privacy only, the preemption must be couched to allow regulation of choice of law in other areas. Thus, the scope of “privacy” must be defined in the statutory language.

Finally, the withdrawal of state authority should probably be limited to positive enactments, such as statutes and regulations, leaving intact common-law practice related to choice-of-law provisions.[44] “Statute,” “enactment,” and “provision” are preferable in preemptive language to “law,” which is ambiguous.

These challenges, and possibly more, are tentatively addressed in the following first crack at statutory language, inspired by several preemptive federal statutes, including the Employee Retirement Income Security Act of 1974,[45] the Airline Deregulation Act,[46] the Federal Aviation Administration Authorization Act of 1994,[47] and the Federal Railroad Safety Act.[48]

A state, political subdivision of a state, or political authority of at least two states may not enact or enforce any statute, regulation, or other provision barring the adoption or application of any contractual choice-of-law provision to the extent it affects contract terms governing commercial collection, processing, security, or use of personal information.

Conclusion

This report introduces a statutory privacy framework centered on individual states and consistent with the United States’ constitutional design. At the same time, it shields companies from the burden created by the intersection of that design with modern commerce and communication: the complexity and inefficiency of answering to multiple regulators. It fosters an environment conducive to jurisdictional competition and experimentation.

We believe giving states the chance to compete under this approach should be explored in lieu of consolidating privacy law in the hands of one central federal regulator. Competition among states to provide optimal legislation and among businesses to provide optimal privacy policies will help discover and deliver on consumers’ interests, including privacy, of course, but also interactivity, convenience, low costs, and more.

Consumers’ diverse interests are not known now, and they cannot be predicted reliably for the undoubtedly interesting technological future. Thus, it is important to have a system for discovering consumers’ interests in privacy and the regulatory environments that best help businesses serve consumers. It is unlikely that a federal regulatory regime can do these things. The federal government could offer a 51st option in such a system, of course, so advocates for federal involvement could see their approach tested alongside the states’ approaches.

[1] See Uniform Law Commission, “What Is a Model Act?,” https://www.uniformlaws.org/acts/overview/modelacts.

[2] 740 Ill. Comp. Stat. 14/15 (2008).

[3] See Jim Harper, Privacy and the Four Categories of Information Technology, American Enterprise Institute, May 26, 2020, https://www.aei.org/research-products/report/privacy-and-the-four-categories-of-information-technology.

[4] See Jim Harper, “What Do People Mean by ‘Privacy,’ and How Do They Prioritize Among Privacy Values? Preliminary Results,” American Enterprise Institute, March 18, 2022, https://www.aei.org/research-products/report/what-do-people-mean-by-privacy-and-how-do-they-prioritize-among-privacy-values-preliminary-results.

[5] Gramm-Leach-Bliley Act, 15 U.S.C. 6801, § 501 et seq.

[6] Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, § 264.

[7] Estelle Masse, quoted in Ashleigh Hollowell, “Is Privacy Only for the Elite? Why Apple’s Approach Is a Marketing Advantage,” VentureBeat, October 18, 2022, https://venturebeat.com/security/is-privacy-only-for-the-elite-why-apples-approach-is-a-marketing-advantage.

[8] Competition among firms regarding privacy is common, particularly in digital markets. Notably, Apple has implemented stronger privacy protections than most of its competitors have, particularly with its App Tracking Transparency framework in 2021. See, for example, Brian X. Chen, “To Be Tracked or Not? Apple Is Now Giving Us the Choice,” New York Times, April 26, 2021, https://www.nytimes.com/2021/04/26/technology/personaltech/apple-app-tracking-transparency.html. For Apple, this approach is built into the design of its products and offers what it considers a competitive advantage: “Because Apple designs both the iPhone and processors that offer heavy-duty processing power at low energy usage, it’s best poised to offer an alternative vision to Android developer Google which has essentially built its business around internet services.” Kif Leswing, “Apple Is Turning Privacy into a Business Advantage, Not Just a Marketing Slogan,” CNBC, June 8, 2021, https://www.cnbc.com/2021/06/07/apple-is-turning-privacy-into-a-business-advantage.html. Apple has built a substantial marketing campaign around these privacy differentiators, including its ubiquitous “Privacy. That’s Apple.” slogan. See Apple, “Privacy,” https://www.apple.com/privacy. Similarly, “Some of the world’s biggest brands (including Unilever, AB InBev, Diageo, Ferrero, Ikea, L’Oréal, Mars, Mastercard, P&G, Shell, Unilever and Visa) are focusing on taking an ethical and privacy-centered approach to data, particularly in the digital marketing and advertising context.” Rachel Dulberg, “Why the World’s Biggest Brands Care About Privacy,” Medium, September 14, 2021, https://uxdesign.cc/who-cares-about-privacy-ed6d832156dd.

[9] New State Ice Co. v. Liebmann, 285 US 262, 311 (1932) (Brandeis, J., dissenting) (“To stay experimentation in things social and economic is a grave responsibility. Denial of the right to experiment may be fraught with serious consequences to the Nation. It is one of the happy incidents of the federal system that a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.”).

[10] See Charles M. Tiebout, “A Pure Theory of Local Expenditures,” Journal of Political Economy 64, no. 5 (1956): 416–24, https://www.jstor.org/stable/1826343.

[11] See, for example, Barry R. Weingast, “The Economic Role of Political Institutions: Market-Preserving Federalism and Economic Development,” Journal of Law, Economics, & Organization 11, no. 1 (April 1995): 1–31, https://www.jstor.org/stable/765068; Yingyi Qian and Barry R. Weingast, “Federalism as a Commitment to Preserving Market Incentives,” Journal of Economic Perspectives 11, no. 4 (Fall 1997): 83–92, https://www.jstor.org/stable/2138464; and Rui J. P. de Figueiredo Jr. and Barry R. Weingast, “Self-Enforcing Federalism,” Journal of Law, Economics, & Organization 21, no. 1 (April 2005): 103–35, https://www.jstor.org/stable/3554986.

[12] See US Const. art. I, § 8 (enumerating the powers of the federal Congress).

[13] See generally Randy E. Barnett, Restoring the Lost Constitution: The Presumption of Liberty (Princeton, NJ: Princeton University Press, 2014), 274–318.

[14] Protection for Private Blocking and Screening of Offensive Material, 47 U.S.C. 230.

[15] See Geoffrey A. Manne, Ben Sperry, and Kristian Stout, “Who Moderates the Moderators? A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet,” Rutgers Computer & Technology Law Journal 49, no. 1 (2022): 39–53, https://laweconcenter.org/wp-content/uploads/2021/11/Stout-Article-Final.pdf (detailing some of the history of how Section 230 immunity expanded and differs from First Amendment protections); Meghan Anand et al., “All the Ways Congress Wants to Change Section 230,” Slate, August 30, 2023, https://slate.com/technology/2021/03/section-230-reform-legislative-tracker.html (tracking every proposal to amend or repeal Section 230); and Technology & Marketing Law Blog, website, https://blog.ericgoldman.org (tracking all Section 230 cases with commentary).

[16] Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq.

[17] See US Federal Trade Commission, Fair Credit Reporting Act: 15 U.S.C. § 1681, May 2023, https://www.ftc.gov/system/files/ftc_gov/pdf/fcra-may2023-508.pdf (detailing changes to the Fair Credit Reporting Act and its regulations over time).

[18] US Federal Reserve System, Consumer Financial Protection Bureau, “CFPB Launches Inquiry into the Business Practices of Data Brokers,” press release, May 15, 2023, https://www.consumerfinance.gov/about-us/newsroom/cfpb-launches-inquiry-into-the-business-practices-of-data-brokers.

[19] US Federal Reserve System, Consumer Financial Protection Bureau, List of Consumer Reporting Companies, 2021, 8, https://files.consumerfinance.gov/f/documents/cfpb_consumer-reporting-companies-list_03-2021.pdf (noting there are “three big nationwide providers of consumer reports”).

[20] See, for example, Erin A. O’Hara and Larry E. Ribstein, The Law Market (Oxford, UK: Oxford University Press, 2009); Erin A. O’Hara O’Connor and Larry E. Ribstein, “Conflict of Laws and Choice of Law,” in Procedural Law and Economics, ed. Chris William Sanchirico, Encyclopedia of Law and Economics, 2nd ed., ed. Gerrit De Geest (Northampton, MA: Edward Elgar Publishing, 2012); and Bruce H. Kobayashi and Larry E. Ribstein, eds., Economics of Federalism (Northampton, MA: Edward Elgar Publishing, 2007).

[21] See F. A. Hayek, “The Use of Knowledge in Society,” American Economic Review 35, no. 4 (September 1945): 519–30, https://www.jstor.org/stable/1809376?seq=12.

[22] Henry N. Butler and Larry E. Ribstein, “Legal Process for Fostering Innovation” (working paper, George Mason University, Antonin Scalia Law School, Fairfax, VA), 2, https://masonlec.org/site/rte_uploads/files/Butler-Ribstein-Entrepreneurship-LER.pdf.

[23] See Henry N. Butler and Larry E. Ribstein, “The Single-License Solution,” Regulation 31, no. 4 (Winter 2008–09): 36–42, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1345900.

[24] See Uniform Law Commission, “Acts Overview,” https://www.uniformlaws.org/acts/overview.

[25] Utah Code Ann. § 13-63-101 et seq. (2023).

[26] O’Hara and Ribstein, The Law Market, 5.

[27] O’Hara and Ribstein, The Law Market, 5.

[28] O’Hara and Ribstein, The Law Market, 5.

[29] See Cristiano Lima-Strong, “The U.S.’s Sixth State Privacy Law Is Too ‘Weak,’ Advocates Say,” Washington Post, March 30, 2023, https://www.washingtonpost.com/politics/2023/03/30/uss-sixth-state-privacy-law-is-too-weak-advocates-say.

[30] See, for example, William L. Cary, “Federalism and Corporate Law: Reflections upon Delaware,” Yale Law Journal 83, no. 4 (March 1974): 663–705, https://openyls.law.yale.edu/bitstream/handle/20.500.13051/15589/33_83YaleLJ663_1973_1974_.pdf (arguing Delaware could export the costs of inefficiently lax regulation through the dominance of its incorporation statute).

[31] Jonathan R. Macey and Geoffrey P. Miller, “Toward an Interest-Group Theory of Delaware Corporate Law,” Texas Law Review 65, no. 3 (February 1987): 470, https://openyls.law.yale.edu/bitstream/handle/20.500.13051/1029/Toward_An_Interest_Group_Theory_of_Delaware_Corporate_Law.pdf. See also Daniel R. Fischel, “The ‘Race to the Bottom’ Revisited: Reflections on Recent Developments in Delaware’s Corporation Law,” Northwestern University Law Review 76, no. 6 (1982): 913–45, https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=2409&context=journal_articles.

[32] Macey and Miller, “Toward an Interest-Group Theory of Delaware Corporate Law.”

[33] Macey and Miller, “Toward an Interest-Group Theory of Delaware Corporate Law.”

[34] Commission Regulation 2016/679, General Data Protection Regulation art. 23.

[35] Commission Regulation 2016/679, General Data Protection Regulation art. 85–91.

[36] Commission Regulation 2016/679, General Data Protection Regulation art. 56.

[37] See the discussion in endnote 8.

[38] Carnival Cruise Lines v. Shute, 499 US 585 (1991).

[39] The Bremen v. Zapata Off-Shore Co., 407 US 1 (1972).

[40] O’Hara and Ribstein, The Law Market, 8.

[41] See Jim Harper, “Perspective: Utah’s Social Media Legislation May Fail, but It’s Still Good for America,” Deseret News, April 6, 2023, https://www.aei.org/op-eds/utahs-social-media-legislation-may-fail-but-its-still-good-for-america.

[42] Utah Code Ann. § 13-63-401 (2023).

[43] See Bryan L. Adkins, Alexander H. Pepper, and Jay B. Sykes, Federal Preemption: A Legal Primer, Congressional Research Service, May 18, 2023, https://sgp.fas.org/crs/misc/R45825.pdf.

[44] Congress should not interfere with interpretation of choice-of-law provisions. These issues are discussed in Tanya J. Monestier, “The Scope of Generic Choice of Law Clauses,” UC Davis Law Review 56, no. 3 (February 2023): 959–1018, https://digitalcommons.law.buffalo.edu/cgi/viewcontent.cgi?article=2148&context=journal_articles.

[45] Employee Retirement Income Security Act of 1974, 29 U.S.C. § 1144(a).

[46] Airline Deregulation Act, 49 U.S.C. § 41713(b).

[47] Federal Aviation Administration Authorization Act of 1994, 49 U.S.C. § 14501.

[48] Federal Railroad Safety Act, 49 U.S.C. § 20106.

A Competition Perspective on Physician Non-Compete Agreements

Physician non-compete agreements may have significant competitive implications, and effects on both providers and patients, but they are treated variously under the law on . . .

Abstract

Physician non-compete agreements may have significant competitive implications, and effects on both providers and patients, but they are treated variously under the law on a state-by-state basis. A review of the relevant law and the economic literature cannot identify with confidence, or with any generality, the net effects of such agreements on either physicians or health care delivery. In addition to identifying future research projects to inform policy, it is argued that the antitrust “rule of reason” provides a useful and established framework with which to evaluate such agreements in specific health care markets and, potentially, to address those agreements most likely to do significant damage to health care competition and consumers.

A Competition Law & Economics Analysis of Sherlocking

Sherlocking refers to an online platform’s use of nonpublic third-party business data to improve its own business decisions—for instance, by mimicking the successful products . . .

Abstract

Sherlocking refers to an online platform’s use of nonpublic third-party business data to improve its own business decisions—for instance, by mimicking the successful products and services of edge providers. Such a strategy emerges as a form of self-preferencing and, as with other theories about preferential access to data, it has been targeted by some policymakers and competition authorities due to the perceived competitive risks originating from the dual role played by hybrid platforms (acting as both referees governing their platforms, and players competing with the business they host). This paper investigates the competitive implications of sherlocking, maintaining that an outright ban is unjustified. First, the paper shows that, by aiming to ensure platform neutrality, such a prohibition would cover scenarios (i.e., the use of nonpublic third-party business data to calibrate business decisions in general, rather than to adopt a pure copycat strategy) that should be analyzed separately. Indeed, in these scenarios, sherlocking may affect different forms of competition (inter-platform v. intra-platform competition). Second, the paper argues that, in either case, the practice’s anticompetitive effects are questionable and that the ban is fundamentally driven by a bias against hybrid and vertically integrated players.

I. Introduction

The dual role some large digital platforms play (as both intermediary and trader) has gained prominence among the economic arguments used to justify the recent wave of regulation hitting digital markets around the world. Many policymakers have expressed concern about potential conflicts of interest among companies that have adopted this hybrid model and that also control important gateways for business users. In other words, the argument goes, some online firms act not only as regulators who set their platforms’ rules and as referees who enforce those rules, but also as market players who compete with their business users. This raises the fear that large platforms could reserve preferential treatment for their own services and products, to the detriment of downstream rivals and consumers. That, in turn, has led to calls for platform-neutrality rules.

Toward this aim, essentially all of the legislative initiatives undertaken around the world in recent years to enhance competition in digital markets have included anti-discrimination provisions that target various forms of self-preferencing. Self-preferencing, it has been said, serves as the symbol of the current competition-policy zeitgeist in digital markets.[1] Indeed, this conduct is seen as enabling leveraging strategies that would give gatekeepers the chance to entrench their power in core markets and extend it into associated markets.[2]

Against this background, so-called “sherlocking” has emerged as one form of self-preferencing. The term was coined roughly 20 years ago, after Apple updated its own app Sherlock (a search tool on its desktop-operating system) to mimic a third-party application called Watson, which was created by Karelia Software to complement the Apple tool’s earlier version.[3] According to critics of self-preferencing generally and sherlocking in particular, biased intermediation and related conflicts of interest allow gatekeepers to exploit their preferential access to business users’ data to compete against them by replicating successful products and services. The implied assumption is that this strategy is relevant to competition policy even where no intellectual-property rights (IPRs) are infringed and no slavish imitation sanctionable under unfair-competition laws is detected; where such violations are present, sherlocking is already prevented by the enforcement of those rules.

To tackle perceived misuse of gatekeepers’ market position, the European Union’s Digital Markets Act (DMA) introduced a ban on sherlocking.[4] Similar concerns have also motivated requests for intervention in the United States,[5] Australia,[6] and Japan.[7] In seeking to address at least two different theories of gatekeepers’ alleged conflicts of interest, these proposed bans on exploiting access to business users’ data are not necessarily limited to the risk of product imitation, but may include any business decision whatsoever that a platform may make while relying on that data.

In parallel with the regulatory initiatives, the conduct at issue has also been investigated in some antitrust proceedings, which appear to seek the very same twofold goal. In particular, in November 2020, the European Commission sent a statement of objections to Amazon that argued the company had infringed antitrust rules through the systematic use of nonpublic business data from independent retailers who sell on the Amazon online marketplace in order to benefit Amazon’s own retail business, which directly competes with those retailers.[8] A similar investigation was opened by the UK Competition and Markets Authority (CMA) in July 2022.[9]

Further, as part of the investigation opened into Apple’s App Store rule requiring developers to use Apple’s in-app purchase mechanism to distribute paid apps and/or paid digital content, the European Commission also showed interest in evaluating whether Apple’s conduct might disintermediate competing developers from relevant customer data, while Apple obtained valuable data about those activities and its competitors’ offers.[10] The European Commission and UK CMA likewise launched an investigation into Facebook Marketplace, with accusations that Meta used data gathered from advertisers in order to compete with them in markets where the company is active, such as classified ads.[11]

There are two primary reasons these antitrust proceedings are relevant. First, many of the prohibitions envisaged in regulatory interventions (e.g., DMA) clearly took inspiration from the antitrust investigations, thus making it important to explore the insights that competition authorities may provide to support an outright ban. Second, given that regulatory intervention will be implemented alongside competition rules (especially in Europe) rather than displace them,[12] sherlocking can be assessed at both the EU and national level against dominant players that are not eligible for “gatekeeper” designation under the DMA. For those non-gatekeeper firms, the practice may still be investigated by antitrust authorities and assessed before courts, aside from the DMA’s per se prohibition. And, of course, investigations and assessments of sherlocking could also be made even in those jurisdictions where there isn’t an outright ban.

The former is well illustrated by the German legislature’s decision to empower its national competition authority with a new tool to tackle abusive practices that are similar and functionally equivalent to the DMA.[13] Indeed, as of January 2021, the Bundeskartellamt may identify positions of particular market relevance (undertakings of “paramount significance for competition across markets”) and assess their possible anticompetitive effects on competition in those areas of digital ecosystems in which individual companies may have a gatekeeper function. Both the initiative’s aims and its list of practices are similar to the DMA. They are distinguished primarily by the fact that the German list is exhaustive, and the practices at issue are not prohibited per se, but are subject to a reversal of the burden of proof, allowing firms to provide objective justifications. For the sake of this analysis, within the German list, one provision prohibits designated undertakings from “demanding terms and conditions that permit … processing data relevant for competition received from other undertakings for purposes other than those necessary for the provision of its own services to these undertakings without giving these undertakings sufficient choice as to whether, how and for what purpose such data are processed.”[14]

Unfortunately, none of the above-mentioned EU antitrust proceedings have concluded with a final decision that addresses the merits of sherlocking. This precludes evaluating whether the practice would have survived before the courts. Regarding the Apple investigation, the European Commission dropped the case over App Store rules and issued a new statement of objections that no longer mentions sherlocking.[15] Further, the European Commission and the UK CMA accepted the commitments offered by Amazon to close those investigations.[16] The CMA likewise accepted the commitments offered by Meta.[17]

Those outcomes can be explained by the DMA’s recent entry into force. Indeed, because of the need to comply with the new regulation, players designated as gatekeepers likely have lost interest in challenging antitrust investigations that target the very same conduct prohibited by the DMA.[18] After all, given that the DMA does not allow any efficiency defense against the listed prohibitions, even a successful appeal against an antitrust decision would be a pyrrhic victory. From the opposite perspective, the same applies to the European Commission, which may decide to save time, costs, and risks by dropping an ongoing case against a company designated as a gatekeeper under the DMA, knowing that the conduct under investigation will be prohibited in any case.

Nonetheless, despite the lack of any final decision on sherlocking, these antitrust assessments remain relevant. As already mentioned, the DMA does not displace competition law and, in any case, dominant platforms not designated as gatekeepers under the DMA still may face antitrust investigations over sherlocking. This applies all the more to jurisdictions, such as the United States, that are evaluating DMA-like legislative initiatives (e.g., the American Innovation and Choice Online Act, or “AICOA”).

Against this background, drawing on recent EU cases, this paper questions the alleged anticompetitive implications of sherlocking, as well as claims that the practice fails to comply with existing antitrust rules.

First, the paper illustrates that prohibitions on the use of nonpublic third-party business data would cover two different theories that should be analyzed separately. Whereas a broader case involves all the business decisions adopted by a dominant platform because of such preferential access (e.g., the launch of new products or services, the development or cessation of existing products or services, the calibration of pricing and management systems), a more specific case deals solely with the adoption of a copycat strategy. By conflating these theories in support of a blanket ban that condemns any use of nonpublic third-party business data, EU antitrust authorities are fundamentally motivated by the same policy goal pursued by the DMA—i.e., to impose a neutrality regime on large online platforms. The competitive implications differ significantly, however, as adopting copycat strategies may only affect intra-platform competition, while using said data to improve other business decisions could also affect inter-platform competition.

Second, the paper shows that, in both of these scenarios, the welfare effects of sherlocking are unclear. Notably, exploiting certain data to better understand the market could help a platform to develop new products and services, to improve existing products and services, or more generally to be more competitive with respect to both business users and other platforms. As such outcomes would benefit consumers in terms of price and quality, any competitive advantage achieved by the hybrid platform could be considered unlawful only if it is not achieved on the merits. In a similar vein, if sherlocking is used by a hybrid platform to deliver replicas of its business users’ products and services, that would likely provide short-term procompetitive effects benefitting consumers with more choice and lower prices. In this case, the only competitive harm that would justify an antitrust intervention resides in (uncertain) negative long-term effects on innovation.

As a result, in either scenario, an outright ban on sherlocking, such as the one enshrined in the DMA, is economically unsound, since it would clearly harm consumers.

The paper is structured as follows. Section II describes the recent antitrust investigations of sherlocking, illustrating the various scenarios that might include the use of third-party business data. Section III investigates whether sherlocking may be considered outside the scope of competition on the merits for bringing competitive advantages to platforms solely because of their hybrid business model. Section IV analyzes sherlocking as a copycat strategy by investigating the ambiguous welfare effects of copying in digital markets and providing an antitrust assessment of the practice at issue. Section V concludes.

II. Antitrust Proceedings on Sherlocking: Platform Neutrality and Copycat Competition

Policymakers’ interest in sherlocking is part of a larger debate over potentially unfair strategies that large online platforms may deploy because of their dual role as an unavoidable trading partner for business users and a rival in complementary markets.

In this scenario, as summarized in Table 1, the DMA outlaws sherlocking, establishing that to “prevent gatekeepers from unfairly benefitting from their dual role,”[19] they are restrained from using, in competition with business users, “any data that is not publicly available that is generated or provided by those business users in the context of their use of the relevant core platform services or of the services provided together with, or in support of, the relevant core platform services, including data generated or provided by the customers of those business users.”[20] Recital 46 further clarifies that the “obligation should apply to the gatekeeper as a whole, including but not limited to its business unit that competes with the business users of a core platform service.”

A similar provision was included in the American Innovation and Choice Online Act (AICOA), which was considered, but not ultimately adopted, in the 117th U.S. Congress. AICOA, however, would limit the scope of the ban to the offer of products or services that would compete with those offered by business users.[21] Concerns about copycat strategies were also reported in the U.S. House of Representatives’ investigation of the state of competition in digital markets as supporting the request for structural-separation remedies and line-of-business restrictions to eliminate conflicts of interest where a dominant intermediary enters markets that place it in competition with dependent businesses.[22] Interestingly, however, in the recent complaint filed by the U.S. Federal Trade Commission (FTC) and 17 state attorneys general against Amazon that accuses the company of having deployed an interconnected strategy to block off every major avenue of competition (including price, product selection, quality, and innovation), there is no mention of sherlocking among the numerous unfair practices under investigation.[23]

Evaluating regulatory-reform proposals for digital markets, the Australian Competition and Consumer Commission (ACCC) also highlighted the risk of sherlocking, arguing that it could have an adverse effect on competition, notably on rivals’ ability to compete, when digital platforms exercise their strong market position to utilize nonpublic data to free ride on the innovation efforts of their rivals.[24] Therefore, the ACCC suggested adopting service-specific codes to address self-preferencing by, for instance, imposing data-separation requirements to restrain dominant app-store providers from using commercially sensitive data collected from the app-review process to develop their own apps.[25]

Finally, on a comparative note, it is also useful to mention the proposals advanced by the Japanese Fair Trade Commission (JFTC) in its recent market-study report on mobile ecosystems.[26] In order to ensure equal footing among competitors, the JFTC specified that its suggestion to prevent Google and Apple from using nonpublic data generated by other developers’ apps aims at pursuing two purposes. Such a ban would, indeed, concern not only use of the data for the purpose of developing competing apps, products, and services, but also its use for developing their own apps, products, and services.

TABLE 1: Legislative Initiatives and Proposals to Ban Sherlocking

As noted above, sherlocking recently emerged as an antitrust offense in three investigations launched by the European Commission and the UK CMA.

In the first case, Amazon’s reliance on marketplace sellers’ nonpublic business data was alleged to distort fair competition on its platform and prevent effective competition. In its preliminary findings, the Commission argued that Amazon takes advantage of its hybrid business model, leveraging its access to nonpublic third-party sellers’ data (e.g., the number of ordered and shipped units of products; sellers’ revenues on the marketplace; the number of visits to sellers’ offers; data relating to shipping, to sellers’ past performance, and to other consumer claims on products, including the activated guarantees) to adjust its retail offers and strategic business decisions to the detriment of third-party sellers, which are direct competitors on the marketplace.[27] In particular, the Commission was concerned that Amazon uses such data for its decision to start and end sales of a product, for its pricing system, for its inventory-planning and management system, and to identify third-party sellers that Amazon’s vendor-recruitment teams should approach to invite them to become direct suppliers to Amazon Retail. To address the data-use concern, Amazon committed not to use nonpublic data relating to, or derived from, independent sellers’ activities on its marketplace for its retail business and not to use such data for the purposes of selling branded goods, as well as its private-label products.[28]

A parallel investigation ended with similar commitments in the UK.[29] According to the UK CMA, Amazon’s access to and use of nonpublic seller data could result in a competitive advantage for Amazon Retail arising from its operation of the marketplace, rather than from competition on the merits, and may lead to relevant adverse effects on competition. Notably, it was alleged this could result in a reduction in the scale and competitiveness of third-party sellers on the Amazon Marketplace; a reduction in the number and range of product offers from third-party sellers on the Amazon Marketplace; and/or less choice for consumers, because they would be offered lower-quality goods and/or pay higher prices than would otherwise be the case.

It is also worth mentioning that, by determining that Amazon is an undertaking of paramount significance for competition across markets, the Bundeskartellamt emphasized the competitive advantage deriving from Amazon’s access to nonpublic data, such as Glance Views, sales figures, sale quantities, cost components of products, and reorder status.[30] Among other things, with particular regard to Amazon’s hybrid role, the Bundeskartellamt noted that the preferential access to competitively sensitive data “opens up the possibility for Amazon to optimize its own-brand assortment.”[31]

A second investigation involved Apple and its App Store rule.[32] According to the European Commission, the mandatory use of Apple’s own proprietary in-app purchase system (IAP) would, among other things, grant Apple full control over the relationship its competitors have with customers, thus disintermediating those competitors from customer data and allowing Apple to obtain valuable data about the activities and offers of its competitors.

Finally, Meta faced antitrust proceedings in both the EU and the UK.[33] The focus was on Facebook Marketplace—i.e., an online classified-ads service that allows users to advertise goods for sale. According to the European Commission and the CMA, Meta unilaterally imposes unfair trading conditions on competing online-classified ads services that advertise on Facebook or Instagram. These terms and conditions, which authorize Meta to use ads-related data derived from competitors for the benefit of Facebook Marketplace, are considered unjustified, as they impose an unnecessary burden on competitors and only benefit Facebook Marketplace. The suspicion is that Meta has used advertising data from Facebook Marketplace competitors for the strategic planning, product development, and launch of Facebook Marketplace, as well as for Marketplace’s operation and improvement.

Overall, these investigations share many features. The concerns about third-party business-data use, as well as about other forms of self-preferencing, revolve around the competitive advantages that accrue to a dominant platform because of its dual role. Such advantages are considered unfair, as they are not the result of the merits of a player, but derive purely and simply from its role as an important gateway to reach end users. Moreover, this access to valuable business data is not reciprocal. The feared risk is the marginalization of business users competing with gatekeepers on the gatekeepers’ platforms and, hence, the alleged harm to competition is the foreclosure of rivals in complementary markets (horizontal foreclosure).

The focus of these investigations was well-illustrated by the European Commission’s decision on Amazon’s practice.[34] The Commission’s concern was about the “data delta” that Amazon may exploit, namely the additional data related to third-party sellers’ listings and transactions that are not available to, and cannot be replicated by, the third-party sellers themselves, but are available to and used by Amazon Retail for its own retail operations.[35] Contrary to Amazon Retail—which, according to Commission’s allegations, would have full access to and would use such individual, real-time data of all its third-party sellers to calibrate its own retail decisions—sellers would have access only to their own individual listings and sales data. As a result, the Commission came to the (preliminary) conclusion that real-time access to and use of such volume, variety, and granularity of non-publicly available data from its retail competitors generates a significant competitive advantage for Amazon Retail in each of the different decisional processes that drive its retail operations.[36]

On a closer look, however, while antitrust authorities seem to target the use of nonpublic third-party business data as a single theory of harm, their allegations cover two different scenarios along the lines of what has already been examined with reference to the international legislative initiatives and proposals. Indeed, the Facebook Marketplace case does not involve an allegation of copying, as Meta is accused of gathering data from its business users to launch and improve its ads service, instead of reselling goods and services.

FIGURE 1: Sherlocking in Digital Markets

As illustrated above in Figure 1, one scenario claims that preferential data use helps dominant players calibrate their business decisions in general, while the other involves the use of such data for a pure copycat strategy aimed at an entire product or service, or at some of its specific features.

In both scenarios, the aim of the investigations is to ensure platform neutrality. Accordingly, as shown by the accepted commitments, the envisaged solution for antitrust authorities is to impose data-separation requirements that restrain dominant platforms from using third-party commercially sensitive data. Putting aside that these investigations concluded with commitments from the firms, however, their chances of success before a court differ significantly depending on whether they challenge a product-imitation strategy or any business decision adopted because of the “data delta.”

A. Sherlocking and Unconventional Theories of Harm for Digital Markets

Before analyzing how existing competition-law rules could be applied to the various scenarios involving the use of third-party business data, it is worth providing a brief overview of the framework in which the assessment of sherlocking is conducted. As competition in the digital economy is increasingly a competition among ecosystems,[37] a lively debate has emerged on the capacity of traditional antitrust analysis to adequately capture the peculiar features of digital markets. Indeed, the combination of strong economies of scale and scope; indirect network effects; data advantages and synergies across markets; and portfolio effects that facilitate ecosystem development all contribute to making digital markets highly concentrated, prone to tipping, and not easily contestable.[38] As a consequence, it has been suggested that addressing these distinctive features of digital markets requires an overhaul of the antitrust regime.

Such discussions center on the antitrust toolkit and the theories of harm used to illustrate whether and how a particular practice, agreement, or merger is anticompetitive. Notably, at issue is whether traditional antitrust theories of harm are fit for purpose or whether novel theories of harm should be developed in response to emerging digital ecosystems. The latter would require looking at the competitive impact of expanding, protecting, or strengthening an ecosystem’s position, and particularly at whether such expansion serves to exploit a network of capabilities and to control access to key inputs and components.[39]

A significant portion of recent discussions around developing novel theories of harm to better address the characteristics of digital-business models and markets has been devoted to the topic of merger control—in part a result of the impressive number of acquisitions observed in recent years.[40] In particular, the focus has been on analyzing conglomerate mergers that involve acquiring a complementary or unrelated asset, which have traditionally been assumed to raise less-significant competition concerns.

In this regard, an ecosystem-based theory seems to have guided the Bundeskartellamt in its assessment of Meta’s acquisition of Kustomer[41] and the CMA in Microsoft/Activision.[42] A more recent example is the European Commission’s decision to prohibit the proposed Booking/eTraveli merger, where the Commission explicitly noted that the transaction would have allowed Booking to expand its travel-services ecosystem.[43] The Commission’s concerns related primarily to the so-called “envelopment” strategy, in which a prominent platform within a specific market broadens its range of services into other markets where there is a significant overlap of customer groups already served by the platform.[44]

Against this background, putative self-preferencing harms represent one of the European Commission’s primary (albeit contentious)[45] attempts to develop new theories of harm built on conglomerate platforms’ ability to bundle services or use data from one market segment to inform product development in another.[46] Originally formulated in the Google Shopping decision,[47] the theory of harm of (leveraging through) self-preferencing has subsequently inspired the DMA, which targets different forms of preferential treatment, including sherlocking.

In particular, it is asserted that platforms may use self-preferencing to adopt a leveraging strategy with a twofold anticompetitive effect—that is, excluding or impeding rivals from competing with the platform (defensive leveraging) and extending the platform’s market power into associated markets (offensive leveraging). These goals can be pursued because of the unique role that some large digital platforms play. That is, they not only enjoy strategic market status by controlling ecosystems of integrated complementary products and services, which are crucial gateways for business users to reach end users, but they also perform a dual role as both a critical intermediary and a player active in complementors’ markets. Therefore, conflicts of interest may provide incentives for large vertically integrated platforms to favor their own products and services over those of their competitors.[48]

The Google Shopping theory of harm, while not yet validated by the Court of Justice of the European Union (CJEU),[49] has also found its way into merger analysis, as demonstrated by the European Commission’s recent assessment of iRobot/Amazon.[50] In its statement of objections, the Commission argued that the proposed acquisition of iRobot may give Amazon the ability and incentive to foreclose iRobot’s rivals by engaging in several foreclosing strategies to prevent them from selling robot vacuum cleaners (RVCs) on Amazon’s online marketplace and/or by degrading such rivals’ access to that marketplace. In particular, the Commission found that Amazon could deploy such self-preferencing strategies as delisting rival RVCs; reducing rival RVCs’ visibility in both organic and paid results displayed in Amazon’s marketplace; limiting access to certain widgets or commercially attractive labels; and/or raising the costs of iRobot’s rivals to advertise and sell their RVCs on Amazon’s marketplace.[51]

Sherlocking belongs to this framework of analysis and can be considered a form of self-preferencing, specifically because of the lack of reciprocity in accessing sensitive data.[52] Indeed, while gatekeeper platforms have access to relevant nonpublic third-party business data as a result of their role as unavoidable trading partners, they leverage this information exclusively, without sharing it with third-party sellers, thus further exacerbating an already uneven playing field.[53]

III. Sherlocking for Competitive Advantage: Hybrid Business Model, Neutrality Regimes, and Competition on the Merits

Insofar as prohibitions of sherlocking center on the competitive advantages that platforms enjoy because of their dual role—thereby allowing some players to better calibrate their business decisions due to their preferential access to business users’ data—it should be noted that competition law does not impose a general duty to ensure a level playing field.[54] Further, a competitive advantage does not, in itself, amount to anticompetitive foreclosure under antitrust rules. Rather, foreclosure must not only be proved (in terms of actual or potential effects) but also assessed against potential benefits for consumers in terms of price, quality, and choice of new goods and services.[55]

Indeed, not every exclusionary effect is necessarily detrimental to competition.[56] Competition on the merits may, by definition, lead to the departure from the market, or the marginalization, of competitors that are less efficient and therefore less attractive to consumers from the point of view of, among other things, price, choice, quality, or innovation.[57] Automatically classifying any conduct with exclusionary effects as anticompetitive could well become a means to protect less-capable, less-efficient undertakings, and would in no way protect more meritorious undertakings—thereby potentially hindering a market’s competitiveness.[58]

As recently clarified by the CJEU regarding the meaning of “competition on the merits,” any practice that, in its implementation, holds no economic interest for a dominant undertaking except that of eliminating competitors must be regarded as outside the scope of competition on the merits.[59] Referring to the cases of margin squeezes and essential facilities, the CJEU added that the same applies to practices that a hypothetical equally efficient competitor is unable to adopt because that practice relies on using resources or means inherent to the holding of such a dominant position.[60]

Therefore, while antitrust cases on sherlocking set out to ensure a level playing field and platform neutrality, and thus center on the competitive advantages that a platform enjoys because of its dual role, merely implementing a hybrid business model does not automatically put such practices outside the scope of competition on the merits. The only exception, according to the interpretation provided in Bronner, is the presence of an essential facility—i.e., an input to which access is indispensable because technical, legal, or economic obstacles make it impossible, or at least unreasonably difficult, to duplicate.[61]

As a result, unless it is proved that the hybrid platform is an essential facility, sherlocking and other forms of self-preferencing cannot be considered prima facie outside the scope of competition on the merits, or otherwise unlawful. Rather, any assessment of sherlocking demands the demonstration of anticompetitive effects, which in turn requires finding an impact on efficient firms’ ability and incentive to compete. In the scenario at-issue, for instance, the access to certain data may allow a platform to deliver new products or services; to improve existing products or services; or more generally to compete more efficiently not only with respect to the platform’s business users, but also against other platforms. Such an increase in both intra-platform and inter-platform competition would benefit consumers in terms of lower prices, better quality, and a wider choice of new or improved goods and services—i.e., competition on the merits.[62]

In Facebook Marketplace, the European Commission and UK CMA challenged the terms and conditions governing the provision of display-advertising and business-tool services to which Meta required its business customers to sign up.[63] In their view, Meta abused its dominant position by imposing unfair trading conditions on its advertising customers, which authorized Meta to use ads-related data derived from the latter in a way that could afford Meta a competitive advantage on Facebook Marketplace that would not have arisen from competition on the merits. Notably, antitrust authorities argued that Meta’s terms and conditions were unjustified, disproportionate, and unnecessary to provide online display-advertising services on Meta’s platforms.

Therefore, rather than directly questioning the platform’s dual role or hybrid business model, the European Commission and UK CMA decided to rely on traditional case law, which considers unfair those clauses that are unjustifiably unrelated to the purpose of the contract, unnecessarily limit the parties’ freedom, are disproportionate, or are unilaterally imposed or seriously opaque.[64] This demonstrates that, outside the theory of harm based on the unfairness of terms and conditions, a hybrid platform’s use of nonpublic third-party business data to improve its own business decisions is generally consistent with antitrust provisions. Hence, an outright ban would be unjustified.

IV. Sherlocking to Mimic Business Users’ Products or Services

The second, and more intriguing, sherlocking scenario is illustrated by the Amazon Marketplace investigations and regards the original meaning of sherlocking—i.e., where a data advantage is used by a hybrid platform to mimic its business users’ products or services.

Where sherlocking charges assert that the practice allows some platforms to use business users’ data to compete against them by replicating their products or services, it should not be overlooked that the welfare effects of such a copying strategy are ambiguous. While the practice could benefit consumers in the short term by lowering prices and increasing choice, it may discourage innovation over the longer term if third parties anticipate being copied whenever they deliver successful products or services. Therefore, the success of an antitrust investigation essentially relies on demonstrating a harm to innovation that would induce business users to leave the market or stop developing their products and services. In other words, antitrust authorities should be able to demonstrate that, by allowing dominant platforms to free ride on their business users’ innovation efforts, sherlocking would negatively affect rivals’ ability to compete.

A. The Welfare Effects of Copying

The tradeoff between the short- and long-term welfare effects of copying has traditionally been analyzed in the context of the benefits and costs generated by intellectual-property protection.[65] In particular, the economic literature investigating the optimal life of patents[66] and copyrights[67] focuses on the efficient balance between dynamic benefits associated with innovation and the static costs of monopoly power granted by IPRs.
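
To fix ideas, a stylized version of this tradeoff (a sketch in the spirit of the optimal-patent-life literature, with notation introduced here purely for illustration) treats the problem as choosing a protection length T to maximize discounted welfare:

W(T) = \int_0^T e^{-rt} (CS_m + \pi_m) \, dt + \int_T^\infty e^{-rt} CS_c \, dt, \quad \text{subject to} \quad \int_0^T e^{-rt} \pi_m \, dt \ge F,

where \pi_m and CS_m denote profits and consumer surplus while exclusivity lasts, CS_c denotes consumer surplus once imitation is allowed, F is the innovator’s fixed cost, and r is the discount rate. Because CS_m + \pi_m < CS_c under monopoly pricing, welfare falls in T once innovation is secured, so the efficient protection length is the shortest one that still makes innovation worthwhile. The same access-versus-incentives logic reappears, in less tractable form, in the sherlocking debate.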

More recently, product imitation has instead been investigated in the different setting of digital markets, where dominant platforms adopting a hybrid business model may use third-party sellers’ market data to design and promote their own products over their rivals’ offerings. Indeed, some studies report that large online platforms may attempt to protect their market position by creating “kill zones” around themselves—i.e., by acquiring, copying, or eliminating their rivals.[68] In this novel setting, the welfare effects of copying are assessed not by reference to the presence and potential enforcement of IPRs, but as part of a strategy aimed at excluding rivals by exploiting the dual role of both umpire and player to gain preferential access to sensitive data and free ride on rivals’ innovative efforts.[69]

Even in this context, however, a challenging tradeoff must be considered. Indeed, while in the short term consumers may benefit from the platform’s imitation strategy in terms of lower prices and higher quality, they may be harmed in the longer term if third parties are discouraged from delivering new products and services. As a result, while there is empirical evidence of hybrid platforms successfully entering third parties’ adjacent market segments,[70] the extant academic literature finds the welfare implications of such moves to be ambiguous.
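
Framed in the simplest possible terms (again, an illustrative sketch rather than a result drawn from the studies surveyed below), the policy question is whether

\Delta W \approx \Delta CS_{short} - \delta \cdot p \cdot \Delta CS_{innov}

is positive or negative, where \Delta CS_{short} captures the immediate gains from lower prices and wider choice, \Delta CS_{innov} the value of the future innovation that copied sellers would otherwise deliver, p the probability that those sellers exit or cut back investment, and \delta a discount factor. An antitrust authority challenging sherlocking must, in effect, show that the second term dominates the first; as the literature reviewed below makes clear, neither the sign nor the magnitude of that comparison is obvious.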

A first strand of literature attempts to estimate the welfare impact of the hybrid business model. Notably, Andrei Hagiu, Tat-How Teh, and Julian Wright elaborated a model to address the potential implications of an outright ban on platforms’ dual mode, finding that such a structural remedy may harm consumer surplus and welfare even where the platform would otherwise engage in product imitation and self-preferencing.[71] According to the authors, banning the dual mode does not restore the third-party seller’s innovation incentives or the effective price competition between products, which are the putative harms caused by imitation and self-preferencing. Therefore, the authors concluded that interventions specifically targeting product imitation and self-preferencing would be preferable.

Germán Gutiérrez suggested that banning the dual mode would generate hardly any benefits for consumers, showing that, in the Amazon case, interventions that eliminate either the Prime program or product variety are likely to decrease welfare.[72]

Further, analyzing Amazon’s business model, Federico Etro found that the platform’s and consumers’ incentives are correctly aligned, and that Amazon’s business model of hosting sellers and charging commissions prevents the company from gaining through systematic self-preferencing for its private-label and first-party products.[73] In the same vein, looking at Amazon’s business model and monetization strategy, Patrick Andreoli-Versbach and Joshua Gans argued that the company does not have an obvious incentive to self-preference.[74] Indeed, Amazon’s profitability data show that, on average, the company’s operating margin is higher on third-party sales than on first-party retail sales.

Looking at how modeling details may yield different results with regard to the benefits and harms of the hybrid business model, Simon Anderson and Özlem Bedre-Defolie maintain that the platform’s choice to sell its own products benefits consumers by lowering prices when a monopoly platform hosts competitive fringe sellers, regardless of the platform’s position as a gatekeeper, whether sellers have an alternate channel to reach consumers, or whether alternate channels are perfect or imperfect substitutes for the platform channel.[75] On the other hand, the authors argue that platform product entry might harm consumers when a big seller with market power sells on its own channel and also on the platform. Indeed, in that case, the platform setting a seller fee before the big seller prices its differentiated products introduces double markups on the big seller’s platform-channel price and leaves some revenue to the big seller.

Studying whether Amazon engages in self-preferencing on its marketplace by favoring its own brands in search results, Chiara Farronato, Andrey Fradkin, and Alexander MacKay demonstrate empirically that Amazon brands remain about 30% cheaper and have 68% more reviews than other similar products.[76] The authors acknowledge, however, that their findings do not imply that consumers are hurt by Amazon brands’ position in search results.

Another strand of literature specifically tackles the welfare effects of sherlocking. In particular, Erik Madsen and Nikhil Vellodi developed a theoretical framework to demonstrate that a ban on insider imitation can either stifle or stimulate innovation, depending on the nature of innovation.[77] Specifically, the ban could stimulate innovation for experimental product categories, while reducing innovation in incremental product markets, since the former feature products with a large chance of superstar demand and the latter generate mostly products with middling demand.

Federico Etro maintains that the tradeoffs at-issue are too complex to be solved with simple interventions, such as bans on dual mode, self-preferencing, or copycatting.[78] Indeed, it is difficult to conclude that Amazon entry is biased to expropriate third-party sellers or that bans on dual mode, self-preferencing, or copycatting would benefit consumers, because they either degrade services and product variety or induce higher prices or commissions.

Similar results are provided by Jay Pil Choi, Kyungmin Kim, and Arijit Mukherjee, who developed a tractable model of a platform-run marketplace where the platform charges a referral fee to the sellers for access to the marketplace, and may also subsequently launch its own private-label product by copying a seller.[79] The authors found that a policy to either ban hybrid mode or only prohibit information use for the launch of private-label products may produce negative welfare implications.

Further, Radostina Shopova argues that, when introducing a private label, the marketplace operator does not have an incentive to distort competition and foreclose the outside seller, but does have an incentive to lower the fees charged to the outside seller and to vertically differentiate its own product in order to protect the seller’s channel.[80] Even when the intermediary is able to perfectly mimic the quality of the outside seller and monopolize its product space, the intermediary prefers to differentiate its offer and chooses a lower quality for the private-label product. Accordingly, as the purpose of private labels is to offer a lower-quality version of products aimed at consumers with a lower willingness to pay, a marketplace operator does not have an incentive to distort competition in favor of its own product and foreclose the seller of the original higher-quality product.

In addition, according to Jean-Pierre Dubé, curbing the development of private-label programs would harm consumers, and Amazon’s practices amount to textbook retailing, as they follow an off-the-shelf approach to managing private-label products that is standard for many retail chains in the West.[81] As a result, singling out Amazon’s practices would set a double standard.

Interestingly, such findings about the predictors and effects of Amazon’s entry into competition with third-party merchants on its own marketplace are confirmed by the only empirical study developed so far. In particular, analyzing the Home & Kitchen department of the German version of Amazon Marketplace between 2016 and 2021, Gregory S. Crawford, Matteo Courthoud, Regina Seibel, and Simon Zuzek find that Amazon’s entry strategy was more consistent with making Marketplace more attractive to consumers than with expropriating third-party merchants.[82] Notably, the study showed that, comparing Amazon’s entry decisions with those of the largest third-party merchants, Amazon tends to enter low-growth and low-quality products, which is consistent with a strategy that seeks to make Marketplace more attractive by expanding variety, lessening third-party market power, and/or enhancing product availability. The authors therefore found that Amazon’s entry on Amazon Marketplace had no systematic adverse effects and caused a mild market expansion.

Massimo Motta and Sandro Shelegia explored the interactions between copying and acquisitions, finding that the former (or the mere threat of copying) can modify the outcome of an acquisition negotiation.[83] According to their model, there could be both static and dynamic incentives for an incumbent to introduce a copycat version of a complementary product. The static rationale consists of lowering the price of the complementary product in order to capture more rents from it, while the dynamic incentive consists of harming a potential rival’s prospects of developing a substitute. The latter may, in turn, affect the direction the entrant takes toward innovation. Anticipating the incumbent’s copying strategy, the entrant may shift resources away from improvements that would compete with the incumbent’s primary product and toward developing complementary products.

Jingcun Cao, Avery Haviv, and Nan Li analyzed the opposite scenario—i.e., copycats that seek to mimic the design and user experience of incumbents’ successful products.[84] The authors find empirically that, on average, copycat apps do not have a significant effect on the demand for incumbent apps and that, as with traditional counterfeit products, they may generate a positive demand spillover toward authentic apps.

Massimo Motta also investigated the potential foreclosure effects of copycat strategies adopted by platforms committed to non-discriminatory terms of access for third parties (e.g., Apple App Store, Google Play, and Amazon Marketplace).[85] Notably, according to Motta, when a third-party seller is particularly successful and the platform is unable to raise the fees and commissions paid by that seller, the platform may prefer to copy its product or service to extract more profits from users, rather than rely solely on third-party sales. The author acknowledged, however, that even though this practice may create an incentive for self-preferencing, it does not necessarily have anticompetitive effects. Indeed, the welfare effects of the copying strategy are a priori ambiguous.[86] On the one hand, the platform’s copying of a third-party product benefits consumers by increasing variety and competition among products; on the other hand, copying might be wasteful for society, in that it entails a fixed cost and may discourage innovation if rivals anticipate that they will be systematically copied whenever they have a successful product.[87] On balance, introducing a copycat version of a product offered by a firm in an adjacent market might well turn out to be procompetitive.

B. Antitrust Assessment: Competition, Innovation, and Double Standards

The economic literature has demonstrated that the rationale and welfare effects of sherlocking by hybrid platforms are decidedly ambiguous. Against concerns about the foreclosure of rivals, some studies provide a different narrative, illustrating that such a strategy is more consistent with making the platform more attractive to consumers (by differentiating the quality and pricing of the offer) than with expropriating business users.[88] Furthermore, copies, imitations, and replicas undoubtedly benefit consumers with more choice and lower prices.

Therefore, the only way to consider sherlocking anticompetitive is by demonstrating that its long-term deterrent effects on innovation (i.e., reducing rivals’ incentives to invest in new products and services) outweigh consumers’ short-term advantages.[89] Moreover, such deterrent effects must not be merely hypothetical, as a finding of abuse cannot be based on a mere possibility of harm.[90] In any case, such complex tradeoffs are at odds with a blanket ban.[91]

Moreover, assessments of the potential impact of sherlocking on innovation cannot disregard the role of IPRs—which are, by design, the primary legal means of promoting innovation. From this perspective, intellectual-property protection is itself best characterized as a tradeoff. Indeed, the economic rationale of IPRs (in particular, of patents and copyrights) involves, among other things, a tradeoff between access and incentives—i.e., between short-term competitive restrictions and long-term innovative benefits.[92]

According to the traditional incentive-based theory of intellectual property, free riding would represent a dangerous threat that justifies the exclusive rights granted by intellectual-property protection. As a consequence, so long as copycat expropriation does not infringe IPRs, it should be presumed legitimate and procompetitive. Indeed, such free riding is more of an intellectual-property issue than a competitive concern.

In addition, to strike a fair balance between restricting competition and providing incentives to innovate, the exclusive rights granted by IPRs are not unlimited in terms of duration, nor in terms of lawful (although not authorized) uses of the protected subject matter. Under the doctrine of fair use, for instance, reverse engineering represents a legitimate way to obtain information about a firm’s product, even if the intended result is to produce a directly competing product that may steer customers away from the initial product and the patented invention.

Outside of reverse engineering, copying is legitimately exercised once IPRs expire, when copycat competitors can reproduce previously protected elements. As a result of the competitive pressure exerted by new rivals, holders of expired IPRs may react by seeking solutions designed to block, or at least limit, the circulation of rival products. They could, for example, apply for other IPRs covering aspects or functionalities different from those previously protected. They could also bring (sometimes specious) legal action for infringement of the new IPR or for unfair competition by slavish imitation. For these reasons, there have been occasions where copycat competitors have received protection from antitrust authorities against sham litigation brought by IPR holders concerned about losing margins due to pricing pressure from copycats.[93]

Finally, within the longstanding debate on the intersection of intellectual-property protection and competition, EU antitrust authorities have traditionally been unsympathetic toward restrictions imposed by IPRs. The success of the essential-facility doctrine (EFD) is the most telling example of this attitude, as its application in the EU has been extended to IPRs. As a matter of fact, the EFD represents the main antitrust tool for overseeing intellectual property in the EU.[94]

After Microsoft, EU courts have substantially dismantled one of the “exceptional circumstances” previously elaborated in Magill and specifically introduced for cases involving IPRs, with the aim of safeguarding a balance between restrictions to access and incentives to innovate. Whereas the CJEU established in Magill that refusal to grant an IP license should be considered anticompetitive if it prevents the emergence of a new product for which there is potential consumer demand, in Microsoft, the General Court considered such a requirement met even when access to an IPR is necessary for rivals to merely develop improved products with added value.

Given this background, recent competition-policy concerns about sherlocking are surprising. To briefly recap, the practice at-issue increases competition in the short term, but may affect incentives to innovate in the long term. With regard to the latter, however, the practice neither involves products protected by IPRs nor constitutes a slavish imitation that may be caught under unfair-competition laws.

The case of Amazon, which has received considerable media coverage, is illustrative of the relevance of IP protection. Amazon has been accused of cloning batteries, power strips, wool runner shoes, everyday sling bags, camera tripods, and furniture.[95] One may wonder what kind of innovation should be safeguarded in these cases against potential copies. Admittedly, such examples appear consistent with the findings of the empirical study by Crawford et al. discussed above, which indicate that Amazon tends to enter low-quality products in order to expand variety on the Marketplace and make it more attractive to consumers.

Nonetheless, if an IPR is involved, right holders have proper means to protect their products against infringement. Indeed, one of the allegedly targeted companies (Williams-Sonoma) did file a complaint for design and trademark infringement, claiming that Amazon had copied a chair (the Orb Dining Chair) sold by its West Elm brand. According to Williams-Sonoma, the Upholstered Orb Office Chair—which Amazon began selling under its Rivet brand in 2018—was so similar that the ordinary observer would be confused by the imitation.[96] If, instead, the copycat strategy does not infringe any IPR, the potential impact on innovation might not be considered particularly worrisome—at least at first glance.

Further, neither the degree to which third-party business data are truly unavailable nor the degree to which they facilitate copying is clear-cut. For instance, in the case of Amazon, public product reviews supply a great deal of information[97] and, regardless of the fact that a third party is selling a product on the Marketplace, anyone can obtain an item for the purposes of reverse engineering.[98]

In addition, antitrust authorities are used to intervening against opportunistic behavior by IPR holders. European competition authorities, in particular, have never seemed especially sympathetic to the motives of inventors and creators when weighed against the need to encourage maximum market openness.

It should also be noted that cloning is a common strategy in traditional markets (e.g., food products)[99] and has been the subject of longstanding controversies between high-end fashion brands and fast-fashion brands (e.g., Zara, H&M).[100] Furthermore, brick-and-mortar retailers also introduce private labels and use other brands’ sales records in deciding what to produce.[101]

So, what makes sherlocking so different and dangerous when deployed in digital markets as to push competition authorities to contradict themselves?[102]

The double standard against sherlocking reflects the same concern, and pursues the same goal, as the various other attempts to forbid any form of self-preferencing in digital markets. Namely, antitrust investigations of sherlocking are fundamentally driven by a bias against hybrid and vertically integrated players. The investigations rely on the assumption that conflicts of interest have anticompetitive implications and that, therefore, platform neutrality should be promoted to ensure the neutrality of the competitive process.[103] Accordingly, hostility toward sherlocking may involve both of the illustrated scenarios—i.e., the use of nonpublic third-party business data either to inform any business decision or, more narrowly, to pursue copycat strategies.

As a result, however, competition authorities end up challenging a specific business model rather than the specific practice at-issue. The latter brings undisputed competitive benefits in terms of lower prices and wider consumer choice, which should therefore be balanced against any potential exclusionary risks. As the CJEU has pointed out, the concept of competition on the merits:

…covers, in principle, a competitive situation in which consumers benefit from lower prices, better quality and a wider choice of new or improved goods and services. Thus, … conduct which has the effect of broadening consumer choice by putting new goods on the market or by increasing the quantity or quality of the goods already on offer must, inter alia, be considered to come within the scope of competition on the merits.[104]

Further, in light of the “as-efficient competitor” principle, competition on the merits may lead to “the departure from the market, or the marginalization of, competitors that are less efficient and so less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation.”[105]

It has been correctly noted that the “as-efficient competitor” principle is a reminder of what competition law is about and how it differs from regulation.[106] Competition law aims to protect a process, rather than engineering market structures to fulfill a particular vision of how an industry is to operate.[107] In other words, competition law does not target firms on the basis of size or status and does not infer harm from (market or bargaining) power or business model. Therefore, neither the dual role played by some large online platforms nor their preferential access to sensitive business data or their vertical integration, by themselves, create a competition problem. Competitive advantages deriving from size, status, power, or business model cannot be considered per se outside the scope of competition on the merits.

Some policymakers have sought to resolve these tensions in how competition law regards sherlocking by introducing or envisaging an outright ban. These initiatives and proposals have clearly been inspired by the antitrust investigations, but for the wrong reasons. Instead of taking stock of the challenging tradeoffs between short-term benefits and long-term risks that an antitrust assessment of sherlocking requires, they blamed competition law for not providing effective tools to achieve the policy goal of platform neutrality.[108] The regulatory solution, in other words, merely serves to bypass the traditional burden of proof of antitrust analysis and to achieve what competition-law enforcement cannot provide.

V. Conclusion

The bias against self-preferencing strikes again. Concerns about hybrid platforms’ potential conflicts of interest have led policymakers to seek prohibitions to curb different forms of self-preferencing, making the latter the symbol of the competition-policy zeitgeist in digital markets. Sherlocking shares this fate. Indeed, the DMA outlaws any use of business users’ nonpublic data, and similar proposals have been advanced in the United States, Australia, and Japan. Further, as with other forms of self-preferencing, these regulatory initiatives against sherlocking have been inspired by previous antitrust proceedings.

Drawing on these antitrust investigations, the present research shows the extent to which an outright ban on sherlocking is unjustified. Notably, the practice at-issue includes two different scenarios: the broad case in which a gatekeeper exploits its preferential access to business users’ data to better calibrate all of its business decisions and the narrow case in which such data is used to adopt a copycat strategy. In either scenario, the welfare effects and competitive implications of sherlocking are unclear.

Indeed, the use of certain data by a hybrid platform to improve business decisions generally should be classified as competition on the merits, and may yield an increase in both intra-platform (with respect to business users) and inter-platform (with respect to other platforms) competition. This would benefit consumers in terms of lower prices, better quality, and a wider choice of new or improved goods and services. In a similar vein, if sherlocking is used to deliver replicas of business users’ products or services, the anti-competitiveness of such a strategy may only result from a cumbersome tradeoff between short-term benefits (i.e., lower prices and wider choice) and negative long-term effects on innovation.

An implicit confirmation of the difficulties encountered in demonstrating the anti-competitiveness of sherlocking comes from the recent complaint issued by the FTC against Amazon.[109] Current FTC Chairwoman Lina Khan devoted a significant portion of her previous academic career to questioning Amazon’s practices (including the decision to introduce its own private labels inspired by third-party products)[110] and to supporting the adoption of structural-separation remedies to tackle platforms’ conflicts of interest that induce them to exploit their “systemic informational advantage (gleaned from competitors)” to thwart rivals and strengthen their own position by introducing replica products.[111] Despite these premises, and although the FTC’s complaint targets numerous practices belonging to what has been described as an interconnected strategy to block off every major avenue of competition, sherlocking is surprisingly off the radar.

Regulatory initiatives to ban sherlocking in order to ensure platform neutrality with respect to business users and a level playing field among rivals would sacrifice undisputed procompetitive benefits on the altar of policy goals that competition rules are not meant to pursue. Sherlocking therefore appears to be a perfect case study of the side effects of unwarranted interventions in digital markets.

[1] Giuseppe Colangelo, Antitrust Unchained: The EU’s Case Against Self-Preferencing, 72 GRUR International 538 (2023).

[2] Jacques Crémer, Yves-Alexandre de Montjoye, & Heike Schweitzer, Competition Policy for the Digital Era (2019), 7, https://op.europa.eu/en/publication-detail/-/publication/21dc175c-7b76-11e9-9f05-01aa75ed71a1/language-en (all links last accessed 3 Jan. 2024); UK Digital Competition Expert Panel, Unlocking Digital Competition (2019), 58, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[3] You’ve Been Sherlocked, The Economist (2012), https://www.economist.com/babbage/2012/07/13/youve-been-sherlocked.

[4] Regulation (EU) 2022/1925 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (2022), OJ L 265/1, Article 6(2).

[5] U.S. S. 2992, American Innovation and Choice Online Act (AICOA) (2022), Section 3(a)(6), available at https://www.klobuchar.senate.gov/public/_cache/files/b/9/b90b9806-cecf-4796-89fb-561e5322531c/B1F51354E81BEFF3EB96956A7A5E1D6A.sil22713.pdf. See also U.S. House of Representatives, Subcommittee on Antitrust, Commercial, and Administrative Law, Investigation of Competition in Digital Markets, Majority Staff Reports and Recommendations (2020), 164, 362-364, 378, available at https://democrats-judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf.

[6] Australian Competition and Consumer Commission, Digital Platform Services Inquiry Report on Regulatory Reform (2022), 125, https://www.accc.gov.au/about-us/publications/serial-publications/digital-platform-services-inquiry-2020-2025/digital-platform-services-inquiry-september-2022-interim-report-regulatory-reform.

[7] Japan Fair Trade Commission, Market Study Report on Mobile OS and Mobile App Distribution (2023), https://www.jftc.go.jp/en/pressreleases/yearly-2023/February/230209.html.

[8] European Commission, 10 Nov. 2020, Case AT.40462, Amazon Marketplace; see Press Release, Commission Sends Statement of Objections to Amazon for the Use of Non-Public Independent Seller Data and Opens Second Investigation into Its E-Commerce Business Practices, European Commission (2020), https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2077.

[9] Press Release, CMA Investigates Amazon Over Suspected Anti-Competitive Practices, UK Competition and Markets Authority (2022), https://www.gov.uk/government/news/cma-investigates-amazon-over-suspected-anti-competitive-practices.

[10] European Commission, 16 Jun. 2020, Case AT.40716, Apple – App Store Practices.

[11] Press Release, Commission Sends Statement of Objections to Meta over Abusive Practices Benefiting Facebook Marketplace, European Commission (2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7728; Press Release, CMA Investigates Facebook’s Use of Ad Data, UK Competition and Markets Authority (2021), https://www.gov.uk/government/news/cma-investigates-facebook-s-use-of-ad-data.

[12] DMA, supra note 4, Recital 10 and Article 1(6).

[13] GWB Digitalization Act, 18 Jan. 2021, Section 19a. On risks of overlaps between the DMA and the competition law enforcement, see Giuseppe Colangelo, The European Digital Markets Act and Antitrust Enforcement: A Liaison Dangereuse, 47 European Law Review 597.

[14] GWB, supra note 13, Section 19a (2)(4)(b).

[15] Press Release, Commission Sends Statement of Objections to Apple Clarifying Concerns over App Store Rules for Music Streaming Providers, European Commission (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_1217.

[16] European Commission, 20 Dec. 2022, Case AT.40462; Press Release, Commission Accepts Commitments by Amazon Barring It from Using Marketplace Seller Data, and Ensuring Equal Access to Buy Box and Prime, European Commission (2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7777; UK Competition and Markets Authority, 3 Nov. 2023, Case No. 51184, https://www.gov.uk/cma-cases/investigation-into-amazons-marketplace.

[17] UK Competition and Markets Authority, 3 Nov. 2023, Case AT.51013, https://www.gov.uk/cma-cases/investigation-into-facebooks-use-of-data.

[18] See, e.g., Gil Tono & Lewis Crofts, Amazon Data Commitments Match DMA Obligations, EU’s Vestager Says, mLex (2022), https://mlexmarketinsight.com/news/insight/amazon-data-commitments-match-dma-obligation-eu-s-vestager-says (reporting that Commissioner Vestager stated that Amazon’s data commitments definitively appear to match what would be asked within the DMA).

[19] DMA, supra note 4, Recital 46.

[20] Id., Article 6(2) (also stating that, for the purposes of the prohibition, non-publicly available data shall include any aggregated and non-aggregated data generated by business users that can be inferred from, or collected through, the commercial activities of business users or their customers, including click, search, view, and voice data, on the relevant core platform services or on services provided together with, or in support of, the relevant core platform services of the gatekeeper).

[21] AICOA, supra note 5.

[22] U.S. House of Representatives, supra note 5; see also Lina M. Khan, The Separation of Platforms and Commerce, 119 Columbia Law Review 973 (2019).

[23] U.S. Federal Trade Commission, et al. v. Amazon.com, Inc., Case No. 2:23-cv-01495 (W.D. Wash., 2023).

[24] Australian Competition and Consumer Commission, supra note 6, 125.

[25] Id., 124.

[26] Japan Fair Trade Commission, supra note 7, 144.

[27] European Commission, supra note 8. But see also Amazon, Supporting Sellers with Tools, Insights, and Data (2021), https://www.aboutamazon.eu/news/policy/supporting-sellers-with-tools-insights-and-data (claiming that the company is just using aggregate (rather than individual) data: “Just like our third-party sellers and other retailers across the world, Amazon also uses data to run our business. We use aggregated data about customers’ experience across the store to continuously improve it for everyone, such as by ensuring that the store has popular items in stock, customers are finding the products they want to purchase, or connecting customers to great new products through automated merchandising.”)

[28] European Commission, supra note 16.

[29] UK Competition and Markets Authority, supra notes 9 and 16.

[30] Bundeskartellamt, 5 Jul. 2022, Case B2-55/21, paras. 493, 504, and 518.

[31] Id., para. 536.

[32] European Commission, supra note 10.

[33] European Commission, supra note 11; UK Competition and Markets Authority, supra note 11.

[34] European Commission, supra note 16. In a similar vein, see also UK Competition and Markets Authority, supra note 16, paras. 4.2-4.7.

[35] European Commission, supra note 16, para. 111.

[36] Id., para. 123.

[37] Crémer, de Montjoye, & Schweitzer, supra note 2, 33-34.

[38] See, e.g., Marc Bourreau, Some Economics of Digital Ecosystems, OECD Hearing on Competition Economics of Digital Ecosystems (2020), https://www.oecd.org/daf/competition/competition-economics-of-digital-ecosystems.htm; Amelia Fletcher, Digital Competition Policy: Are Ecosystems Different?, OECD Hearing on Competition Economics of Digital Ecosystems (2020).

[39] See, e.g., Cristina Caffarra, Matthew Elliott, & Andrea Galeotti, ‘Ecosystem’ Theories of Harm in Digital Mergers: New Insights from Network Economics, VoxEU (2023), https://cepr.org/voxeu/columns/ecosystem-theories-harm-digital-mergers-new-insights-network-economics-part-1 (arguing that, in merger control, the implementation of an ecosystem theory of harm would require assessing how a conglomerate acquisition can change the network of capabilities (e.g., proprietary software, brand, customer base, data) in order to evaluate how easily competitors can obtain alternative assets to those being acquired); for a different view, see Geoffrey A. Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 George Mason Law Review 1281 (2021).

[40] See, e.g., Viktoria H.S.E. Robertson, Digital Merger Control: Adapting Theories of Harm, European Competition Journal (forthcoming); Caffarra, Elliott, & Galeotti, supra note 39; OECD, Theories of Harm for Digital Mergers (2023), available at www.oecd.org/daf/competition/theories-of-harm-for-digital-mergers-2023.pdf; Bundeskartellamt, Merger Control in the Digital Age – Challenges and Development Perspectives (2022), available at https://www.bundeskartellamt.de/SharedDocs/Publikation/EN/Diskussions_Hintergrundpapiere/2022/Working_Group_on_Competition_Law_2022.pdf?__blob=publicationFile&v=2; Elena Argentesi, Paolo Buccirossi, Emilio Calvano, Tomaso Duso, Alessia Marrazzo, & Salvatore Nava, Merger Policy in Digital Markets: An Ex Post Assessment, 17 Journal of Competition Law & Economics 95 (2021); Marc Bourreau & Alexandre de Streel, Digital Conglomerates and EU Competition Policy (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3350512.

[41] Bundeskartellamt, 11 Feb. 2022, Case B6-21/22, https://www.bundeskartellamt.de/SharedDocs/Entscheidung/EN/Fallberichte/Fusionskontrolle/2022/B6-21-22.html;jsessionid=C0837BD430A8C9C8E04D133B0441EB95.1_cid362?nn=4136442.

[42] UK Competition and Markets Authority, Microsoft / Activision Blizzard Merger Inquiry (2023), https://www.gov.uk/cma-cases/microsoft-slash-activision-blizzard-merger-inquiry.

[43] See European Commission, Commission Prohibits Proposed Acquisition of eTraveli by Booking (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4573 (finding that a flight product is a crucial growth avenue in Booking’s ecosystem, which revolves around its hotel online-travel-agency (OTA) business, as it would generate significant additional traffic to the platform, thus allowing Booking to benefit from existing customer inertia and making it more difficult for competitors to contest Booking’s position in the hotel OTA market).

[44] Thomas Eisenmann, Geoffrey Parker, & Marshall Van Alstyne, Platform Envelopment, 32 Strategic Management Journal 1270 (2011).

[45] See, e.g., Colangelo, supra note 1, and Pablo Ibáñez Colomo, Self-Preferencing: Yet Another Epithet in Need of Limiting Principles, 43 World Competition 417 (2020) (investigating whether and to what extent self-preferencing could be considered a new standalone offense in EU competition law); see also European Commission, Digital Markets Act – Impact Assessment Support Study (2020), 294, https://op.europa.eu/en/publication-detail/-/publication/0a9a636a-3e83-11eb-b27b-01aa75ed71a1/language-en (raising doubts about the novelty of this new theory of harm, which seems similar to the well-established leveraging theories of harm of tying and bundling, and margin squeeze).

[46] European Commission, supra note 45, 16.

[47] European Commission, 27 Jun. 2017, Case AT.39740, Google Search (Shopping).

[48] See General Court, 10 Nov. 2021, Case T-612/17, Google LLC and Alphabet Inc. v. European Commission, ECLI:EU:T:2021:763, para. 155 (stating that the general principle of equal treatment obligates vertically integrated platforms to refrain from favoring their own services as opposed to rival ones; nonetheless, the ruling framed self-preferencing as discriminatory abuse).

[49] In the meantime, however, see Opinion of the Advocate General Kokott, 11 Jan. 2024, Case C-48/22 P, Google v. European Commission, ECLI:EU:C:2024:14, paras. 90 and 95 (arguing that the self-preferencing of which Google is accused constitutes an independent form of abuse, albeit one that exhibits some proximity to cases involving margin squeezing).

[50] European Commission, Commission Sends Amazon Statement of Objections over Proposed Acquisition of iRobot (2023), https://ec.europa.eu/commission/presscorner/detail/en/IP_23_5990.

[51] The same concerns and approach have been shared by the CMA, although it reached a different conclusion, finding that the new merged entity would not have incentive to self-preference its own branded RVCs: see UK Competition and Markets Authority, Amazon / iRobot Merger Inquiry – Clearance Decision (2023), paras. 160, 188, and 231, https://www.gov.uk/cma-cases/amazon-slash-irobot-merger-inquiry.

[52] See European Commission, supra note 45, 304.

[53] Id., 313-314 (envisaging, among potential remedies, the imposition of a duty to make all data used by the platform for strategic decisions available to third parties); see also Désirée Klinger, Jonathan Bokemeyer, Benjamin Della Rocca, & Rafael Bezerra Nunes, Amazon’s Theory of Harm, Yale University Thurman Arnold Project (2020), 19, available at https://som.yale.edu/sites/default/files/2022-01/DTH-Amazon.pdf.

[54] Colangelo, supra note 1; see also Oscar Borgogno & Giuseppe Colangelo, Platform and Device Neutrality Regime: The New Competition Rulebook for App Stores?, 67 Antitrust Bulletin 451 (2022).

[55] See Court of Justice of the European Union (CJEU), 12 May 2022, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato, ECLI:EU:C:2022:379; 19 Apr. 2018, Case C-525/16, MEO v. Autoridade da Concorrência, ECLI:EU:C:2018:270; 6 Sep. 2017, Case C-413/14 P, Intel v. Commission, ECLI:EU:C:2017:632; 6 Oct. 2015, Case C-23/14, Post Danmark A/S v. Konkurrencerådet (Post Danmark II), ECLI:EU:C:2015:651; 27 Mar. 2012, Case C-209/10, Post Danmark A/S v. Konkurrencerådet (Post Danmark I), ECLI:EU:C:2012:172; for a recent overview of the EU case law, see also Pablo Ibáñez Colomo, The (Second) Modernisation of Article 102 TFEU: Reconciling Effective Enforcement, Legal Certainty and Meaningful Judicial Review, SSRN (2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4598161.

[56] CJEU, Intel, supra note 55, paras. 133-134.

[57] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 73.

[58] Opinion of Advocate General Rantos, 9 Dec. 2021, Case C-377/20, Servizio Elettrico Nazionale SpA v. Autorità Garante della Concorrenza e del Mercato, ECLI:EU:C:2021:998, para. 45.

[59] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 77.

[60] Id., paras. 77, 80, and 83.

[61] CJEU, 26 Nov. 1998, Case C-7/97, Oscar Bronner GmbH & Co. KG v. Mediaprint Zeitungs- und Zeitschriftenverlag GmbH & Co. KG, Mediaprint Zeitungsvertriebsgesellschaft mbH & Co. KG and Mediaprint Anzeigengesellschaft mbH & Co. KG, ECLI:EU:C:1998:569.

[62] CJEU, Servizio Elettrico Nazionale, supra note 55, para. 85.

[63] European Commission, supra note 11; UK Competition and Markets Authority, supra note 17, paras. 2.6, 4.3, and 4.7.

[64] See, e.g., European Commission, Case COMP D3/34493, DSD, para. 112 (2001) OJ L166/1; affirmed in GC, 24 May 2007, Case T-151/01, Der Grüne Punkt – Duales System Deutschland GmbH v. European Commission, ECLI:EU:T:2007:154 and CJEU, 16 Jul. 2009, Case C-385/07 P, ECLI:EU:C:2009:456; European Commission, Case IV/31.043, Tetra Pak II, paras. 105–08, (1992) OJ L72/1; European Commission, Case IV/29.971, GEMA III, (1982) OJ L94/12; CJEU, 27 Mar. 1974, Case 127/73, Belgische Radio en Televisie and société belge des auteurs, compositeurs et éditeurs v. SV SABAM and NV Fonior, ECLI:EU:C:1974:25, para. 15; European Commission, Case IV/26.760, GEMA II, (1972) OJ L166/22; European Commission, Case IV/26.760, GEMA I, (1971) OJ L134/15.

[65] See, e.g., Richard A. Posner, Intellectual Property: The Law and Economics Approach, 19 The Journal of Economic Perspectives 57 (2005).

[66] See, e.g., Richard Gilbert & Carl Shapiro, Optimal Patent Length and Breadth, 21 The RAND Journal of Economics 106 (1990); Pankaj Tandon, Optimal Patents with Compulsory Licensing, 90 Journal of Political Economy 470 (1982); Frederic M. Scherer, Nordhaus’ Theory of Optimal Patent Life: A Geometric Reinterpretation, 62 American Economic Review 422 (1972); William D. Nordhaus, Invention, Growth, and Welfare: A Theoretical Treatment of Technological Change, Cambridge, MIT Press (1969).

[67] See, e.g., Hal R. Varian, Copying and Copyright, 19 The Journal of Economic Perspectives 121 (2005); William R. Johnson, The Economics of Copying, 93 Journal of Political Economy 158 (1985); Stephen Breyer, The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs, 84 Harvard Law Review 281 (1970).

[68] Sai Krishna Kamepalli, Raghuram Rajan, & Luigi Zingales, Kill Zone, NBER Working Paper No. 27146 (2022), http://www.nber.org/papers/w27146; Massimo Motta & Sandro Shelegia, The “Kill Zone”: Copying, Acquisition and Start-Ups’ Direction of Innovation, Barcelona GSE Working Paper Series Working Paper No. 1253 (2021), https://bse.eu/research/working-papers/kill-zone-copying-acquisition-and-start-ups-direction-innovation; U.S. House of Representatives, Subcommittee on Antitrust, Commercial, and Administrative Law, supra note 5, 164; Stigler Committee for the Study of Digital Platforms, Market Structure and Antitrust Subcommittee (2019), 54, https://research.chicagobooth.edu/stigler/events/single-events/antitrust-competition-conference/digital-platforms-committee; contra, see Geoffrey A. Manne, Samuel Bowman, & Dirk Auer, Technology Mergers and the Market for Corporate Control, 86 Missouri Law Review 1047 (2022).

[69] See also Howard A. Shelanski, Information, Innovation, and Competition Policy for the Internet, 161 University of Pennsylvania Law Review 1663 (2013), 1999 (describing as “forced free riding” the situation occurring when a platform appropriates innovation by other firms that depend on the platform for access to consumers).

[70] See Feng Zhu & Qihong Liu, Competing with Complementors: An Empirical Look at Amazon.com, 39 Strategic Management Journal 2618 (2018).

[71] Andrei Hagiu, Tat-How Teh, and Julian Wright, Should Platforms Be Allowed to Sell on Their Own Marketplaces?, 53 RAND Journal of Economics 297 (2022), (the model assumes that there is a platform that can function as a seller and/or a marketplace, a fringe of small third-party sellers that all sell an identical product, and an innovative seller that has a better product in the same category as the fringe sellers and can invest more in making its product even better; further, the model allows the different channels (on-platform or direct) and the different sellers to offer different values to consumers; therefore, third-party sellers (including the innovative seller) can choose whether to participate on the platform’s marketplace, and whenever they do, can price discriminate between consumers that come to it through the marketplace and consumers that come to it through the direct channel).

[72] See Germán Gutiérrez, The Welfare Consequences of Regulating Amazon (2022), available at http://germangutierrezg.com/Gutierrez2021_AMZ_welfare.pdf (building an equilibrium model where consumers choose products on the Amazon platform, while third-party sellers and Amazon endogenously set prices of products and platform fees).

[73] See Federico Etro, Product Selection in Online Marketplaces, 30 Journal of Economics & Management Strategy 614 (2021), (relying on a model where a marketplace such as Amazon provides a variety of products and can decide, for each product, whether to monetize sales by third-party sellers through a commission or to become a seller on its platform, either by commercializing a private-label version or by purchasing from a vendor and reselling as a first-party retailer; as acknowledged by the author, a limitation of the model is that it assumes that the marketplace can set the profit-maximizing commission on each product; if this is not the case, third-party sales would be imperfectly monetized, which would increase the relative profitability of entry).

[74] Patrick Andreoli-Versbach & Joshua Gans, Interplay Between Amazon Store and Logistics, SSRN (2023) https://ssrn.com/abstract=4568024.

[75] Simon Anderson & Özlem Bedre-Defolie, Online Trade Platforms: Hosting, Selling, or Both?, 84 International Journal of Industrial Organization 102861 (2022).

[76] Chiara Farronato, Andrey Fradkin, & Alexander MacKay, Self-Preferencing at Amazon: Evidence From Search Rankings, NBER Working Paper No. 30894 (2023), http://www.nber.org/papers/w30894.

[77] See Erik Madsen & Nikhil Vellodi, Insider Imitation, SSRN (2023) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3832712 (introducing a two-stage model where the platform publicly commits to an imitation policy and the entrepreneur observes this policy and chooses whether to innovate: if she chooses not to, the game ends and both players earn profits normalized to zero; otherwise, the entrepreneur pays a fixed innovation cost to develop the product, which she then sells on a marketplace owned by the platform).

[78] Federico Etro, The Economics of Amazon, SSRN (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4307213.

[79] Jay Pil Choi, Kyungmin Kim, & Arijit Mukherjee, “Sherlocking” and Information Design by Hybrid Platforms, SSRN (2023), https://ssrn.com/abstract=4332558 (the model assumes that the platform chooses its referral fee at the beginning of the game and that the cost of entry is the same for both the seller and the platform).

[80] Radostina Shopova, Private Labels in Marketplaces, 89 International Journal of Industrial Organization 102949 (2023) (the model assumes that the market structure is given exogenously and that the quality of the seller’s product is also exogenous; therefore, the paper does not investigate how entry by a platform affects the innovation incentives of third-party sellers).

[81] Jean-Pierre Dubé, Amazon Private Brands: Self-Preferencing vs Traditional Retailing, SSRN (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4205988.

[82] Gregory S. Crawford, Matteo Courthoud, Regina Seibel, & Simon Zuzek, Amazon Entry on Amazon Marketplace, CEPR Discussion Paper No. 17531 (2022), https://cepr.org/publications/dp17531.

[83] Motta & Shelegia, supra note 68.

[84] Jingcun Cao, Avery Haviv, & Nan Li, The Spillover Effects of Copycat Apps and App Platform Governance, SSRN (2023), https://ssrn.com/abstract=4250292.

[85] Massimo Motta, Self-Preferencing and Foreclosure in Digital Markets: Theories of Harm for Abuse Cases, 90 International Journal of Industrial Organization 102974 (2023).

[86] Id.

[87] Id.

[88] See, e.g., Crawford, Courthoud, Seibel, & Zuzek, supra note 82; Etro, supra note 78; Shopova, supra note 80.

[89] Motta, supra note 85.

[90] Servizio Elettrico Nazionale, supra note 55, paras. 53-54; Post Danmark II, supra note 55, para. 65.

[91] Etro, supra note 78; see also Herbert Hovenkamp, The Looming Crisis in Antitrust Economics, 101 Boston University Law Review 489 (2021), 543 (arguing that “Amazon’s practice of selling both its own products and those of rivals in close juxtaposition almost certainly benefits consumers by permitting close price comparisons. When Amazon introduces a product such as AmazonBasics AAA batteries in competition with Duracell, prices will go down. There is no evidence to suggest that the practice is so prone to abuse or so likely to harm consumers in other ways that it should be categorically condemned. Rather, it is an act of partial vertical integration similar to other practices that the antitrust laws have confronted and allowed in the past.”).

[92] On the more complex economic rationale of intellectual property, see, e.g., William M. Landes & Richard A. Posner, The Economic Structure of Intellectual Property Law, Cambridge, Harvard University Press (2003).

[93] See, e.g., Italian Competition Authority, 18 Jul. 2023 No. 30737, Case A538 – Sistemi di sigillatura multidiametro per cavi e tubi, (2023) Bulletin No. 31.

[94] See CJEU, 6 Apr. 1995, Joined Cases C-241/91 P and C-242/91 P, RTE and ITP v. Commission, ECLI:EU:C:1995:98; 29 Apr. 2004, Case C-418/01, IMS Health GmbH & Co. OHG v. NDC Health GmbH & Co. KG, ECLI:EU:C:2004:257; General Court, 17 Sep. 2007, Case T-201/04, Microsoft v. Commission, ECLI:EU:T:2007:289; CJEU, 16 Jul. 2015, Case C-170/13, Huawei Technologies Co. Ltd v. ZTE Corp., ECLI:EU:C:2015:477.

[95] See, e.g., Dana Mattioli, How Amazon Wins: By Steamrolling Rivals and Partners, Wall Street Journal (2022), https://www.wsj.com/articles/amazon-competition-shopify-wayfair-allbirds-antitrust-11608235127; Aditya Kalra & Steve Stecklow, Amazon Copied Products and Rigged Search Results to Promote Its Own Brands, Documents Show, Reuters (2021), https://www.reuters.com/investigates/special-report/amazon-india-rigging.

[96] Williams-Sonoma, Inc. v. Amazon.Com, Inc., Case No. 18-cv-07548 (N.D. Cal., 2018). The suit was eventually dismissed, as the parties entered into a settlement agreement: Williams-Sonoma, Inc. v. Amazon.Com, Inc., Case No. 18-cv-07548-AGT (N.D. Cal., 2020).

[97] Amazon Best Sellers, https://www.amazon.com/Best-Sellers/zgbs.

[98] Hovenkamp, supra note 91, 2015-2016.

[99] Nicolas Petit, Big Tech and the Digital Economy, Oxford, Oxford University Press (2020), 224-225.

[100] For a recent analysis, see Zijun (June) Shi, Xiao Liu, Dokyun Lee, & Kannan Srinivasan, How Do Fast-Fashion Copycats Affect the Popularity of Premium Brands? Evidence from Social Media, 60 Journal of Marketing Research 1027 (2023).

[101] Lina M. Khan, Amazon’s Antitrust Paradox, 126 Yale Law Journal 710 (2017), 782.

[102] See Massimo Motta & Martin Peitz, Intervention Triggers and Underlying Theories of Harm, in Market Investigations: A New Competition Tool for Europe? (M. Motta, M. Peitz, & H. Schweitzer, eds.), Cambridge, Cambridge University Press (2022), 16, 59 (arguing that, while it is unclear to what extent products or ideas are worth protecting and/or can be protected from sherlocking and whether such cloning is really harmful to consumers, this is clearly an area where an antitrust investigation for abuse of dominant position would not help).

[103] Khan, supra note 101, 780 and 783 (arguing that Amazon’s conflicts of interest tarnish the neutrality of the competitive process and that the competitive implications are clear, as Amazon is exploiting the fact that some of its customers are also its rivals).

[104] Servizio Elettrico Nazionale, supra note 55, para. 85.

[105] Post Danmark I, supra note 55, para. 22.

[106] Ibáñez Colomo, supra note 55, 21-22.

[107] Id.

[108] See, e.g., DMA, supra note 4, Recital 5 (complaining that the scope of antitrust provisions is “limited to certain instances of market power, for example dominance on specific markets and of anti-competitive behaviour, and enforcement occurs ex post and requires an extensive investigation of often very complex facts on a case by case basis.”).

[109] U.S. Federal Trade Commission, et al. v. Amazon.com, Inc., supra note 23.

[110] Khan, supra note 101.

[111] Khan, supra note 22, 1003, referring to Amazon, Google, and Meta.

PRESENTATIONS & INTERVIEWS

Gus Hurwitz on the Supreme Court’s Murthy Case

ICLE Director of Law & Economics Programs Gus Hurwitz was a guest on The Cyberlaw Podcast, where he discussed the U.S. Supreme Court’s Murthy v. Missouri free speech case and a unanimous decision by the court on when a public official may use a platform’s tools to suppress critics posting on his or her social-media page. Other topics included AI deepfakes, the congressional bill to force the divestment of TikTok, and the Federal Trade Commission’s lawsuit against Meta. Audio of the full episode is embedded below.

R.J. Lehmann on the Florida and California Insurance Markets

ICLE Editor-in-Chief R.J. Lehmann was a guest on Bloomberg’s Odd Lots podcast to discuss state insurance regulation and the role it has played in the collapse of the homeowners insurance markets in California and Florida. The transcript is available here and the full episode is embedded below.

David Teece on Diversity in Corporate Governance

ICLE Academic Affiliate David Teece was a guest on the Insights from the Top podcast to discuss the importance of gender and racial diversity in corporate governance, the state of securitization in emerging markets, what ownership means for rising attorneys, and how the firm has remained strong for more than a century. The full episode is embedded below.

Lazar Radic on the EU’s Apple Fine

ICLE Senior Scholar Lazar Radic joined the Euractiv Tech Brief podcast to discuss the European Commission’s €1.8 billion fine against Apple for alleged abuse of dominant position in the music-streaming market. Audio of the full episode is embedded below.

IN THE MEDIA

Lazar Radic on the Brussels Effect

ICLE Senior Scholar Lazar Radic was quoted by ExchangeWire in a story about how other jurisdictions are looking to copy the EU’s Digital Markets Act. You can read the full piece here.

ICLE’s Lazar Radic calls what the DMA is trying to achieve ‘the Brussels effect’ – “a regulatory contagion from the EU to other places. This would turn it into something like what you have called “the world’s digital police,” at least within the boundaries of the conduct covered by the DMA,” he explains.

…For advertisers, adapting to managing additional silos can further complicate an already tricky task. With the fragmentation of tracking and profiling individuals at scale without relying on third-party data, which is frequently stored in third-party cookies, many marketers are already grappling with this challenge. Radic describes this as “…clearly designed to drive a wedge in gatekeepers’ ad tech model, preventing them from cross-using data between a core platform services and any other service provided by the gatekeeper – for example, between an online search engine, a messaging app, and a social networking app.”

…“Gatekeeper’s ad tech might become less effective, and their ads less relevant. This hurts gatekeepers. In turn, given gatekeepers’ loss of control of advertising on their own platforms, end-users might be exposed to more irrelevant, random advertising noise from third-parties,” explains Radic. “This hurts consumers. The DMA could also impact gatekeepers’ incentives to invest in their platform, seeing as how the regulation purposefully facilitates third parties from free-riding on those investments. This, in the end, hurts everyone.”

Dan Gilman, Geoff Manne, & Brian Albrecht on Out-of-Market Effects

A Truth on the Market piece by ICLE Senior Scholar Daniel J. Gilman, President Geoffrey A. Manne, and Chief Economist Brian Albrecht was cited by the Information Technology & Innovation Foundation in a blog post about labor monopsony effects in merger enforcement. You can read the full piece here.

Finally, there is another problem with the Guidelines’ turn to labor: Benefits to labor can coincide with harms to consumers, creating challenges in merger reviews. Indeed, Gilman et al. highlight this conflict when they ask, “Are mergers to be challenged—and, if challenged, blocked—if they harm workers in a single labor market, even if they are procompetitive (and pro-consumer) in the relevant product market?” In other words, will the agencies be able to conduct effective merger reviews when a merger raises labor market concentration—which may or may not hurt workers—but reduces prices for consumers? Moreover, even if a merger does not result in a conflict between worker and consumer interests, it could result in conflicts between two labor markets that can further hinder effective merger review. Gilman et al. also highlight this when they assert that merger benefits in one labor market could offset the losses in another, resulting in net gains for the overall labor market. In this case, the agencies will also face another challenge in effectively conducting merger reviews when they have to balance gains and losses between labor markets.

Brian Albrecht on the DOJ’s Apple Case

ICLE Chief Economist Brian Albrecht was quoted by The Dispatch in a story about the U.S. Justice Department’s antitrust lawsuit against Apple. You can read the full piece here.

Other observers, however, warn against dismissing the DOJ’s entire suit as frivolous. “It is a complex case,” Brian Albrecht, the chief economist for the International Center for Law and Economics, told TMD. “Anyone who mocks [it] as obviously ridiculous overall, is overstating it.”

Gus Hurwitz on the DOJ’s Apple Antitrust Case

ICLE Director of Law & Economics Programs Gus Hurwitz was quoted by the New York Times in a story about the U.S. Justice Department’s antitrust case against Apple. You can read the full thing here.

And federal prosecutors are explicitly connecting the Apple lawsuit to that earlier fight. “They’re really presenting this case as a successor to that: Microsoft 2.0,” said Gus Hurwitz, a senior fellow at the University of Pennsylvania Carey Law School.

Others say the Microsoft case’s legacy is less clear. Hurwitz told DealBook that the reality was more complicated. Netscape failed in part because a botched upgrade turned off users, while Microsoft missed out on the dawn of internet 2.0 services because of bad strategic decisions.

“In terms of actual industrial changes, I think the case yielded very little,” Hurwitz said.

…“That might open up opportunities for competitors,” Hurwitz said. But he added, “That’s not necessarily the best way of facilitating competition in the market.” 

ICLE on CCCA’s Impact on Reward Cards

ICLE research was cited by Americans for Tax Reform in a recent letter about the Credit Card Competition Act. You can read the full piece here.

The mandates in the bill are so costly that more than $75 billion in rewards that consumers receive every year would largely disappear. According to the International Center for Law & Economics, “86% of credit cardholders have active rewards cards, including 77% of cardholders with a household income of less than $50,000.” The disappearance of rewards would likely harm minority communities and small businesses.

Gus Hurwitz on the DOJ’s Apple Antitrust Suit

ICLE Director of Law & Economics Programs Gus Hurwitz was quoted by The New York Times in an item about the U.S. Justice Department’s antitrust case against Apple. You can read the full piece here.

But some experts think this lawsuit is a stretch. Gus Hurwitz, a senior fellow at the University of Pennsylvania Carey Law School, told DealBook that antitrust policy traditionally hasn’t focused on issues like porting consumer data to different platforms.

He added that while prosecutors were seeking to help some consumers — those who favor switching devices — the lawsuit could end up hurting others. Users of iOS “derive a lot of value from their closed ecosystem,” he said. “Apple users like the closed ecosystem and the benefits that confers on them.”

Geoff Manne on the DOJ’s Apple Lawsuit

ICLE President and Founder Geoffrey A. Manne was quoted by Fortune in a story about the U.S. Justice Department’s antitrust lawsuit against Apple. You can read the full piece here.

“They must know this case will be hard to win. Maybe they feel their best path to victory is creating a strong public atmosphere that Apple is not some noble, pro-consumer actor,” said Geoffrey Manne, president and founder of the International Center for Law and Economics, a Portland-based nonprofit research center.

“A lot of these cases are about changing the public perception of these companies, which in turn deters the behavior.”

Lazar Radic on the DOJ’s Apple Case

ICLE Senior Scholar Lazar Radic was quoted by The Drum in a story about the U.S. Justice Department’s antitrust case against Apple. You can read the full piece here.

But another concern about the case’s strength is raised by Lazar Radic, a professor of law and a senior scholar of competition policy at the International Center for Law & Economics. The case “seems slightly outdated,” Radic says.

In particular, he suggests that some of the issues raised by the plaintiffs appear to have been recently remedied. “One example is the cloud services complaint, which is accusing Apple of suppressing mobile cloud streaming services. Apple changed its policy on [cloud gaming services] earlier this year, to an extent that would address this [concern].”

…The DOJ, in its complaint, would seem to be grasping at straws on this front, Radic suggests. “The DOJ’s complaint is that the color of the text bubbles is different, which disadvantages Android [users]. Apple did address … the downgraded experience when messaging non-iPhone phones. But the DOJ seems to be saying that that’s not enough because having different colors for the bubbles of text still disadvantages non-iPhone users.”

…In addition to what Radic deems “outdated” complaints, others have pointed out that the DOJ’s case omits a handful of concerns that have been at the heart of other antitrust suits against Apple.

Lazar Radic on India’s Competition Law Consultation

ICLE Senior Scholar Lazar Radic was quoted by India’s The Week about calls to extend the nation’s consultation on changes to competition law. You can read the full piece here.

Dr. Lazar Radic, a Senior Scholar at the International Center for Law & Economics and an Adjunct Professor of Law at IE University, said, “India should explore strategies to attract players to the market before regulating them. Regulatory challenges posed by the Digital Markets Act (DMA) might deter gatekeepers from innovating, potentially leading to negative outcomes for users, similar to the delays experienced by Meta’s Thread launch and Bard’s introduction in Europe. These incidents underscore the broader impact on consumer choice and innovation.” He added, “The DMA is also criticized for its vague goals, lack of clear cost-benefit analysis procedures, and rigid structure without exemptions for consumer benefits and industry innovation. India should avoid hastily adopting experimental regulations and instead focus on understanding the objectives behind Europe’s DMA.”

ICLE on Section 214

ICLE was cited in a Law360 story on the Federal Communications Commission’s plan to reimpose Title II on broadband providers. You can read the full piece here.

The International Center for Law & Economics waded into the contentious debate over what regulations would apply to the service if it is reclassified as a Title II service under the Communications Act with a Friday letter that pointed toward an article written by one of the think tank’s own, calling the foreign ownership rules a “trojan horse.” Title II of the act governs telecommunications services. Currently, broadband is regulated as an information service under Title I of the law and is subject to less regulation.

…But according to the think tank, applying Section 214 to broadband companies would “necessitate FCC approval for essential operational decisions, such as network upgrades or service discontinuations, thereby stifling innovation, investment, and the broader objectives of national broadband expansion.”

The think tank also highlights the argument that putting such regulation upon ISPs could “undermine public safety and network resiliency” by making it harder for companies to switch over to newer and safer technologies, since to do so might trigger burdensome regulatory oversight.

Instead, the think tank said it wanted to urge the FCC to “seek a regulatory approach that fosters innovation, investment, and the robust expansion of broadband access across the United States, without imposing unnecessary and counterproductive burdens.”

ICLE on the McDonald’s No-Poach Case

ICLE’s amicus brief in the McDonald’s no-poach case was cited in a story about the case from Law360. You can read the full piece here.

Several business groups including the U.S. Chamber of Commerce supported McDonald’s with briefs to the appeals court. The restaurant chain also received backing for its high court petition from the International Center for Law & Economics and the International Franchise Association.

Geoff Manne on Choice of Law for Privacy

ICLE President Geoff Manne was cited by the American Enterprise Institute (AEI) in a blog post about the paper he and AEI’s Jim Harper recently published proposing a choice-of-law approach to privacy regulation. You can read the full piece here.

But it is real, and in a paper released today, Geoff Manne of the International Center for Law and Economics and I argue for a different solution: choice of law.

Eric Fruits on the Right to Repair

ICLE Senior Scholar Eric Fruits was cited in a blog post by the Competitive Enterprise Institute on the so-called “right to repair.” You can read the full piece here.

Lawmakers should weigh this nominal benefit with the negative consequences that are likely to result from right to repair laws. Such proposals could fuel a black market for spare parts that put consumers at higher risk of theft, according to Juan Londoño, senior policy analyst at the Taxpayers Protection Alliance. Right to repair laws likely violate federal copyright law, could harm the rights of digital creators, and open the floodgates to piracy, according to Devlin Hartline, legal fellow for the Hudson Institute. Consumer data is also at risk. “Many brands cultivate trust by keeping customer data safe and away from prying eyes,” according to Dr. Eric Fruits, senior scholar at International Center for Law & Economics.

ICLE on Murthy v Missouri

ICLE’s amicus brief in the U.S. Supreme Court’s Murthy v. Missouri case was cited by Tech Policy Press in a story about the case. You can read the full piece here.

International Center For Law & Economics

  • The International Center for Law & Economics (ICLE) is a nonprofit, nonpartisan global research and policy center that builds intellectual foundations for sensible, economically sound policy. ICLE promotes using law-and-economics methods and economic learning to inform policy debates. The brief argues that the First Amendment safeguards the marketplace of ideas from government interference, recognizing that challenging false speech with true speech is the solution. Government coercion to suppress dissenting views undermines the scientific enterprise and deprives the public of informed decision-making. Unpopular speech may be silenced without protection, hindering societal progress and democracy. The marketplace allows for the competition of ideas, where truth can prevail over falsehood through rational discourse.
  • The brief also argues that government intervention in content moderation on social media platforms is unconstitutional, as it violates the First Amendment’s prohibition against abridging speech based on its message, ideas, subject matter, or content. The competitive nature of the marketplace of ideas ensures that social media platforms adjust their content moderation practices based on consumer demand and market forces without the need for government interference.
  • The Fifth Circuit’s test for determining government coercion, derived from Bantam Books, fails to capture the essence of the Supreme Court’s ruling, which focuses on the objective actions of the government rather than the subjective response of private actors.

R.J. Lehmann on Inflation and Insurance

ICLE Editor-in-Chief R.J. Lehmann was cited by China Daily in a story about recent inflation trends. You can read the full piece here.

“The classic example is that, you know, a (car) bumper used to be a cheap replacement part, and it’s no longer that way because you have advanced sensors in there — that makes it quite an expensive proposition,” said R.J. Lehmann, a senior fellow at the International Center for Law and Economics, a nonpartisan research center, to The New York Times.

Eric Fruits on the FCC’s Digital-Discrimination Rules

ICLE Senior Scholar Eric Fruits’ writings on digital discrimination were cited in a piece at The Dispatch. You can read the full piece here.

Eric Fruits at Truth on the Market spells out the court challenges to the Federal Communications Commission’s (FCC) order on digital discrimination. The order includes pricing regulation, although the law explicitly forbade it.

Todd Zywicki, Geoffrey Manne, & Julian Morris on the Durbin Amendment

ICLE President Geoff Manne, Senior Scholar Julian Morris, and Nonresident Scholar Todd Zywicki were cited by Regulation in a story about the history of the Durbin amendment. You can read the full piece here.

Other studies have found that, despite possible short-run savings to merchants, the interchange fee cap regulation had an adverse effect on consumers. A 2014 Mercatus Center working paper by Todd Zywicki, Geoffrey Manne, and Julian Morris estimated that the interchange fee cap led to an increase of $22.8 billion in annual costs for consumers, resulting mainly from higher fees on bank accounts, such as monthly maintenance fees, overdraft fees, and ATM fees. They found that the regulation reduced the availability of free checking accounts by 50 percent and increased minimum balance requirements by 23 percent. Moreover, the regulation reduced incentives for card issuers to offer user rewards and benefits such as cash back, points, and discounts. They also found that some issuers reduced the issuance of debit cards, especially to low-income and unbanked consumers, who are more likely to use debit cards for small-value transactions.

Dirk Auer on AI Legal Services

ICLE Director of Competition Policy Dirk Auer was quoted by Law.com in a story about the use of artificial intelligence in legal services. You can read the full piece here.

Dirk Auer is director of competition policy at the International Center for Law & Economics.

He and a colleague have written many papers on the topic of AI and the perception of anti-competitiveness.

He suggests that it may not be a case of being late to the party.

Drawing similarities between law firms and big business, it can be viewed that smaller firms that fear Generative AI may outpace them would do well to think smarter and view AI not as an unfair advantage.

“You’ve seen regulators worrying that, for example, a company like Amazon, with an important web hosting market position, may use that position to dominate the fields of generative AI. Or you see regulators worrying that Google may use its search position and online ecosystem to succeed in the field of generative AI,” Auer said.

He said there is a rising fear that large Internet 2.0 companies may use the advantage they’ve acquired in those markets to dominate generative AI, but in reality, it doesn’t seem like Web 2.0 companies have been very successful in doing this.

“The latest example that we have … is Google releasing Gemini. And while a lot of people have joked about Google Gemini’s perceived ideology, I think the bigger point is that Google’s Gemini seems years behind Open AI rival generative AI service,” Auer said.

Auer says we need to remember that AI can also be used by competitors to enter markets more effectively.

“AI could be used by firms to make collusion more stable, but AI could also be used by firms to detect markets where there is collusion and to enter those markets because there are profits to be made,” Auer said. “If you look at it, sort of the big picture, it’s not entirely clear that AI does more to facilitate collusion than what it does break collusion.”

Gus Hurwitz on Pre-Merger Notification

ICLE Director of Law & Economics Programs Gus Hurwitz was cited by Regulation in a piece about the Federal Trade Commission’s proposed changes to the Hart–Scott–Rodino premerger notification and report form. You can read the full piece here.

These terms are best read together, according to Justin Hurwitz of the International Center for Law and Economics and the University of Pennsylvania’s Carey Law School. In a Regulatory Review article last year, he wrote, “Given that all necessary information could be acquired through a second request, ‘appropriateness’ is a question of whether ‘necessary’ information should be requested of all transactions subject to premerger notification or only of those subject to second requests.”

Gus Hurwitz on the Supreme Court’s NetChoice Cases

ICLE Director of Law & Economics Programs Gus Hurwitz was quoted by Business Insider in a story about the U.S. Supreme Court’s NetChoice v. Paxton and Moody v. NetChoice. You can read the full piece here.

“If the states win, then I expect that we are going to very quickly have a very different sort of internet experience,” Justin (Gus) Hurwitz, academic director of the University of Pennsylvania’s Center for Technology, Innovation & Competition, told BI.

Hurwitz said companies will likely do two things immediately: “The first is they will, at least on a temporary basis, stop hosting content, comments, user-generated speech, discussion forums, and things like that.”

…The second action social companies would likely take, Hurwitz said, would involve identifying new ways to operate in an environment where the government could compel them to host certain types of speech — which could mean blocking features like forums from being accessed in states like Florida or Texas.

…Florida’s argument broaches a broader question raised by the two laws, Hurwitz noted: whether social media companies should be treated as publishers, like a newspaper which has editorial discretion, or common carriers like phone companies, which offer connectivity to everyone regardless of what they’re saying to the person on the other line.

…”And what is social media? You can see how it has characteristics of both,” Hurwitz said, “But they’re not newspapers. They’re not phone companies. They’re not shopping malls or telegraphs. They’re not radio or broadcast TV or cable television. They’re something different. So that’s the dichotomy: Are they more like newspapers or common carriers? The answer might just be, no, that they’re something different altogether, and there’s got to be some other way that the Court tells us we need to think about the First Amendment issues in these cases.”

…Ultimately, Hurwitz noted, five or six justices appeared poised to declare that the laws violate First Amendment precedents. However, he expects the court’s ruling on these cases will raise deeper legal questions than the initial issues at hand.

“This is probably an epochal case. It’s going to raise more questions than it answers and could define the discussions we will have around these topics for the next 10, 20, even 30 years,” Hurwitz said. “And it’s probably going to do very little to actually answer any of those questions — because they’re hard, hard questions. So if you’re watching this case, expecting this is going to answer the issue once and for all, prepare to be disappointed in really interesting ways.”

Eric Fruits on Retrans Blackouts

ICLE Senior Scholar Eric Fruits was cited by Communications Daily in a piece about retransmission blackouts. You can read the full piece here.

The FCC proposal that video subscribers get rebates for programming blackouts due to retransmission consent negotiation loggerheads “looks to be a fool’s errand that may end up doing more harm than good,” International Center for Law & Economics Senior Scholar Eric Fruits blogged Wednesday. The FCC commissioners adopted a retransmission consent blackout rebate NPRM 3-2 in January (see 2401100026). Fruits said the idea might seem fair, as consumers shouldn’t pay for programming they can’t access. However, he said, it’s unclear what party — programmers or multichannel programming video distributors — is more responsible for blackouts. Yet the proposal indicates the FCC thinks MVPDs are to blame, he said. That could bolster cord cutting and incentivize MVPDs offering lower compensation to broadcasters to offset the rebate costs, hurting smaller or local programmers that rely on retrans fees, he said.

Geoff Manne on the FTC’s Amazon Suit

ICLE President Geoffrey A. Manne was quoted in a column in Forbes about the Federal Trade Commission’s antitrust suit against Amazon. You can read the full piece here.

Legal scholars differ on the antitrust prosecution of platforms. For example, a leading U.S. antitrust treatise writer, Professor Herbert Hovenkamp, generally supports the antitrust challenges to the platforms, seeing a need to restore competitive behavior (particularly in the case of Amazon and Facebook) or deal with “natural monopoly” concerns (Google). Stanford professor Douglas Melamed states that Google’s conduct “clearly has some legitimate benefits, and the question is how the courts are going to fit that into the overall analysis.” Howard University professor Andrew Gavil says that “[t]he allegations are definitely serious” in the 2020 case against Google, but “[w]hether the evidence will pan out is the big question.” Law and economics scholar Geoffrey Manne believes that the FTC’s suit against Amazon “will face an uphill battle before the courts.”

Eric Fruits on the FTC’s Challenge to Kroger/Albertsons Merger

ICLE Senior Scholar Eric Fruits was quoted by The Oregonian about the Federal Trade Commission’s challenge of the proposed merger of Kroger and Albertsons. You can read the full piece here.

“The current FTC, philosophically and politically, is really opposed to big mergers in general,” said Eric Fruits, an antitrust researcher at the Portland-based nonprofit International Center for Law & Economics. “They dropped a lot of strong hints that they were opposed to it, so we kind of knew that we were going to end up where we are today.”

…“I think that Kroger and Albertsons believe that they need to do this deal because the food retail market has changed in such a way that it just can’t support competing against each other,” Fruits said. “They have so much more competitive pressure.”

Eric Fruits on the FTC’s Kroger/Albertsons Case

ICLE Senior Scholar Eric Fruits was quoted by the Cincinnati Business Courier about the Federal Trade Commission’s challenge of the proposed merger of Kroger and Albertsons. You can read the full piece here.

Eric Fruits, a Cincinnati native who is senior scholar at the Portland, Ore.-based International Center for Law & Economics, expects the process to run into 2025. Lawyers will need time to take depositions. It could take a month to hear the case and another month for the judge to decide, he said. “It would be amazing if it got done by the end of the year,” he said.

…Despite the FTC’s strong opposition, experts say Kroger’s acquisition of Albertsons is actually a little more likely than not to go through. Fruits gives it a “slightly better than 50-50” chance of getting completed. One of his arguments: Judges don’t like to be trailblazers, and precedent shows that grocery deals get approved.

“I think Kroger and Albertsons make a pretty good case that this merger won’t harm consumers or workers,” he said. But the FTC is driven to win this one. “The current FTC has taken a very aggressive approach, and my guess is they really want a check mark in the win column,” he said.

Fruits believes the true size of the competitive market favors Kroger and Albertsons.

Kristian Stout on Studying Spectrum Needs

ICLE Director of Innovation Policy Kristian Stout was quoted by Communications Daily in a story about the NTIA’s spectrum strategy. You can read the full piece here.

NTIA should move quickly to study the bands highlighted, said Kristian Stout, director-innovation policy at the International Center for Law & Economics. “There is a clear need to provide spectrum for both licensed and unlicensed uses” and “innovative spectrum sharing models” like CBRS have proven successful, he said.

Kristian Stout on the FCC’s Enhanced Competition Incentive Program

ICLE Director of Innovation Policy Kristian Stout was quoted by Communications Daily in a story about the Federal Communications Commission’s Enhanced Competition Incentive Program. You can read the full piece here.

Kristian Stout, director-innovation policy at the International Center for Law & Economics, said it’s probably too early to tell if ECIP will have success, and he’s not hearing much so far. Success will depend on how attractive regulatory incentives and requirements are, as well as “market conditions, the interest and financial capability of smaller entities to acquire spectrum, the level of awareness and outreach about the program and the FCC’s monitoring and adaptability to feedback,” Stout wrote in an email.

The success of ECIP “hinges on whether the incentives provided can sufficiently outweigh the costs and competitive risks for larger carriers, alongside market readiness for spectrum redistribution due to demands for wireless services or technological advancements like 5G,” Stout added.

Lazar Radic on the EU’s Apple Decision

ICLE Senior Scholar Lazar Radic was quoted by The Drum in a story about the European Commission’s €1.8 billion fine of Apple. You can read the full piece here.

One expert taking issue with the calculation of the Commission’s fine is Lazar Radic, a professor of law and a senior scholar of competition policy at the International Center for Law & Economics, who calls the lump sum “rather arbitrarily determined.” He adds that “it is unclear why such deterrence is needed” in light of the EU’s Digital Markets Act (DMA), which allows regulators to issue fines of up to 10% of an organization’s global turnover, or up to 20% for repeat infringements.

Mario Zúñiga on Sika/Group Chema

ICLE Senior Scholar Mario Zúñiga was quoted by Semana Económica in a story about Sika’s proposed purchase of Group Chema. You can read the full piece (in Spanish) here.

“Starting from the premise that harm to competition could arise, Indecopi could [order Sika] not to purchase a certain line of assets, or to sell to a third party the stores located in a given area where only Sika and Chema are present,” says Mario Zúñiga, senior scholar in competition policy at the International Center for Law & Economics (ICLE).

Brian Albrecht on Surge Pricing

ICLE Chief Economist Brian Albrecht was cited in a blog post from the Foundation for Economic Education about Wendy’s dynamic-pricing plan. You can read the full piece here.

There seems to be some tactful reframing going on here. It’s much better marketing to say they are lowering prices during off-peak periods rather than to say they are increasing prices during a surge. But as the economist Brian Albrecht has pointed out, that kind of amounts to the same thing.

Ben Sperry on the NetChoice Cases

ICLE Senior Scholar Ben Sperry was cited by Disruptive Competition Project in a blog post about the U.S. Supreme Court’s recent NetChoice v. Paxton and Moody v. NetChoice cases. You can read the full piece here.

Ben Sperry, Senior Scholar at the International Center for Law & Economics, focused on the multiple invocations of George Orwell and his dystopian novel 1984 during oral argument. Sperry stated that certain questions from Justices Samuel Alito and Clarence Thomas appeared to suggest that they believe social-media companies do engage in censorship, but they are confusing the right of private actors to set rules for their property with government oppression. Sperry rebutted the Justices’ line of logic, commenting: “Social-media companies can kick you off their platform or restrict your ability to post, but that’s about it. They can’t put you in jail. However much social media is the “modern public square,” it remains private property, and they have the right to exercise editorial discretion. The only thing Orwellian is to conflate this obvious distinction.”

ICLE ON SOCIAL MEDIA

March Threads 2024

Threads from ICLE scholars on trending issues for the month of March 2024.