
Murthy Oral Arguments: Standing, Coercion, and the Difficulty of Stopping Backdoor Government Censorship


With Monday’s oral arguments in Murthy v. Missouri, we now have more of a feel for how the U.S. Supreme Court appears to be considering the issues of social-media censorship—in this case, done allegedly at the behest of federal officials.

In the International Center for Law & Economics’ (ICLE) amicus brief in the case, we argued that the First Amendment protects a marketplace of ideas, and government agents can’t intervene in that marketplace by coercing social-media companies into removing disfavored speech. But if the oral arguments are any indication, there are reasons to be skeptical that the Court will uphold the preliminary injunction the district court issued against the government officials (later upheld in a more limited form by the 5th U.S. Circuit Court of Appeals).


A Choice-of-Law Alternative to Federal Preemption of State Privacy Law


Executive Summary

A prominent theme in debates about US national privacy legislation is whether federal law should preempt state law. A federal statute could create one standard for markets that are obviously national in scope. Another approach is to allow states to be “laboratories of democracy” that adopt different laws so they can discover the best ones.

We propose a federal statute requiring states to recognize contractual choice-of-law provisions, so companies and consumers can choose what state privacy law to adopt. Privacy would continue to be regulated at the state level. However, the federal government would provide for jurisdictional competition among states, such that companies operating nationally could comply with the privacy laws of any one state.

Our proposed approach would foster a double competition aimed at discerning and delivering on consumers’ true privacy interests: market competition to deliver privacy policies that consumers prefer and competition among states to develop the best privacy laws.

Unlike a single federal privacy law, this approach would provide 50 competing privacy regimes for national firms. The choice-of-law approach can trigger competition and innovation in privacy practices while preserving a role for meaningful state privacy regulation.


The question of preemption of state law by the federal government has bedeviled debates about privacy regulation in the United States. A prominent theme is to propose a national privacy policy that largely preempts state policies to create one standard for markets that are obviously national. Another approach is to allow states to be “laboratories of democracy” that adopt different laws, with the hope that they will adopt the best rules over time. Both approaches have substantial costs and weaknesses.

The alternative approach we propose would foster a double competition aimed at discerning and delivering on consumers’ true privacy interests: market competition to deliver privacy policies that consumers prefer and competition among states to develop the best privacy laws. Indeed, our proposal aims to obtain the best features—and avoid the worst features—of both a federal regime and a multistate privacy law regime by allowing firms and consumers to agree on compliance with the single regime of their choosing.

Thus, we propose a federal statute requiring states to recognize contractual choice-of-law provisions, so companies and consumers can choose what state privacy law to adopt. Privacy would continue to be regulated at the state level. However, the federal government would provide for jurisdictional competition among states, and companies operating nationally could comply with the privacy laws of any one state.

Unlike a single federal privacy law, this approach would provide 50 competing privacy regimes for national firms. Protecting choice of law can trigger competition and innovation in privacy practices while preserving a role for meaningful state privacy regulation.

The Emerging Patchwork of State Privacy Statutes Is a Problem for National Businesses

A strong impetus for federal privacy legislation is the opportunity national and multinational businesses see to alleviate the expense and liability of having a patchwork of privacy statutes with which they must comply in the United States. Absent preemptive legislation, they could conceivably operate under 50 different state regimes, which would increase costs and balkanize their services and policies without corresponding gains for consumers. Along with whether a federal statute should have a private cause of action, preempting state law is a top issue when policymakers roll up their sleeves and discuss federal privacy legislation.

But while the patchwork argument is real, it may be overstated. There are unlikely ever to be 50 distinct state regimes; rather, state statutes will likely cluster into a small number of types, as jurisdictions follow one another's leads and group together, including by promulgating model state statutes.[1] Nor do states follow the worst examples from their brethren, as the dearth of biometric statutes modeled on Illinois's legislation illustrates.[2]

Along with fewer “patches,” the patchwork’s costs will tend to diminish over time as states land on relatively stable policies, allowing compliance to be somewhat routinized.

Nonetheless, the patchwork is far from ideal. It is costly to firms doing business nationally. It costs small firms more per unit of revenue, raising the bar to new entry and competition. And it may confuse consumers about what their protections are (though consumers don’t generally assess privacy policies carefully anyway).

But a Federal Privacy Statute Is Far from Ideal as Well

Federal preemption has many weaknesses and costs as well. Foremost, it may not deliver meaningful privacy to consumers. This is partially because “privacy” is a congeries of interests and values that defy capture.[3] Different people prioritize different privacy issues differently. In particular, the elites driving and influencing legislation may prioritize certain privacy values differently from consumers, so legislation may not serve most consumers’ actual interests.[4]

Those in the privacy-regulation community sometimes assume that passing privacy legislation ipso facto protects privacy, but that is not a foregone conclusion. The privacy regulations issued under the Gramm-Leach-Bliley Act (concerning financial services)[5] and the Health Insurance Portability and Accountability Act (concerning health care)[6] did not usher in eras of consumer confidence about privacy in their respective fields.

The short-term benefits of preempting state law may come with greater long-term costs. One cost is the likely drop in competition among firms around privacy. Today, as some have noted, “Privacy is actually a commercial advantage. . . . It can be a competitive advantage for you and build trust for your users.”[7] But federal privacy regulation seems almost certain to induce firms to treat compliance as the full measure of privacy to offer consumers. Efforts to outperform or ace out one another will likely diminish.[8]

Another long-term cost of preempting state law is the drop in competition among states to provide well-tuned privacy and consumer-protection legislation. Our federal system's practical genius, which Justice Louis Brandeis articulated 90 years ago in New State Ice Co. v. Liebmann, is that state variation allows natural experiments in what best serves society—business and consumer interests alike.[9] Because variations are allowed, states can amend their laws individually, learn from one another, adapt, and converge on good policy.

The economic theory of federalism draws heavily from the Tiebout model.[10] Charles Tiebout argued that competing local governments could, under certain conditions, produce public goods more efficiently than the national government could. Local governments act as firms in a marketplace for taxes and public goods, and consumer-citizens match their preferences to the providers. Efficient allocation requires mobile people and resources, enough jurisdictions with the freedom to set their own laws, and limited spillovers among jurisdictions (effects of one jurisdiction’s policies on others).

A related body of literature on “market-preserving federalism” argues that strong and self-reinforcing limits on national and local power can preserve markets and incentivize economic growth and development.[11] The upshot of this literature is that when local jurisdictions can compete on law, not only do they better match citizens’ policy preferences, but the rules tend toward greater economic efficiency.

In contrast to the economic gains from decentralization, moving authority over privacy from states to the federal government may have large political costs. It may deepen Americans' growing dissatisfaction with their democracy. Experience belies the ideal of responsive national government when consumers, acting as citizens, want to learn about or influence the legislation and regulation that governs more and more areas of their lives. The "rejectionist" strain in American politics that Donald Trump's insurgency and presidency epitomized may illustrate deep dissatisfaction with American democracy that has been growing for decades. Managing a highly personal and cultural issue like privacy through negotiation between large businesses and anonymous federal regulators would deepen trends that probably undermine the government's legitimacy.

To put a constitutional point on it, preempting states on privacy contradicts the original design of our system, which assigned limited powers to the federal government.[12] The federal government’s enumerated powers generally consist of national public goods—particularly defense. The interstate commerce clause, inspired by state parochialism under the Articles of Confederation, exists to make commerce among states (and with tribes) regular; it is not rightly a font of power to regulate the terms and conditions of commerce generally.[13]

Preempting state law does not necessarily lead to regulatory certainty, as is often imagined. Section 230 of the Communications Decency Act may defeat once and for all the idea that federal legislation creates certainty.[14] More than a quarter century after its passage, it is hotly debated in Congress and threatened in the courts.[15]

The Fair Credit Reporting Act (FCRA) provides a similar example.[16] Passed in 1970, it comprehensively regulated credit reporting. Since then, Congress has amended it dozens of times, and regulators have made countless alterations through interpretation and enforcement.[17] The Consumer Financial Protection Bureau recently announced a new inquiry into data brokering under the FCRA.[18] That is fine, but it illustrates that the FCRA did not solve problems and stabilize the law. It just moved the jurisdiction to Washington, DC.

Meanwhile, as regulatory theory predicts, credit reporting has become a three-horse race.[19] A few slow-to-innovate firms have captured and maintained dominance thanks partially to the costs and barriers to entry that uniform regulation creates.

Legal certainty may be a chimera while business practices and social values are in flux. Certainty develops over time as industries settle into familiar behaviors and roles.

An Alternative to Preemption: Business and Consumer Choice

One way to deal with this highly complex issue is to promote competition for laws. The late, great Larry Ribstein, with several coauthors over the years, proposed one such legal mechanism: a law market empowered by choice-of-law statutes.[20] Drawing on the notion of market competition as a discovery process,[21] Ribstein and Henry Butler explained:

In order to solve the knowledge problem and to create efficient legal technologies, the legal system can use the same competitive process that encourages innovation in the private sector—that is, competition among suppliers of law. As we will see, this entails enforcing contracts among the parties regarding the applicable law. The greater the knowledge problem the more necessary it is to unleash markets for law to solve the problem.[22]

The proposal set forth below promotes just such competition and solves the privacy-law patchwork problem without the costs of federal preemption. It does this through a simple procedural regulation requiring states to enforce choice-of-law terms in privacy contracts, rather than through a heavy-handed, substantive federal law. Inspired by Butler and Ribstein’s proposal for pluralist insurance regulation,[23] the idea is to make the choice of legal regime a locus of privacy competition.

Modeled on the US system of state incorporation law, our proposed legislation would leave firms generally free to select the state privacy law under which they do business nationally. Firms would inform consumers, as they must to form a contract, that a given state’s laws govern their policies. Federal law would ensure that states respect those choice-of-law provisions, which would be enforced like any other contract term.

This would strengthen and deepen competition around privacy. If firms believed privacy was a consumer interest, they could select highly protective state laws and advertise that choice, currying consumer favor. If their competitors chose relatively lax state law, they could advertise to the public the privacy threats behind that choice. The process would help hunt out consumers' true interests through an ongoing argument before consumers. Businesses' and consumers' ongoing choices—rather than a single choice by Congress followed by blunt, episodic amendments—would shape the privacy landscape.

The way consumers choose in the modern marketplace is a broad and important topic that deserves further study and elucidation. It nevertheless seems clear—and it is rather pat to observe—that consumers do not carefully read privacy policies and balance their implications. Rather, a hive mind of actors, including competitors, advocates, journalists, regulators, and politicians, pores over company policies and practices. Consumers take in branding and advertising, reputation, news, personal recommendations, rumors, and trends to decide on the services they use and how they use them.

That detail should not be overlooked: Consumers may use services differently based on the trust they place in them to protect privacy and related values. Using an information-intensive service is not a proposition to share everything or nothing. Consumers can and do shade their use and withhold information from platforms and services depending on their perceptions of whether the privacy protections offered meet their needs.

There is reason to be dissatisfied with the modern marketplace, in which terms of service and privacy policies are offered to the individual consumer on a “take it or leave it” basis. There is a different kind of negotiation, described above, between the hive mind and large businesses. But when the hive mind and business have settled on terms, individuals cannot negotiate bespoke policies reflecting their particular wants and needs. This collective decision-making may be why some advocates regard market processes as coercive. They do not offer custom choices to all but force individual consumers into channels cut by all.

The solution that orthodox privacy advocates offer does not respond well to this problem, because they would replace “take it or leave it” policies crafted in the crucible of the marketplace with “take it or leave it” policies crafted in a political and regulatory crucible. Their prescriptions are sometimes to require artificial notice and “choice,” such as whether to accept cookies when one visits websites. This, as experience shows, does not reach consumers when they are interested in choosing.

Choice of law in privacy competition is meant to preserve manifold choices when and where consumers make their choices, such as at the decision to transact, and then let consumers choose how they use the services they have decided to adopt. Let new entrants choose variegated privacy-law regimes, and consumers will choose among them. That does not fix the whole problem, but at least it doesn’t replace consumer choice with an “expert” one-size-fits-all choice.

In parallel to business competition around privacy choice of law, states would compete with one another to provide the most felicitous environment for consumers and businesses. Some states would choose more protection, seeking the rules businesses would choose to please privacy-conscious consumers. Others might choose less protection, betting that consumers prefer goods other than information control, such as free, convenient, highly interactive, and custom services.

Importantly, this mechanism would allow companies to opt in to various privacy regimes based on the type of service they offer, enabling a degree of fine-tuning appropriate for different industries and different activities that no alternative would likely offer. This would not only result in the experimentation and competition of federalism but also enable multiple overlapping privacy-regulation regimes, avoiding the “one-size-doesn’t-fit-all” problem.

While experimentation continued, state policies would probably rationalize and converge over time. There are institutions dedicated to this, such as the Uniform Law Commission, which is at its best when it harmonizes existing laws based on states’ experience.[24]

It is well within the federal commerce power to regulate state enforcement of choice-of-law provisions, because states may use them to limit interjurisdictional competition. Controlling that is precisely what the commerce power is for. Utah’s recent Social Media Regulation Act[25] barred enforcement of choice-of-law provisions, an effort to regulate nationally from a state capital. Federally backing contractual choice-of-law selections would curtail this growing problem.

At the same time, what our proposed protections for choice-of-law rules do is not much different from what contracts already routinely do and courts enforce in many industries. Contracting parties often specify the governing state’s law and negotiate for the law that best suits their collective needs.

Indeed, sophisticated business contracts increasingly include choice-of-law clauses that state the law that the parties wish to govern their relationship. In addition to settling uncertainty, these clauses might enable the contracting parties to circumvent those states’ laws they deem to be undesirable.[26]

This practice is not only business-to-business. Consumers regularly enter into contracts that include choice-of-law clauses—including regarding privacy law. Credit card agreements, stock and mutual fund investment terms, consumer-product warranties, and insurance contracts, among many other legal agreements, routinely specify the relevant state law that will govern.

In these situations, the insurance company, manufacturer, or mutual fund has effectively chosen the law. The consumer participates in this choice only to the same extent that she participates in any choices related to mass-produced products and services, that is, by deciding whether to buy the product or service.[27]

Allowing contracting parties to create their own legal certainty by contract would likely rankle states. Indeed, “we might expect governments to respond with hostility to the enforcement of choice-of-law clauses. In fact, however, the courts usually do enforce choice-of-law clauses.”[28] With some states trying to regulate nationally and some effectively doing so, the choice the states collectively face is having a role in privacy regulation or no role at all. Competition is better for them than exclusion from the field or minimization of their role through federal preemption of state privacy law. This proposal thus advocates simple federal legislation that preserves firms’ ability to make binding choice-of-law decisions and states’ ability to retain a say in the country’s privacy-governance regime.

Avoiding a Race to the Bottom

Some privacy advocates may object that state laws will not sufficiently protect consumers.[29] Indeed, there is literature arguing that federalism will produce a race to the bottom (i.e., competition leading every state to effectively adopt the weakest law possible), for example, when states offer incorporation laws that are the least burdensome to business interests in a way that arguably diverges from public or consumer interests.[30]

The race-to-the-bottom framing slants the issues and obscures ever-present trade-offs, however. Rules that give consumers high levels of privacy come at a cost in social interaction, price, and the quality of the goods they buy and services they receive. It is not inherently “down” or bad to prefer cheap or free goods and plentiful, social, commercial interaction. It is not inherently “up” or good to opt for greater privacy.

The question is what consumers want. The answers to that question—yes, plural—are the subject of constant research through market mechanisms when markets are free to experiment and are functioning well. Consumers’ demands can change over time through various mechanisms, including experience with new technologies and business models. We argue for privacy on the terms consumers want. The goal is maximizing consumer welfare, which sometimes means privacy and sometimes means sharing personal information in the interest of other goods. There is no race to the bottom in trading one good for another.

Yet the notion of a race to the bottom persists—although not without controversy. In the case of Delaware’s incorporation statutes, the issue is highly contested. Many scholars argue that the state’s rules are the most efficient—that “far from exploiting shareholders, . . . these rules actually benefit shareholders by increasing the wealth of corporations chartered in states with these rules.”[31]

As always, there are trade-offs, and the race-to-the-bottom hypothesis requires some unlikely assumptions. Principal among them, as Jonathan Macey and Geoffrey Miller discuss, is the assumption that state legislators are beholden to the interests of corporations over other constituencies vying for influence. As Macey and Miller explain, the presence of a powerful lobby of specialized and well-positioned corporate lawyers (whose interests are not the same as those of corporate managers) transforms the analysis and explains the persistence and quality of Delaware corporate law.[32]

In much the same vein, there are several reasons to think competition for privacy rules would not succumb to a race to the bottom.

First, if privacy advocates are correct, consumers put substantial pressure on companies to adopt stricter privacy policies. Simply opting in to the weakest state regime would not, as with corporate law, be a matter of substantial indifference to consumers but would (according to advocates) run contrary to their interests. If advocates are correct, firms avoiding stronger privacy laws would pay substantial costs. As a result, the impetus for states to offer weaker laws would be diminished. And, consistent with Macey and Miller’s “interest-group theory” of corporate law,[33] advocates themselves would be important constituencies vying to influence state privacy laws. Satisfying these advocates may benefit state legislators more than satisfying corporate constituencies does.

Second, “weaker” and “stronger” would not be the only dimensions on which states would compete for firms to adopt their privacy regimes. Rather, as mentioned above, privacy law is not one-size-fits-all. Different industries and services entail different implications for consumer interests. States could compete to specialize in offering privacy regimes attractive to distinct industries based on interest groups with particular importance to their economies. Minnesota (home of the Mayo Clinic) and Ohio (home of the Cleveland Clinic), for example, may specialize in health care and medical privacy, while California specializes in social media privacy.

Third, insurance companies are unlikely to be indifferent to the law that the companies they cover choose. Indeed, to the extent that insurers require covered firms to adopt specific privacy practices to control risk, those insurers would likely relish the prospect of outsourcing the oversight of these activities to state law enforcers. States could thus compete to mimic large insurers’ privacy preferences—which would by no means map onto “weaker” policies—to induce insurers to require covered firms to adopt their laws.

If a race to the bottom is truly a concern, the federal government could offer a 51st privacy alternative (that is, an optional federal regime as an alternative to the states’ various privacy laws). Assuming federal privacy regulation would be stricter (an assumption inherent in the race-to-the-bottom objection to state competition), such an approach would ensure that at least one sufficiently strong opt-in privacy regime would always be available. Among other things, this would preclude firms from claiming that no option offers a privacy regime stronger than those of the states trapped in the (alleged) race to the bottom.

Choice of law exists to a degree in the European Union, a trading bloc commonly regarded as uniformly regulated (and commonly regarded as superior on privacy because of a bias toward privacy over other goods). The General Data Protection Regulation (GDPR) gives EU member states broad authority to derogate from its provisions and create state-level exemptions. Article 23 of the GDPR allows states to exempt themselves from EU-wide law to safeguard nine listed broad governmental and public interests.[34] And Articles 85 through 91 provide for derogations, exemptions, and powers to impose additional requirements relative to the GDPR for a number of “specific data processing situations.”[35]

Finally, Article 56 establishes a "lead supervisory authority" for each business.[36] In the political, negotiated processes under the GDPR, this effectively allows companies to shade their regulatory obligations and enforcement outlook through their choices of location. For the United States' sharper rule-of-law environment, we argue that the choice of law should be explicit and clear.

Refining the Privacy Choice-of-Law Proposal

The precise contours of a federal statute protecting choice-of-law terms in contracts will determine whether it successfully promotes interfirm and interstate competition. Language will also determine its political salability.

Questions include: What kind of notice, if any, should be required to make consumers aware that they are dealing with a firm under a law regime not their own? Consumers are notoriously unwilling to investigate privacy terms—or any other contract terms—in advance, and few would consciously weigh a choice-of-law provision on their own. But the competitive dynamics described earlier would probably communicate relevant information to consumers even without any required notice. As always, competitors will have an incentive to ensure consumers are appropriately well-informed when they can diminish their rivals or elevate themselves in comparison by doing so.[37]

Would there be limits on which state’s laws a firm could choose? For example, could a company choose the law of a state where neither the company nor the consumer is domiciled? States would certainly argue that a company should not be able to opt out of the law of the state where it is domiciled. The federal legislation we propose would allow unlimited choice. Such a choice is important if the true benefits of jurisdictional competition are to be realized.

A federal statute requiring states to enforce choice-of-law terms should not override state law denying enforcement of choice-of-law terms that are oppressive, unfair, or improperly bargained for. In cases such as Carnival Cruise Lines v. Shute[38] and The Bremen v. Zapata Off-Shore Co.,[39] the Supreme Court has considered whether forum-selection clauses in contracts might be invalid. The Court has generally upheld such clauses, but they can be oppressive if they require plaintiffs in Maine to litigate in Hawaii, for example, without a substantial reason why Hawaii courts are the appropriate forum. Choice-of-law terms do not impose the cost of travel to remote locations, but they could be used not to establish the law governing the parties but rather to create a strategic advantage unrelated to the law in litigation. Deception built into a contract’s choice-of-law terms should remain grounds for invalidating the contract under state law, even if the state is precluded from barring choice-of-law terms by statute.

The race-to-the-bottom argument raises the question of whether impeding states from overriding contractual choice-of-law provisions would be harmful to state interests, especially since privacy law concerns consumer rights. However, there are reasons to believe race-to-the-bottom incentives would be tempered by greater legal specialization and certainty and by state courts’ ability to refuse to enforce choice-of-law clauses in certain limited circumstances. As Erin O’Hara and Ribstein put it:

Choice-of-law clauses reduce uncertainty about the parties' legal rights and obligations and enable firms to operate in many places without being subject to multiple states' laws. These reduced costs may increase the number of profitable transactions and thereby increase social wealth. Also, the clauses may not change the results of many cases because courts in states that prohibit a contract term might apply the more lenient law of a state that has close connections with the parties even without a choice-of-law clause.[40]

Determining when, exactly, a state court can refuse to enforce a firm’s choice of privacy law because of excessive leniency is tricky, but the federal statute could set out a framework for when a court could apply its own state’s law. Much like the independent federal alternative discussed above, specific minimum requirements in the federal law could ensure that any race to the bottom that does occur can go only so far. Of course, it would be essential that any such substantive federal requirements be strictly limited, or else the benefits of jurisdictional competition would be lost.

The converse to the problem of a race to the bottom resulting from state competition is the “California effect”—the prospect of states adopting onerous laws from which no company (or consumer) can opt out. States can regulate nationally through one small tendril of authority: the power to prevent businesses and consumers from agreeing on the law that governs their relationships. If a state regulates in a way that it thinks will be disfavored, it will bar choice-of-law provisions in contracts so consumers and businesses cannot exercise their preference.

Utah’s Social Media Regulation Act, for example, includes mandatory age verification for all social media users,[41] which requires companies to collect proof that consumers are either of age or not located in Utah. To prevent consumers and businesses from avoiding this onerous requirement, Utah bars waivers of the law’s requirements “notwithstanding any contract or choice-of-law provision in a contract.”[42] If parties could choose their law, that would render Utah’s law irrelevant, so Utah cuts off that avenue. This demonstrates the value of a proposal like the one contemplated here.

Proposed Legislation

Creating a federal policy to stop national regulation coming from state capitols, while still preserving competition among states and firms, is unique. Congress usually creates its own policy and preempts states in that area to varying degrees. There is a well-developed law around this type of preemption, which is sometimes implied and sometimes expressed in statute.[43] Our proposal does not operate that way. It merely withdraws state authority to prevent parties from freely contracting about the law that applies to them.

A second minor challenge concerns the subject matter about which states may not regulate choice of law. Barring states from regulating choice of law entirely is an option, but if the focus is on privacy only, the preemption must be drafted to allow regulation of choice of law in other areas. Thus, the scope of “privacy” must be defined in the statutory language.

Finally, the withdrawal of state authority should probably be limited to positive enactments, such as statutes and regulations, leaving intact common-law practice related to choice-of-law provisions.[44] “Statute,” “enactment,” and “provision” are preferable in preemptive language to “law,” which is ambiguous.

These challenges, and possibly more, are tentatively addressed in the following first crack at statutory language, inspired by several preemptive federal statutes, including the Employee Retirement Income Security Act of 1974,[45] the Airline Deregulation Act,[46] the Federal Aviation Administration Authorization Act of 1994,[47] and the Federal Railroad Safety Act.[48]

A state, political subdivision of a state, or political authority of at least two states may not enact or enforce any statute, regulation, or other provision barring the adoption or application of any contractual choice-of-law provision to the extent it affects contract terms governing commercial collection, processing, security, or use of personal information.


This report introduces a statutory privacy framework centered on individual states and consistent with the United States’ constitutional design. But it safeguards companies from the challenge created by the intersection of that design and the development of modern commerce and communication, which may require them to navigate the complexities and inefficiencies of serving multiple regulators. It fosters an environment conducive to jurisdictional competition and experimentation.

We believe giving states the chance to compete under this approach should be explored in lieu of consolidating privacy law in the hands of one central federal regulator. Competition among states to provide optimal legislation and among businesses to provide optimal privacy policies will help discover and deliver on consumers’ interests, including privacy, of course, but also interactivity, convenience, low costs, and more.

Consumers’ diverse interests are not known now, and they cannot be predicted reliably for the undoubtedly interesting technological future. Thus, it is important to have a system for discovering consumers’ interests in privacy and the regulatory environments that best help businesses serve consumers. It is unlikely that a federal regulatory regime can do these things. The federal government could offer a 51st option in such a system, of course, so advocates for federal involvement could see their approach tested alongside the states’ approaches.

[1] See Uniform Law Commission, “What Is a Model Act?,”

[2] 740 Ill. Comp. Stat. 14/15 (2008).

[3] See Jim Harper, Privacy and the Four Categories of Information Technology, American Enterprise Institute, May 26, 2020,

[4] See Jim Harper, “What Do People Mean by ‘Privacy,’ and How Do They Prioritize Among Privacy Values? Preliminary Results,” American Enterprise Institute, March 18, 2022,

[5] Gramm-Leach-Bliley Act, 15 U.S.C. 6801, § 501 et seq.

[6] Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, § 264.

[7] Estelle Masse, quoted in Ashleigh Hollowell, “Is Privacy Only for the Elite? Why Apple’s Approach Is a Marketing Advantage,” VentureBeat, October 18, 2022,

[8] Competition among firms regarding privacy is common, particularly in digital markets. Notably, Apple has implemented stronger privacy protections than most of its competitors have, particularly with its App Tracking Transparency framework in 2021. See, for example, Brian X. Chen, “To Be Tracked or Not? Apple Is Now Giving Us the Choice,” New York Times, April 26, 2021, For Apple, this approach is built into the design of its products and offers what it considers a competitive advantage: “Because Apple designs both the iPhone and processors that offer heavy-duty processing power at low energy usage, it’s best poised to offer an alternative vision to Android developer Google which has essentially built its business around internet services.” Kif Leswing, “Apple Is Turning Privacy into a Business Advantage, Not Just a Marketing Slogan,” CNBC, June 8, 2021, Apple has built a substantial marketing campaign around these privacy differentiators, including its ubiquitous “Privacy. That’s Apple.” slogan. See Apple, “Privacy,” Similarly, “Some of the world’s biggest brands (including Unilever, AB InBev, Diageo, Ferrero, Ikea, L’Oréal, Mars, Mastercard, P&G, Shell, Unilever and Visa) are focusing on taking an ethical and privacy-centered approach to data, particularly in the digital marketing and advertising context.” Rachel Dulberg, “Why the World’s Biggest Brands Care About Privacy,” Medium, September 14, 2021,

[9] New State Ice Co. v. Liebmann, 285 US 262, 311 (1932) (Brandeis, J., dissenting) (“To stay experimentation in things social and economic is a grave responsibility. Denial of the right to experiment may be fraught with serious consequences to the Nation. It is one of the happy incidents of the federal system that a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.”).

[10] See Charles M. Tiebout, “A Pure Theory of Local Expenditures,” Journal of Political Economy 64, no. 5 (1956): 416–24,

[11] See, for example, Barry R. Weingast, “The Economic Role of Political Institutions: Market-Preserving Federalism and Economic Development,” Journal of Law, Economics, & Organization 11, no. 1 (April 1995): 1–31,; Yingyi Qian and Barry R. Weingast, “Federalism as a Commitment to Preserving Market Incentives,” Journal of Economic Perspectives 11, no. 4 (Fall 1997): 83–92,; and Rui J. P. de Figueiredo Jr. and Barry R. Weingast, “Self-Enforcing Federalism,” Journal of Law, Economics, & Organization 21, no. 1 (April 2005): 103–35,

[12] See US Const. art. I, § 8 (enumerating the powers of the federal Congress).

[13] See generally Randy E. Barnett, Restoring the Lost Constitution: The Presumption of Liberty (Princeton, NJ: Princeton University Press, 2014), 274–318.

[14] Protection for Private Blocking and Screening of Offensive Material, 47 U.S.C. 230.

[15] See Geoffrey A. Manne, Ben Sperry, and Kristian Stout, “Who Moderates the Moderators? A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet,” Rutgers Computer & Technology Law Journal 49, no. 1 (2022): 39–53, (detailing some of the history of how Section 230 immunity expanded and differs from First Amendment protections); Meghan Anand et al., “All the Ways Congress Wants to Change Section 230,” Slate, August 30, 2023, (tracking every proposal to amend or repeal Section 230); and Technology & Marketing Law Blog, website, (tracking all Section 230 cases with commentary).

[16] Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq.

[17] See US Federal Trade Commission, Fair Credit Reporting Act: 15 U.S.C. § 1681, May 2023, (detailing changes to the Fair Credit Reporting Act and its regulations over time).

[18] US Federal Reserve System, Consumer Financial Protection Bureau, “CFPB Launches Inquiry into the Business Practices of Data Brokers,” press release, May 15, 2023,

[19] US Federal Reserve System, Consumer Financial Protection Bureau, List of Consumer Reporting Companies, 2021, 8, (noting there are “three big nationwide providers of consumer reports”).

[20] See, for example, Erin A. O’Hara and Larry E. Ribstein, The Law Market (Oxford, UK: Oxford University Press, 2009); Erin A. O’Hara O’Connor and Larry E. Ribstein, “Conflict of Laws and Choice of Law,” in Procedural Law and Economics, ed. Chris William Sanchirico (Northampton, MA: Edward Elgar Publishing, 2012), in Encyclopedia of Law and Economics, 2nd ed., ed. Gerrit De Geest (Northampton, MA: Edward Elgar Publishing, 2009); and Bruce H. Kobayashi and Larry E. Ribstein, eds., Economics of Federalism (Northampton, MA: Edward Elgar Publishing, 2007).

[21] See F. A. Hayek, “The Use of Knowledge in Society,” American Economic Review 35, no. 4 (September 1945): 519–30,

[22] Henry N. Butler and Larry E. Ribstein, “Legal Process for Fostering Innovation” (working paper, George Mason University, Antonin Scalia Law School, Fairfax, VA), 2,

[23] See Henry N. Butler and Larry E. Ribstein, “The Single-License Solution,” Regulation 31, no. 4 (Winter 2008–09): 36–42,

[24] See Uniform Law Commission, “Acts Overview,”

[25] Utah Code Ann. § 13-63-101 et seq. (2023).

[26] O’Hara and Ribstein, The Law Market, 5.

[27] O’Hara and Ribstein, The Law Market, 5.

[28] O’Hara and Ribstein, The Law Market, 5.

[29] See Cristiano Lima-Strong, “The U.S.’s Sixth State Privacy Law Is Too ‘Weak,’ Advocates Say,” Washington Post, March 30, 2023,

[30] See, for example, William L. Cary, “Federalism and Corporate Law: Reflections upon Delaware,” Yale Law Journal 83, no. 4 (March 1974): 663–705, (arguing Delaware could export the costs of inefficiently lax regulation through the dominance of its incorporation statute).

[31] Jonathan R. Macey and Geoffrey P. Miller, “Toward an Interest-Group Theory of Delaware Corporate Law,” Texas Law Review 65, no. 3 (February 1987): 470, See also Daniel R. Fischel, “The ‘Race to the Bottom’ Revisited: Reflections on Recent Developments in Delaware’s Corporation Law,” Northwestern University Law Review 76, no. 6 (1982): 913–45,

[32] Macey and Miller, “Toward an Interest-Group Theory of Delaware Corporate Law.”

[33] Macey and Miller, “Toward an Interest-Group Theory of Delaware Corporate Law.”

[34] Regulation (EU) 2016/679 (General Data Protection Regulation), art. 23.

[35] Regulation (EU) 2016/679 (General Data Protection Regulation), arts. 85–91.

[36] Regulation (EU) 2016/679 (General Data Protection Regulation), art. 56.

[37] See the discussion in endnote 8.

[38] Carnival Cruise Lines v. Shute, 499 US 585 (1991).

[39] The Bremen v. Zapata, 407 US 1 (1972).

[40] O’Hara and Ribstein, The Law Market, 8.

[41] See Jim Harper, “Perspective: Utah’s Social Media Legislation May Fail, but It’s Still Good for America,” Deseret News, April 6, 2023,

[42] Utah Code Ann. § 13-63-401 (2023).

[43] See Bryan L. Adkins, Alexander H. Pepper, and Jay B. Sykes, Federal Preemption: A Legal Primer, Congressional Research Service, May 18, 2023,

[44] Congress should not interfere with interpretation of choice-of-law provisions. These issues are discussed in Tanya J. Monestier, “The Scope of Generic Choice of Law Clauses,” UC Davis Law Review 56, no. 3 (February 2023): 959–1018,

[45] Employee Retirement Income Security Act of 1974, 29 U.S.C. § 1144(a).

[46] Airline Deregulation Act, 49 U.S.C. § 41713(b).

[47] Federal Aviation Administration Authorization Act of 1994, 49 U.S.C. § 14501.

[48] Federal Railroad Safety Act, 49 U.S.C. § 20106.

Continue reading
Data Security & Privacy

What’s In a Name?: Common Carriage, Social Media, and the First Amendment

Scholarship Abstract Courts and legislatures have suggested that classifying social media as common carriers would make restrictions on their right to exclude users more constitutionally permissible . . .


Courts and legislatures have suggested that classifying social media as common carriers would make restrictions on their right to exclude users more constitutionally permissible under the First Amendment. A review of the relevant statutory definitions reveals that the statutes provide no support for classifying social media as common carriers. Moreover, the fact that a legislature may apply a label to a particular actor plays no significant role in the constitutional analysis. A further review of the elements of the common law definition of common carrier reveals that four of the purported criteria (whether the industry is affected with a public interest, whether the social media companies possess monopoly power, whether they are involved in the transportation and communication industries, and whether social media companies received compensating benefits) do not apply to social media and do not affect the application of the First Amendment. The only legitimate common law basis (whether an actor holds itself out as serving all members of the public without engaging in individualized bargaining) would again seem inapplicable to social media and have little bearing on the First Amendment. The weakness of these arguments suggests that advocates for limiting social media’s freedom to decide which voices to carry are attempting to gain some vague benefit from associating their efforts with common carriage’s supposed historical pedigree to avoid having to undertake the case-specific analysis demanded by the First Amendment’s established principles.

Continue reading
Innovation & the New Economy

A Law & Economics Approach to Social-Media Regulation

Popular Media The thesis of this essay is that policymakers must consider what the nature of social media companies as multisided platforms means for regulation. The balance . . .

The thesis of this essay is that policymakers must consider what the nature of social media companies as multisided platforms means for regulation. The balance struck by social media companies acting in response to the incentives they face in the market could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means to avoid perceived harms by users themselves would preserve the benefits of social media to society without the difficult tradeoffs of regulation. Part I will introduce the economics of multisided platforms like social media, and how this affects the incentives of these platforms. Social-media platforms, acting within the market process, are usually best positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part II will consider these situations where there are negative externalities due to social media and introduce the least-cost-avoider principle. Usually, social-media users are the least-cost avoiders of harms, but sometimes social-media companies are better placed to monitor and control harms. This involves a balance, as the threat of collateral censorship or otherwise reducing opportunities to speak and receive speech could result from social-media regulation. Part III will then apply the insights from Parts I and II to the areas of privacy, children’s online safety, and speech regulation.

I. Introduction

Policymakers at both the state and federal levels have been actively engaged in recent years with proposals to regulate social media, whether the subject is privacy, children’s online safety, or concerns about censorship, misinformation, and hate speech.[1] While there may not be consensus about precisely why social media is bad, there is broad agreement that the major online platforms are to blame for at least some harms to society. It is also generally recognized, though often not emphasized, that social media brings great value to its users. In other words, there are costs and benefits, and policymakers should be cautious when introducing new laws that would upset the balance that social-media companies must strike in order to serve their users well.

This essay will propose a general approach, informed by the law & economics tradition, to assess when and how social media should be regulated. Part I will introduce the economics of multisided platforms, and how those economics affect social-media platforms’ incentives. The platforms themselves, acting within the market process, are usually best-positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part II will consider such externalities and introduce the least-cost-avoider principle. Usually, social-media users are the least-cost avoiders of harms, but platforms themselves are sometimes better placed to monitor and control harms. This requires a balance, as social-media regulation raises the threat of collateral censorship or otherwise reducing opportunities to speak and receive speech. Part III will apply the insights from Parts I and II to the areas of privacy, children’s online safety, and speech regulation.

The thesis of this essay is that policymakers must consider what social-media companies’ status as multisided platforms means for regulation. The balance struck by social-media companies acting in response to the market incentives they face could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means to avoid perceived harms would allow users to preserve the benefits of social media without the difficult tradeoffs of regulation.

II. The Economics of Social-Media Platforms

Mutually beneficial trade is the bedrock of the market process. Entrepreneurs—including those that act through formal economic institutions like business corporations—seek to discover the best ways to serve consumers. Various types of entities help connect those who wish to buy products or services with those who are trying to sell them. Physical marketplaces, set up to facilitate interactions between buyers and sellers, are common around the world. If those marketplaces fail to serve the interests of those who use them, others will likely arise.

Social-media companies are a virtual example of what economists call multi-sided markets or platforms.[2] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multi-sided platforms have “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[3] In some situations, a platform may determine that it can raise revenue from one side of the platform only if demand on the other side is high. In such cases, the platform may choose to offer one side free access to boost that demand, subsidized by participants on the other side of the platform.[4] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.

In this sense, social-media companies are much like newspapers or television in that, by solving a transaction-cost problem,[5] these platforms bring together potential buyers and sellers by providing content to one side and access to consumers on the other. Recognizing that their value lies in reaching users, these platforms sell advertising and offer access to content at a lower price, often at a price of zero. In other words, advertisers subsidize platform users’ access to content.

Therefore, most social-media companies are free for users. Revenue is primarily collected from the other side of the platform—i.e., from advertisers. In effect, social-media companies are attention platforms: They supply content to users, while collecting data for targeted advertisements for businesses who seek access to those users. To be successful, social-media companies must keep enough (and the right type of) users engaged so as to maintain demand for advertising. Social-media companies must curate content that users desire in order to persuade them to spend time on the platform.

But unlike newspapers or television, social-media companies primarily rely on their users to produce content rather than creating their own. Thus, they must also consider how to attract and maintain high-demand content creators, as well as how to match user-generated content to the diverse interests of other users. If they fail to serve the interests of high-demand content creators, those users may leave the platform, thus reducing time spent on the platform by all users, which thereby reduces the value of advertising. Similarly, if they fail to match content to user interests, those users will be less engaged on the platform, reducing its value to advertisers.

Moreover, this means that social-media companies need to balance the interests of advertisers and other users. Advertisers may desire more data to be collected for targeting, but users may desire less data collection. Similarly, advertisers may desire more ads, while users may prefer fewer ads. Advertisers may prefer content that keeps users engaged on the platform, even if it is harmful for society, whether because it is false, hateful, or leads to mental-health issues for minors. On the other hand, brand-conscious advertisers may not want to run ads next to content with which they disagree. Moreover, users may not want to see certain content. Social-media companies need to strike a balance that optimizes their value, recognizing that losing participants on either side would harm the other.

Usually, social-media companies acting within the market process are going to be best-positioned to make decisions on behalf of their users. Thus, they may create community rules that restrict content that would, on net, reduce user engagement.[6] This could include limitations on hate speech and misinformation. On the other hand, if they go too far in restricting content that users consider desirable, that could reduce user engagement and thus value to advertisers. Social-media companies therefore compete on moderation policies, trying to strike the appropriate balance to optimize platform value. A similar principle applies when it comes to privacy policies and protections for minors: social-media companies may choose to compete by providing tools to help users avoid what they perceive as harms, while keeping users on the platform and maintaining value for advertisers.

There may, however, be scenarios where social media produces negative externalities[7] that are harmful to society. A market failure could result, for instance, if platforms have too great an incentive to allow misinformation or hate speech that keeps users engaged, to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful to minors and keeps them hooked on using the platform.

In sum, social-media companies are multi-sided platforms that facilitate interactions between advertisers and users by curating user-generated content that drives attention to their platforms. To optimize the platform’s value, a social-media company must keep users engaged. This will often include privacy policies, content-moderation standards, and special protections for minors. On the other hand, incentives could become misaligned and lead to situations where social-media usage leads to negative externalities due to insufficient protection of privacy, too much hate speech or misinformation, or harms to minors.

III. Negative Social-Media Externalities and the Least-Cost-Avoider Principle

In situations where there are negative externalities from social-media usage, there may be a case for regulation. Any case for regulation must, however, recognize the presence of transaction costs, and consider how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[8] and elaborated on in the subsequent literature,[9] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his machinery is a potential cost to the doctor next door, who consequently can’t use his office to conduct certain testing. Simultaneously, the doctor moving his office next door is a potential cost to the confectioner’s ability to use his equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the confectioner to reduce the sound of his machines.[10] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[11]
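To make the least-cost-avoider logic concrete, here is a stylized numerical version of the confectioner example; the dollar figures and the transaction-cost parameter $T$ are hypothetical, chosen purely for illustration and not drawn from Coase's article:

```latex
% Hypothetical payoffs: running the machinery is worth $100 to the
% confectioner, while the noise costs the doctor $150 in lost practice income.
\begin{align*}
G_C &= 100 && \text{(confectioner's gain from running the machine)}\\
H_D &= 150 && \text{(doctor's harm from the noise)}
\end{align*}
% Because $H_D > G_C$, the efficient outcome is for the machine to stop.
% With zero transaction costs, bargaining reaches that outcome regardless of
% the initial entitlement: if the confectioner holds the right, the doctor
% pays some price $p$ with $G_C < p < H_D$ to silence the machine, and both
% parties are better off. Now suppose striking that bargain costs $T = 60$.
% The gains from trade are only
\[
  H_D - G_C = 150 - 100 = 50 < 60 = T,
\]
% so no bargain occurs and the initial allocation is decisive: assigning the
% right to the confectioner locks in a \$50 social loss, while placing the
% burden on the party that can avoid the harm more cheaply yields the
% efficient result.
```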

Here, social-media companies create incredible value for their users, but they also arguably impose negative externalities in the form of privacy harms, misinformation and hate speech, and harms particular to minors. In the absence of transaction costs, the parties could simply bargain away the harms associated with social-media usage. But since there are transaction costs, it matters whether the burden to avoid harms is placed on the users or the social-media companies. If the burden is wrongly placed, it may end up that the societal benefits of social media will be lost.

For instance, imposing liability on social-media companies risks collateral censorship, which occurs when platforms decide that liability risk is too large and opt to over-moderate or not host user-generated content, or to restrict access to such content either by charging higher prices or excluding those who could be harmed (like minors).[12] By wrongly placing the burden to avoid harms on social-media platforms, societal welfare will be reduced.

On the other hand, there may be situations where social-media companies are the least-cost avoiders. They may be best-placed to monitor and control harms associated with social-media usage when it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[13] If, for example, a social-media company allows anonymous or pseudonymous use, with no realistic possibility of tracking down users who cause harms, illegal conduct could go undeterred. In such cases, placing the burden on social-media users could lead to social media imposing uncompensated harms on society.

Thus, it is important to determine whether the social-media companies or their users are the least-cost avoiders. Placing the burden on the wrong party or parties would harm societal welfare, either by reducing the value of social media or by creating more uncompensated negative externalities.

IV. Applying the Lessons of Law & Economics to Social-Media Regulation

Below, I will examine the areas of privacy, children’s online safety, and content moderation, and consider both the social-media companies’ incentives and whether the platforms or their users are the least-cost avoiders.

A. Privacy

As discussed above, social-media companies are multi-sided platforms that provide content to attract attention from users, while selling information collected from those users for targeted advertising. This leads to the possibility that social-media companies will collect too much information in order to increase revenue from targeted advertising. In other words, as the argument goes, the interests of the paying side of the platform will outweigh the interests of social-media users, thereby imposing a negative externality on them.

Of course, this assumes that the collection and use of information for targeted advertisements is considered a negative externality by social-media users. While this may be true for some, for others, it may be something they care little about or even value, because targeted advertisements are more relevant to them. Moreover, many consumers appear to prefer free content with advertising to paying a subscription fee.[14]

Negative externalities are more likely to arise, however, when users don’t know what data is being collected or how it is being used. Moreover, it is a clear harm if social-media companies misrepresent what they are collecting and how they are using it. Thus, it is generally unobjectionable—at least, in theory—for the Federal Trade Commission or another enforcer to hold social-media companies accountable for their privacy policies.[15]

On the other hand, privacy regulation that requires specific disclosures or verifiable consent before collecting or using data would increase the cost of targeted advertising, thus reducing its value to advertisers and thereby further reducing the platform’s incentive to curate valuable content for users. For instance, in response to the FTC’s consent agreement with YouTube charging that it violated the Children’s Online Privacy Protection Act (COPPA), YouTube required channel owners producing children’s content to designate their channels as such, along with automated processes designed to identify the same.[16] This reduced content creators’ ability to benefit from targeted advertising if their content was directed to children. The result was less content created for children, with poorer matching as well:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.

Alternatively, a social-media company could raise the price it charges to users, as it can no longer use advertising revenue to subsidize users’ access. This is, in fact, exactly what has happened in Europe, as Meta now offers an ad-free version of Facebook and Instagram for $14 a month.[18]

In other words, placing the burden on social-media companies to avoid the perceived harms from the collection and use of information for targeted advertising could lead to less free content available to consumers. This is a significant tradeoff, and not one that most social-media consumers appear willing to make voluntarily.

On the other hand, it appears that social-media users could avoid much of the harm from the collection and use of their data by using available tools, including those provided by social-media companies. For instance, most of the major social-media companies offer two-factor authentication, privacy-checkup tools, and the ability to browse the service privately, limit one’s audience, and download and delete one’s data.[19] Social-media users could also use virtual private networks (VPNs) to protect their data privacy while online.[20] Finally, users could simply not post private information, or could limit interactions with businesses (through likes or clicks on ads) if they want to reduce the amount of information used for targeted advertising.

B. Children’s Online Safety

Some have argued that social-media companies impose negative externalities on minors by serving them addictive content and/or content that results in mental-health harms.[21] They argue that social-media companies benefit from these harms because they are able to then sell data from minors to advertisers.

While it is true that social-media companies want to attract users through engaging content and interfaces, and that they make money through targeted advertising, it is highly unlikely that they are making much money from minors themselves. Very few social-media users under 18 have considerable disposable income or access to payment-card options that would make them valuable to advertisers. Thus, regulations that raise the costs to social-media companies of serving minors, whether through a regulatory duty of care[22] or through age verification and verifiable parental consent,[23] could lead social-media companies to invest more in excluding minors than in creating vibrant and safe online spaces for them.

Federal courts considering age-verification laws have noted there are costs to companies, as well as users, in obtaining this information. In Free Speech Coalition Inc. v. Colmenero,[24] the U.S. District Court in Austin, Texas, considered a law that required age verification before viewing online pornography, and found that the costs of obtaining age verification were high, citing the complaint’s description of “several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications.”[25] But just as importantly, the transaction costs in this example also include the subjective costs borne by those who actually go through with verifying their age to access pornography. As the court noted, “the law interferes with the Adult Video Companies’ ability to conduct business, and risks deterring adults from visiting the websites.”[26] Similarly, in NetChoice v. Griffin,[27] the U.S. District Court for the Western District of Arkansas found that a challenged law’s age-verification requirements were “costly” and would put social-media companies covered by the law in the position of needing to take drastic action to either implement age verification, restrict access for Arkansans, or face the possibility of civil and criminal enforcement.[28]

On the other hand, social-media companies—responding to demand from minor users and their parents—have also exerted considerable effort to reduce minors’ exposure to harmful content. For instance, they have invested in content-moderation policies and their enforcement, including through algorithms, automated tools, and human review, to remove, restrict, or add warnings to content inappropriate for minors.[29] On top of that, social-media companies offer tools to help minors and their parents avoid many of the harms associated with social-media usage.[30] There are also options available at the ISP, router, device, and browser level to protect minors while online. As the court put it in Griffin, “parents may rightly decide to regulate their children’s use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this.”[31]

In other words, parents and minors working together can use technological and practical means to make marginal decisions about social-media usage at a lower cost than a regulatory environment that would likely lead to social-media companies restricting use by minors altogether.[32]

C. Content Moderation

There have been warring allegations about social-media companies’ incentives when it comes to content moderation. Some claim that salacious misinformation and hate speech drive user engagement, making platforms more profitable for advertisers; others argue that social-media companies engage in too much “censorship” by removing users and speech in a viewpoint-discriminatory way.[33] The U.S. Supreme Court is currently reviewing laws from Florida and Texas that would force social-media companies to carry speech.[34]

Both views fail to take into account that social-media companies are largely just responding to the incentives they face as multi-sided platforms. Social-media companies are solving a Coasean speech problem, wherein some users don’t want to be exposed to certain speech from other users. As explained above, social-media companies must balance these interests by setting and enforcing community rules for speech. This may include rules against misinformation and hate speech. On the other hand, social-media companies can’t go too far in restricting high-demand speech, or they will risk losing users. Thus, they must strike a delicate balance.

Laws that restrict the “editorial discretion” of social-media companies may violate the First Amendment,[35] but they also reduce the companies’ ability to give their customers a valuable product in light of user (and advertiser) demand. For instance, the changes to the moderation standards of X (formerly Twitter) since its purchase by Elon Musk have led many users and advertisers to exit the platform due to a perceived increase in hate speech and misinformation.[36]

Social-media companies need to be free to moderate as they see fit, free from government interference. Such interference includes not just the forced carriage of speech, but also government efforts to engage in censorship-by-proxy, as has been alleged in Murthy v. Missouri.[37] From the perspective of the First Amendment, government intervention that coerces or significantly encourages the removal of disfavored speech, even in the name of combating misinformation, is just as harmful as the forced carriage of speech.[38] But more importantly for our purposes here, such government actions reduce platforms’ value by upsetting the balance that social-media companies strike with respect to their users’ speech interests.

Users can avoid being exposed to unwanted speech by averting their digital eyes from it—i.e., by refusing to interact with it and thereby training social-media companies’ algorithms to serve speech that they prefer. They can also take their business elsewhere by joining a social-media network with speech-moderation policies more to their liking. Voting with one’s digital feet (and eyes) is a much lower-cost alternative than either mandating the carriage of speech or censorship by government actors.

V. Conclusion

Social-media companies are multisided platforms that must curate compelling content while restricting harms to users in order to optimize their value to the advertisers that pay for access. This doesn’t mean they always get it right. But they are generally best-positioned to make those decisions, subject to the market process. Sometimes, there may be negative externalities that aren’t fully internalized. But as Coase taught us, that is only the beginning of the analysis. If social-media users can avoid harms at lower cost than social-media companies, then regulation should not place the burden on social-media companies. There are tradeoffs in social-media regulation, including the possibility that it will result in a less-valuable social-media experience for users.

[1] See e.g. Mary Clare Jalonick, Congress eyes new rules for tech, social media: What’s under consideration, Associated Press (May 8, 2023); Khara Boender, Jordan Rodell, & Alex Spyropoulos, The State of Affairs: What Happened in Tech Policy During 2023 State Legislative Sessions?, Project Disco (Jul. 25, 2023), (noting laws passed and proposed addressing consumer data privacy, content moderation, and children’s online safety at the state level).

[2] See e.g. Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[3] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, available at

[4] For instance, many nightclubs hold “Ladies Night” where ladies get in free in order to attract more men who pay for entrance.

[5] Transaction costs are the additional costs borne in the process of buying or selling, separate and apart from the price of the good or service itself — i.e. the costs of all actions involved in an economic transaction. Where transaction costs are present and sufficiently large, they may prevent otherwise beneficial agreements from being concluded.

[6] See David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L. J. 1201 (2012); Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 HARV. L. REV. 1598 (2018).

[7] An externality is a side effect of an activity that is not reflected in the cost of that activity — basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action.

[8] See R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[9] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[10] See Coase, supra note 8, at 8-10.

[11] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[12] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12 2023),

[13] See Geoffrey A. Manne, Kristian Stout & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023); Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023),

[14] See, e.g., Matt Kaplan, What Do U.S. Consumers Think About Mobile Advertising?, InMobi (Dec. 15, 2021), (55% of consumers agree or strongly agree that they prefer mobile apps with ads rather than paying to download apps); John Glenday, 65% of US TV viewers will tolerate ads for free content, according to report, The Drum (Apr. 22, 2022), (noting that a report from TiVO found 65% of consumers prefer free TV with ads to paying without ads). Consumers often prefer lower subscription fees with ads to higher subscription fees without ads as well. See e.g. Toni Fitzgerald, Netflix Gets it Right: Study Confirms People Prefer Paying Less With Ads, Forbes (Apr. 25, 2023),

[15] See 15 U.S.C. § 45.

[16] See Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, at 6-7, SSRN (Apr. 26, 2023),

[17] Id. at 1.

[18] See Sam Schechner, Meta Plans to Charge $14 a Month for Ad-Free Instagram or Facebook, Wall Street J. (Oct. 3, 2023),

[19] See Christopher Lin, Tools to Protect Your Privacy on Social Media, NetChoice (Nov. 16, 2023),

[20] See e.g. Chris Stobing, The Best VPN Services for 2024, PC Mag (Jan. 4, 2024),

[21] See e.g. Jonathan Stempel, Diane Bartz & Nate Raymond, Meta’s Instagram linked to depression, anxiety, insomnia in kids – US state’s lawsuit, Reuters (Oct. 25, 2023), (describing complaint from 33 states alleging Meta “knowingly induced young children and teenagers into addictive and compulsive social media use”).

[22] See e.g. California Age-Appropriate Design Code Act, AB 2273 (2022); Kids Online Safety Act, S. 1409, 118th Cong. (2023), as amended and posted by the Senate Committee on Commerce, Science, and Transportation on July 27, 2023, available at (last accessed Dec. 19, 2023).

[23] See e.g. Arkansas Act 689 of 2023, the “Social Media Safety Act.”

[24] Free Speech Coal. Inc. v. Colmenero, No. 1:23-CV-917-DAE, 2023 U.S. Dist. LEXIS 154065 (W.D. Tex., Aug. 31, 2023), available at

[25] Id. at 10.

[26] Id.

[27] NetChoice, LLC v. Griffin, Case No. 5:23-CV-05105 (W.D. Ark., Aug. 31, 2023), available at

[28] See id. at 23.

[29] See id. at 18-19.

[30] See id. at 19-20.

[31] Id. at 15.

[32] For more, see Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes, at 23 (ICLE Issue Brief, Nov. 9, 2023),

[33] For an example of a hearing where Congressional Democrats argue the former and Congressional Republicans argue the latter, see Preserving Free Speech and Reining in Big Tech Censorship, Libr. of Cong. (Mar. 28, 2023),

[34] See Moody v. NetChoice, No. 22-555 (challenging Florida’s SB 7072); NetChoice v. Paxton, No. 22-277 (challenging Texas’s HB 20).

[35] See e.g. Brief of International Center for Law & Economics as Amicus Curiae in Favor of Petitioners in 22-555 and Respondents in 22-277, Moody v. NetChoice, NetChoice v. Paxton, In the Supreme Court of the United States (Dec. 7, 2023), available at

[36] See e.g. Ryan Mac & Tiffany Hsu, Twitter’s U.S. Ad Sales Plunge 59% as Woes Continue, New York Times (Jun. 5, 2023), (“Six ad agency executives who have worked with Twitter said their clients continued to limit spending on the platform. They cited confusion over Mr. Musk’s changes to the service, inconsistent support from Twitter and concerns about the persistent presence of misleading and toxic content on the platform.”); Kate Conger, Tiffany Hsu & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, New York Times (Nov. 1, 2022), (“At the same time, advertisers — which provide about 90 percent of Twitter’s revenue — are increasingly grappling with Mr. Musk’s ownership of the platform. The billionaire, who is meeting advertising executives in New York this week, has spooked some advertisers because he has said he would loosen Twitter’s content rules, which could lead to a surge in misinformation and other toxic content.”).

[37] See Murthy v. Missouri, No. 23A-243; see also Missouri v. Biden, No. 23-30445, slip op. (5th Cir. Sept. 8, 2023).

[38] See Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social Media Platforms, (ICLE White Paper Sept. 22, 2023), forthcoming 59 Gonz. L. Rev. (2023), available at



Continue reading
Innovation & the New Economy

NetChoice, the Supreme Court, and the State Action Doctrine

TOTM George Orwell’s “Nineteen Eighty-Four” is frequently invoked when political actors use language to obfuscate what they are doing. Ambiguity in language can allow both sides . . .

George Orwell’s “Nineteen Eighty-Four” is frequently invoked when political actors use language to obfuscate what they are doing. Ambiguity in language can allow both sides to appeal to the same words, like “the First Amendment” or “freedom of speech.” In a sense, the arguments over online speech currently before the U.S. Supreme Court really amount to a debate about whether private actors can “censor” in the same sense as the government.

In the oral arguments in this week’s NetChoice cases, several questions from Justices Clarence Thomas and Samuel Alito suggested that they believed social-media companies engaged in “censorship,” conflating the right of private actors to set rules for their property with government oppression. This is an abuse of language, and completely inconsistent with Supreme Court precedent that differentiates between state and private action.

Read the full piece here.

Continue reading
Innovation & the New Economy

More of a Declaration than a Constitution

Popular Media Times are rough in West Philadelphia. Between the ouster of our president at Penn and billionaire donors taking their money elsewhere, I have never been so relieved that . . .

Times are rough in West Philadelphia. Between the ouster of our president at Penn and billionaire donors taking their money elsewhere, I have never been so relieved that most of America can’t quite tell the difference between Penn and Penn State.  Although higher education seems to be in turmoil nationwide, the situation feels particularly dire here.

Read the full piece here.

Continue reading

ICLE Amicus to the 9th Circuit in NetChoice v Bonta

Amicus Brief INTEREST OF AMICUS CURIAE[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center that builds intellectual foundations . . .


INTEREST OF AMICUS CURIAE

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center that builds intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating law and policy.

ICLE has an interest in ensuring that First Amendment law promotes the public interest by remaining grounded in sensible rules informed by sound economic analysis. ICLE scholars have written extensively on issues related to Internet regulation and free speech, including the interaction of privacy rules and the First Amendment.


SUMMARY OF ARGUMENT

While the District Court issued a preliminary injunction against California’s Age-Appropriate Design Code (AADC), it did so under the commercial speech standard of intermediate scrutiny. Below we argue that the Ninth Circuit should affirm the District Court’s finding that plaintiffs are likely to succeed on the merits in their First Amendment claim, but also make clear that the AADC’s rules that have the effect of restricting the access of minors to lawful speech should be subject to strict scrutiny.

The First Amendment protects an open marketplace of ideas. 303 Creative LLC v. Elenis, 600 U.S. 570, 143 S. Ct. 2298, 2311 (2023) (“‘[I]f there is any fixed star in our constitutional constellation,’ it is the principle that the government may not interfere with ‘an uninhibited marketplace of ideas.’”) (quoting West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943) and McCullen v. Coakley, 573 U.S. 464, 476 (2014)). In fact, the First Amendment protects speech in this marketplace whether the “government considers… speech sensible and well intentioned or deeply ‘misguided,’ and likely to cause ‘anguish’ or ‘incalculable grief.’”  303 Creative, 143 S. Ct. at 2312 (quoting Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 574 (1995) and Snyder v. Phelps, 562 U.S. 443, 456 (2011)).

The protection of the marketplace of ideas necessarily includes the creation, distribution, purchasing, and receiving of speech. See Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 792 n.1 (2011) (“Whether government regulation applies to creating, distributing, or consuming speech makes no difference” for First Amendment purposes). In other words, it protects both the suppliers in the marketplace of ideas (creators and distributors), and the consumers (purchasers and receivers).

No less than other speakers, profit-driven firms involved in the creation or distribution of speech are protected by the First Amendment. See 303 Creative LLC v. Elenis, 600 U.S. 570, 600 (2023) (“[T]he First Amendment extends to all persons engaged in expressive conduct, including those who seek profit.”). This includes Internet firms that provide speech platforms. See Reno v. ACLU, 521 U.S. 844, 870 (1997); NetChoice, LLC v. Moody, 34 F.4th 1196, 1213 (11th Cir. 2022).

Even minors have a right to participate in the marketplace of ideas, including as purchasers and receivers. See Brown, 564 U.S. at 794-95 (government has no “free-floating power to restrict ideas to which children may be exposed”). This includes the use of online speech platforms. See NetChoice, LLC v. Griffin, 2023 WL 5660155, at *17 (W.D. Ark. Aug. 31, 2023) (finding Arkansas’s Act 689 “obviously burdens minors’ First Amendment rights” by “bar[ring] minors from opening accounts on a variety of social media platforms”).

This is important because online firms, especially those primarily involved in curating and creating content, are central to the modern marketplace of ideas. See Packingham v. North Carolina, 582 U.S. 98, 107 (2017) (describing the Internet as “the modern public square” where citizens can “explor[e] the vast realms of human thought and knowledge”).

Online firms primarily operate as what economists call “matchmakers” or “multisided platforms.” See David Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms 10 (2016). “[M]atchmakers’ raw materials are the different groups of customers that they help bring together. And part of the stuff they sell to members of each group is access to members of the other groups. All of them operate physical or virtual places where members of these different groups get together.  For this reason, they are often called multisided platforms.” Id. In this sense, they are very similar to newspapers and cable operators in attempting to attract attention through interesting content so that advertisers can reach them.

Online platforms bring together advertisers and users—including both speakers and listeners—by curating third-party speech as well as by producing their own content. The goal is to keep users engaged so advertisers can reach them. For many online platforms, advertisers cross-subsidize access to content for users, to the point that it is often free. Online platforms are in this sense “attention platforms,” which supply content to their users while collecting data for targeted advertisements for the businesses that pay for access to those users. To be successful, online platforms must keep enough—and the right type of—users engaged so as to maintain demand for advertising. But if platforms fail to curate and produce interesting content, users will use them less or leave altogether, making it less likely that advertisers will invest in these platforms.

The First Amendment protects this business model because it allows entities that have legally obtained data to use it both for the curation of speech for their users and for targeted advertising. See Sorrell v. IMS Health, Inc., 564 U.S. 552, 570-71 (2011) (finding that there is a “strong argument” that “information is speech for First Amendment purposes” and striking down a law limiting the ability of marketers to use prescriber-identifying information for pharmaceutical sales). The First Amendment also protects the gathering of information when it is “inherently expressive.” Cf. Project Veritas v. Schmidt, 72 F.4th 1043, 1055 (9th Cir. 2023) (citing cases that have found the act of filming or recording to be inherently expressive activity). The gathering of online data for curation and targeted advertising is as inherently expressive as the act of filming or recording is for creating media.

Moreover, due to the nature of online speech platforms, the collection and use of data is “inextricably intertwined” with the curation of protected, non-commercial speech. Cf. Riley v. Nat’l Fed’n of the Blind of N.C., 487 U.S. 781, 796 (1988); Dex Media West, Inc. v. City of Seattle, 696 F.3d 952, 958 (9th Cir. 2012).

By restricting the use of data, the AADC will prevent online platforms from being able to tailor their products to their users, resulting in less relevant—and, in the case of minors, less appropriate—content. Online platforms may also be less able to monetize effectively through targeted advertisements. Either way, platforms may be required to change their business model, whether by switching to subscriptions or by excluding anyone who could possibly be a minor. Thus, restrictions on the collection and use of data for the curation of content and targeted advertising should be subject to strict scrutiny, as the result of such restrictions will be to restrict minors’ access to lawful online speech.

Under strict scrutiny, California bears the burden of showing it has a compelling governmental interest and that the restriction on speech is narrowly tailored to that interest. It can do neither.

First, California fails to establish a compelling government interest because it has failed to “identify an ‘actual problem’ in need of solving.” Brown, 564 U.S. at 799 (quoting United States v. Playboy Entertainment Group, Inc., 529 U.S. 803, 822-23 (2000)). There is no more evidence of a direct causal link between the use of online platforms subject to the AADC and harm to minors than there was from the video games at issue in Brown. Cf. id. at 799-801. In fact, the best available data does “not support the conclusion that social media causes changes in adolescent health at the population level.” See Nat’l Acad. Sci. Engineering & Med., Social Media and Adolescent Health at 92 (2023).

Second, California’s law is not narrowly tailored because the requirements that restrict minors’ access to lawful content are not the least restrictive means for protecting minors from potentially harmful content. Cf. Playboy, 529 U.S. at 823-25 (finding the voluntary use of blocking devices to restrict access to adult channels less restrictive than mandating the times such content may be made available); Ashcroft v. ACLU, 542 U.S. 656, 667-70 (2004) (finding filtering software a less restrictive alternative than age verification). Parents and minors have technological and practical means available to them that could allow them to avoid the putative harms of Internet use without restricting the access of others to lawful speech. Government efforts to promote the creation and use of such tools are a less restrictive way to promote the safety of minors online.

In sum, the AADC is unconstitutional because it would restrict the ability of minors to participate in the marketplace of ideas. The likely effects of the AADC on covered businesses will be to bar or severely restrict minors’ access to lawful content.


California has argued that the AADC regulates only “conduct” or “economic activity” or “data” and thus should not be subject to First Amendment scrutiny. See Ca. Brief at 28. But NetChoice is correct to emphasize that the AADC is content-based, as it is designed to prevent minors from being subject to certain kinds of “harmful” First Amendment-protected speech. See NetChoice Brief at 39-41. As such, the AADC’s rules should be subject to strict scrutiny. In this brief we emphasize a separate reason that the AADC should be subject to strict scrutiny: the restrictions on data gathering for curation of speech and targeted advertising will inevitably lead to less access to lawful online speech platforms for minors.

In Part I we argue that gathering data for the curation of speech and targeted advertising is protected by the First Amendment. In Part II we argue that the collection of data for those purposes is inextricably linked, and thus the AADC’s restrictions on the collection of data for those purposes should be subject to strict scrutiny. In Part III we argue that the AADC fails strict scrutiny, both for a lack of a compelling government interest and because its restrictions are not narrowly tailored.


I. Gathering Data for the Curation of Speech and Targeted Advertising Is Protected by the First Amendment

Online platforms attract users by curating content and presenting it in an engaging way. To do this effectively requires data. Moreover, that same data is useful for targeted advertising, which is the primary revenue source for most online platforms, given their multisided nature. This is a protected business model under First Amendment principles.

First, display decisions by communications platforms about how best to present information to their users are protected by the First Amendment. Cf. Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974) (“The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment.”). Limitations on the right of a communications platform to curate its own content come only from the marketplace of ideas itself: “The power of a privately owned newspaper to advance its own political, social, and economic views is bounded by… the acceptance of a sufficient number of readers—and hence advertisers—to assure financial success.” Id. at 255 (quoting Columbia Broad. Sys., Inc. v. Democratic Nat’l Comm., 412 U.S. 94, 117 (1973) (plurality)).

Second, the use of data for commercial purposes is protected by the First Amendment. See Sorrell, 564 U.S. at 567 (“While the burdened speech results from an economic motive, so too does a great deal of vital expression.”). No matter how much California wishes it were so, the AADC’s restrictions on the “sales, transfer, and use of” information are not simply regulation of economic activity. Cf. id. at 750. On the contrary, the Supreme Court “has held the creation and dissemination of information are speech within the meaning of the First Amendment.” Id. Among the protected uses of data is creating tailored content, including marketing. See id. at 557-58 (describing the use of “detailing,” whereby drug salespersons use the prescribing history of doctors to present a particular sales message).

Third, even the collection of information can be protected First Amendment activity. For instance, in Project Veritas, this court found that an audio or video recording “qualifies as speech entitled to the protection of the First Amendment.” See 72 F.4th at 1054. This is because the act of recording itself is “inherently expressive.” Id. at 1055. Recording is necessary to create the speech at issue.

Applying these principles here leads to the conclusion that the targeted advertising-supported business model of online platforms is protected by the First Amendment. Online platforms have a right to determine what to curate and how to display that content on its platform, as they seek to discover whether it serves its users and advertisers in the marketplace of ideas, much like the newspaper in Tornillo. Using data to better curate content to users and to offer them more relevant advertisements is protected, as in Sorrell. And the collection of data to curate speech and offer them targeted advertisements is as “inherently expressive” as the act of recording is for making a video in Project Veritas.


II. The AADC’s Restrictions on Data Collection for Curation and Targeted Advertising Are Subject to Strict Scrutiny

The question remains what level of scrutiny the AADC’s restrictions on data collection for curation and targeted advertising should face. The District Court applied only intermediate scrutiny, assuming that this was commercial speech. See Op. at 10-11 (in part because the AADC’s provisions fail intermediate scrutiny anyway). But the court noted that if expression involved commercial and non-commercial speech that is “inextricably intertwined,” then strict scrutiny would apply. See id. at 10. This is precisely the case, as online multisided platforms must have data both to effectively curate content and to offer targeted advertisements, which subsidize users’ access. Targeted advertising is inextricably intertwined with the free or reduced-price access of users to these online platforms.

Over time, courts have gained more knowledge of how multisided platforms work, specifically in the antitrust context. See Ohio v. American Express, 138 S. Ct. 2274, 2280-81 (2018) (describing how credit card networks work). But this also has important relevance in the First Amendment context where advertisements often fund the curation of content.

For instance, in Dex Media West, this court considered yellow page directories and found that the protected speech of the phonebooks (i.e. telephone numbers) was inextricably intertwined with the advertisements that help fund it. See 696 F.3d at 956-65. The court found the “[e]conomic reality” that “yellow pages directories depend financially upon advertising does not make them any less entitled to protection under the First Amendment.” Id. at 963-64. The court rejected the district court’s conclusion that “economic dependence was not sufficient to intertwine commercial and noncommercial elements of the publication,” id. at 964, as the same could be said of television stations or newspapers as well, but they clearly receive full First Amendment protection for their speech. The court concluded that:

Ultimately, we do not see a principled reason to treat telephone directories differently from newspapers, magazines, television programs, radio shows, and similar media that does not turn on an evaluation of their contents. A profit motive and the inclusion or creation of noncommercial content in order to reach a broader audience and attract more advertising is present across all of them. We conclude, therefore, that the yellow pages directories are entitled to full First Amendment protection. Id. at 965.

Here, this means the court should consider the interconnected nature of the free or reduced-price access to online content and targeted advertising that is empowered by data collection. Online platforms are, in this sense, indistinguishable “from newspapers, magazines, television programs, radio shows, and similar media…” that curate “noncommercial content in order to reach a broader audience and attract more advertising.” Id. The only constitutional limits on platforms’ editorial discretion arise from the marketplace of ideas itself. Cf. Tornillo, 418 U.S. at 255.

To find otherwise would have detrimental effects on this business model. Without data collection, online platforms will serve users not only less relevant content but also less relevant advertising. This will make the platforms less lucrative for advertisers and place upward pricing pressure on the user side of online platforms. Online platforms will be forced to change their business models, either by charging fees (or raising them) for access or by excluding those users subject to the regulation. Excluding minors from accessing lawful speech clearly implicates the First Amendment and is subject to strict scrutiny. Cf. Brown, 564 U.S. at 794-95, 799 (the Act “is invalid unless California can demonstrate that it passes strict scrutiny”).


The District Court determined that the AADC’s provisions would fail under either intermediate or strict scrutiny. This court should affirm the district court, but also make clear that strict scrutiny applies.

A. There Is No Compelling Government Interest

Under strict scrutiny, the government must “specifically identify an ‘actual problem’ in need of solving.” Brown, 564 U.S. at 799 (quoting Playboy, 529 U.S. at 822-23).

In Brown, the Supreme Court found that California’s evidence linking exposure to violent video games and harmful effects on children was “not compelling” because it did “not prove that violent video games cause minors to act aggressively.” Id. at 800 (emphasis in original). At best, there was a limited correlation that was “indistinguishable from effects produced by other media” not subject to the rules. Id. at 800-01.

The same is true here. The literature on the relationship between Internet use and harm to minors simply does not establish causation.

For instance, the National Academies of Science, Engineering, and Medicine has noted that there are both benefits and harms from social media use for adolescents. Nat’l Acad. Sci. Engineering & Med., Social Media and Adolescent Health at 4 (2023) (“[T]he use of social media, like many things in life, may be a constantly shifting calculus of the risky, the beneficial, and the mundane.”). There are some studies that show a very slight correlation between “problematic social media use” and mental health harms for adolescents. See Holly Shannon, et al., Problematic Social Media Use in Adolescents and Young Adults: Systematic Review and Meta-analysis, 9 JMIR Mental Health 1, 2 (2022) (noting “problematic use characterizes individuals who experience addiction-like symptoms as a result of their social media use”). But the “links between social media and health are complex.” Social Media and Adolescent Health at 89.

The reasons for this complexity include the direction of the relationship (i.e., does social media usage make a person depressed, or does a depressed person use more social media?) and the possibility that both social media usage and mental health issues are influenced by some other variable or variables. Moreover, it is nearly impossible to find a control group that has not been exposed to social media. As a result, the National Academies’ extensive review of the literature “did not support the conclusion that social media causes changes in adolescent health at the population level.” Id. at 92.

The AADC applies to far more than just social media, however, extending to any “online service, product, or feature” that is “likely to be accessed by children.” See Cal. Civ. Code § 1798.99.30(b)(4). There is little evidence that general Internet usage is correlated with harm to minors. According to one survey of the international literature, the prevalence of “Problematic Internet Use” among adolescents ranges anywhere from 4% to 20%. See Juan M. Machimbarrena et al., Profiles of Problematic Internet Use and Its Impact on Adolescents’ Health-Related Quality of Life, 16 Int’l J. Environ. Res. Public Health 1, 2 (2019). That only a minority of adolescents exhibit harmful use, while the AADC reaches all covered services and users, suggests the Act is overinclusive. Cf. Brown, 564 U.S. at 805 (Even when government ends are legitimate, if “they affect First Amendment rights they must be pursued by means that are neither seriously underinclusive nor seriously overinclusive.”).

Moreover, the rules at issue are also underinclusive, even assuming there were a causal link. The AADC does not extend to the same content offline: content likely to be accessed by children, even if also supported by advertising, is not subject to these regulations when delivered through offline media. California has offered no reason to think that accessing the same content while receiving advertising offline would be less harmful to minors. Cf. Brown, 564 U.S. at 801-02 (“California has (wisely) declined to restrict Saturday morning cartoons, the sale of games rated for young children, or the distribution of guns. The consequence is that its regulation is wildly underinclusive when judged against its asserted justification, which in our view is alone enough to defeat it.”).

In sum, California has not established a compelling state interest in protecting minors from harm allegedly associated with Internet usage.

B. The AADC Is Not Narrowly Tailored

Even assuming there is a compelling state interest in protecting minors from harms online, the AADC’s provisions restricting the collection and use of data for curating speech and targeted advertising are not narrowly tailored to that end. They are much more likely to lead to the complete exclusion of minors from online platforms, forgoing the many benefits of Internet usage. See Social Media and Adolescent Health at 4-5 (listing benefits of social media usage for adolescents). A less restrictive alternative would be promoting the use of practical and technological means by parents and minors to avoid the harms associated with Internet usage, or to avoid specifically harmful forms of Internet use.

For instance, the AADC requires covered online platforms to “[e]stimate the age of child users with a reasonable level of certainty appropriate to the risks” or “apply the privacy and data protections afforded to children” under the Act to “all consumers.” Cal. Civ. Code § 1798.99.31(a)(5). These privacy and data protections would severely limit by default the curation of speech and targeted advertising. See Cal. Civ. Code § 1798.99.31(a)(6); (b)(2)-(4). This would reduce the value of the online platforms to all users, who would receive less relevant content and advertisements.

Rather than leading to more privacy protection for minors, such a provision could result in more privacy-invasive practices or the exclusion of minors from the benefits of online platforms altogether. There is simply no foolproof method for estimating a user’s age.

Platforms typically use one of four methods: self-declaration, user-submitted hard identifiers, third-party attestation, and inferential age assurance. See Scott Babwah Brennen & Matt Perault, Keeping Kids Safe Online: How Should Policymakers Approach Age Verification?, at 4 (The Ctr. for Growth and Opportunity at Utah State University and University of North Carolina Ctr. on Tech. Pol’y Paper, Jun. 2023). Each method comes with tradeoffs. While self-declaration allows users to simply lie about their age, other methods can be quite privacy-invasive. For instance, requiring users to submit hard identifiers, like a driver’s license or passport, may enable platforms to more accurately assess age in some circumstances and may make it more difficult for minors to fabricate their age, but it also poses privacy and security risks. It requires platforms to collect and process sensitive data, requires platforms to develop expertise in ID verification, and may create barriers to access for non-minor users who lack an acceptable form of identification. Courts have consistently found age-verification requirements to be an unconstitutional barrier to access to online content. See Ashcroft v. ACLU; NetChoice, LLC v. Griffin; NetChoice v. Yost, 2024 WL 555904 (S.D. Ohio, Feb. 12, 2024); Free Speech Coal., Inc. v. Colmenero, 2023 WL 5655712, at *15-16 (W.D. Tex. Aug. 31, 2023) (available age verification services “amplif[y]” privacy concerns and “exacerbate[]” “First Amendment injury,” including chilling effect).

But even age assurance or age estimation comes with downsides. For instance, an online platform could use AI systems to estimate age based on an assessment of the content and behavior associated with a user. But to develop this estimate, platforms must implement technical systems to collect, review, and process user data, including minors’ data. These methods may also result in false positives, where a platform reaches an inaccurate determination that a user is underage, which would result in a different set of privacy defaults under the AADC. See Cal. Civ. Code § 1798.99.31(a)(6); (b)(2)-(4). Errors are sufficiently common that some platforms have instituted appeals mechanisms so that users can contest an age-related barrier. See, e.g., Minimum age appeals on TikTok, TikTok, (last accessed Feb. 12, 2024). Not only is the development of such mechanisms costly to online platforms, but it is also potentially very costly to those users who are mislabeled.

Another possibility is that online platforms may restrict access by users who they have any reason to believe to be minors to avoid significantly changing their business models predicated on curation and targeted advertising. Cf. Op. at 8 (noting evidence that “age-based regulations would ‘almost certain[ly] [cause] news organizations and others [to] take steps to prevent those under 18 from accessing online news content, features, or services.’”) (quoting Amicus Curiae Br. of New York Times Co. & Student Press Law Ctr. at 6).

The reason this is likely flows from the economics of multisided markets discussed above. Restricting the already limited expected revenue from minors through limits on targeted advertising, combined with strong civil penalties for failing to comply with the AADC’s provisions with respect to minors, will encourage online platforms to simply exclude them altogether. See Cal. Civ. Code § 1798.99.35(a) (authorizing penalties of up to $7,500 per “affected child”).

Much less restrictive alternatives are possible. California could promote online education for both minors and parents which would allow them to take advantage of widely available technological and practical means to avoid online harms. Cf. Ashcroft, 542 U.S. at 666-68 (finding filtering software is a less restrictive alternative than age verification to protect minors from inappropriate content). Investing in educating the youth in media literacy could be beneficial for avoiding harms associated with problematic Internet use. See Social Media and Adolescent Health at 8-10 (arguing for training and education so young people can be empowered to protect themselves).

If anything, there are more technological means than ever for parents and minors to work together to avoid online harms. For instance, tools to monitor and limit how minors use the Internet are already available from cell carriers and broadband providers, on routers and devices, from third-party applications, and even from online platforms themselves. See Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes, at 20-21 (ICLE Issue Brief 2023-11-09). Even when it comes to privacy, educating parents and minors on how to protect their information online would be a less restrictive alternative than restricting the use of data collection for targeted advertising.


The free marketplace of ideas is too important to be restricted, even in the name of protecting children. Minors must be able to benefit from the modern public square that is the Internet. The AADC would throw “the baby out with the bathwater.” Op. at 16. The court should affirm the judgment of the district court.

[1] All parties have consented to the filing of this brief.  See Fed. R. App. P. 29(a)(2).  No counsel for any party authored this brief in whole or in part, no party or party’s counsel has contributed money intended to fund the preparation or submission of the brief, and no individual or organization contributed funding for the preparation and submission of the brief.  See id. 29(a)(4)(E).

Innovation & the New Economy

March-Right-on-In Rights?

TOTM The National Institute for Standards and Technology (NIST) published a request for information (RFI) in December 2023 on its “Draft Interagency Guidance Framework for Considering . . .

The National Institute for Standards and Technology (NIST) published a request for information (RFI) in December 2023 on its “Draft Interagency Guidance Framework for Considering the Exercise of March-In Rights.” It’s quite something, if not in a good way.

Read the full piece here.

Intellectual Property & Licensing

ICLE’s Amicus Briefs on the Future of Online Speech

TOTM Over the past few months, we at the International Center for Law & Economics (ICLE) have endeavored to bring the law & economics methodology to . . .

Over the past few months, we at the International Center for Law & Economics (ICLE) have endeavored to bring the law & economics methodology to the forefront of several major public controversies surrounding online speech. To date, ICLE has engaged these issues by filing two amicus briefs before the U.S. Supreme Court, and another in Ohio state court.

The basic premise we have outlined is that online platforms ought to retain the right to engage in the marketplace of ideas by exercising editorial discretion, free from government interference. A free marketplace of ideas best serves both the users of these platforms, and society at-large.

Read the full piece here.

Innovation & the New Economy