
ICLE Files Amicus in NetChoice Social-Media Regulation Cases


Through our excellent counsel at Yetter Coleman LLP, the International Center for Law & Economics (ICLE) filed an amicus brief with the U.S. Supreme Court in the Moody v. NetChoice and NetChoice v. Paxton cases. In it, we argue that the First Amendment’s protection of the “marketplace of ideas” requires allowing private actors—like social-media companies—to set speech policies for their own private property. Social-media companies are best-placed to balance the speech interests of their users, a process that requires considering both the benefits and harms of various kinds of speech. Moreover, the First Amendment protects their ability to do so, free from government intrusion, even when that intrusion is justified by an attempt to classify social media as common carriers.

Read the full piece here.


Brief of ICLE in Moody v NetChoice, NetChoice v Paxton


Interest of Amicus[1]

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center that builds intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating law and policy.

ICLE has an interest in ensuring that First Amendment law promotes the public interest by remaining grounded in sensible rules informed by sound economic analysis. ICLE scholars have written extensively on issues related to social media regulation and free speech. See, e.g., Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L. J. 26 (2022); Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social-Media Platforms, 59 Gonzaga L. Rev., forthcoming (2023); Br. of Internet Law Scholars, Gonzalez v. Google; Jamie Whyte, Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?, ICLE White Paper (Sep. 2021); Ben Sperry, An L&E Defense of the First Amendment’s Protection of Private Ordering, Truth on the Market (Apr. 23, 2021); Liability for User-Generated Content Online: Principles for Lawmakers (Jul. 11, 2019).

Statement

The pair of NetChoice cases before the Court presents the opportunity to bolster the Court’s longstanding jurisprudence on state action and editorial discretion by affirming that the First Amendment applies to Internet speech without disfavor. See Reno v. ACLU, 521 U.S. 844, 870 (1997) (finding “no basis for qualifying the level of First Amendment scrutiny that should be applied” to the Internet).

The First Amendment protects social media companies’ rights to exercise their own content moderation policies free from government interference. Social media companies are private actors with the same right to editorial discretion over disseminating third-party speech as offline equivalents like newspapers and cable operators. See Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1926 (2019); Mia. Herald Publ’g Co. v. Tornillo, 418 U.S. 241 (1974); Turner Broad. Sys. v. FCC, 512 U.S. 622 (1994).

Consistent with that jurisprudence, the Court should conclude that social media companies are private actors fully capable of taking part in the marketplace of ideas through their exercise of editorial discretion, free from government interference.

Summary of Argument

“The most basic of all decisions is who shall decide.” Thomas Sowell, Knowledge and Decisions 40 (2d ed. 1996). Under the First Amendment, the general rule is that private actors get to decide what speech is acceptable. It is not the government’s place to censor speech or to require private actors to open their property to unwanted speech. The market process determines speech rules on social media platforms[2] just as it does in the offline world.

The animating principle of the First Amendment is to protect this “marketplace of ideas.” “The theory of our Constitution is ‘that the best test of truth is the power of the thought to get itself accepted in the competition of the market.’” United States v. Alvarez, 567 U.S. 709, 728 (2012) (quoting Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting)). To facilitate that competition, the Constitution staunchly protects the liberty of private actors to determine what speech is acceptable, largely free from government regulation of this marketplace. See Halleck, 139 S. Ct. at 1926 (“The Free Speech Clause of the First Amendment constrains governmental actors and protects private actors….”).

Importantly, one way private actors participate in the marketplace of ideas is through private ordering—by setting speech policies for their own private property, enforceable by common law remedies under contract and property law. See id. at 1930 (a “private entity may thus exercise editorial discretion over the speech and speakers in the forum”).

Protecting private ordering is particularly important with social media. While the challenged laws concern producers of social media content, producers are only a sliver of social media users. The vast majority of social media users are content consumers, and it is for their benefit that social media companies moderate content. Speech, even when lawful and otherwise protected by the First Amendment, can still be harmful, at least from the point of view of listeners. Social media companies must balance users’ demand for speech with the fact that not everyone wants to consume every possible type of speech.

The issue is how best to optimize the benefits of speech while minimizing negative speech externalities. Speech produced on social media platforms causes negative externalities when some consumers are exposed to speech they find offensive, disconcerting, or otherwise harmful. Those consumers may stop using the platform as a result. On the other hand, if limits on speech production are too extreme, speech producers and consumers may seek other speech platforms.

To optimize the value of their platforms, social media companies must consider how best to keep users—both producers and consumers of speech—engaged. Major social media platforms mainly generate revenue through advertisements. This means a loss in user engagement could reduce the platform’s value to advertisers and thus its advertising revenue. In particular, a loss of engagement by high-value users could mean less advertising revenue, which in turn diminishes incentives to invest in the platform. Optimizing a platform requires satisfying users who are valuable to advertisers.

Major social media platforms have developed moderation policies in response to market demand to protect their users from speech those users consider harmful. This editorial control is protected First Amendment activity.

On the other hand, the common carriage justifications Texas and Florida offer for their restrictions on social media platforms’ control over their own property do not save the States’ impermissible intervention into the marketplace of ideas. Two of the most prominent legal justifications for common carriage regulation—holding one’s property open to all comers and market power—do not apply to social media companies. Major social media companies require all users to accept terms of service, which limit what speech is allowed. And even assuming market power could justify common carriage, neither Florida nor Texas attempted to make such a finding, offering at best bare assertions.

The States’ intervention is more like treating social media platforms as company towns—an outdated approach that this Court should reject as inconsistent with First Amendment doctrine and utterly unsuitable to the Internet Age.

Argument

I. Social Media Platforms Are Best Positioned to Optimize Their Platforms To Serve Their Users’ Speech Preferences.

The First Amendment promotes a marketplace of ideas. To have a marketplace of any kind, there must be strong private property rights and enforceable contracts that enable entrepreneurs to discover the best ways to serve consumers. See generally Hernando de Soto, The Mystery of Capital (2000). As full participants in the marketplace of ideas, social media platforms must be free to exercise their own editorial policies and have choice over which ideas they allow on their platforms. Otherwise, there is no marketplace of ideas at all, but either a government-mandated free-for-all where voices struggle to be heard or an overly restricted forum where the government censors disfavored ideas.

The marketplace analogy is apt when considering First Amendment principles because, like virtually any other human activity, speech has both benefits and costs. As with other profit-driven market endeavors, it is ultimately the subjective, individual preferences of consumers that determine how those tradeoffs are managed. What is deemed offensive is obviously context- and listener-dependent, but the parties best suited to set and enforce appropriate speech rules are the property owners subject to the constraints of the marketplace.

When it comes to speech, an individual’s desire for an audience must be balanced with a prospective audience’s willingness to listen. Formal economic institutions acting in the marketplace must strike the proper balance between these desires and have an incentive to get it right, or they could lose consumers. Asking government to make categorical decisions for all of society is substituting centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors, including property owners who open their property to third-party speech. As the economist Thomas Sowell put it, “that different costs and benefits must be balanced does not in itself imply who must balance them—or even that there must be a single balance for all, or a unitary viewpoint (one ‘we’) from which the issue is categorically resolved.” Thomas Sowell, Knowledge and Decisions 240 (2d ed. 1996).

Rather than incremental decisions on how and under what terms individuals may relate to one another on a particular platform—which can evolve over time in response to changes in what individuals find acceptable—governments can only hand down categorical guidelines through precedential decisions: “you must allow a, b, and c speech” or “you must not allow x, y, and z speech.”

This freedom to experiment and evolve is vital in the social-media sphere, where norms about speech are in constant flux. Social media users often impose negative externalities on other users through their speech. Thus, social media companies must resolve social-cost problems among their users by balancing their speech interests.

In his famous work “The Problem of Social Cost,” the economist Ronald Coase argued that the traditional approach to regulating externalities was misguided because it overlooked the reciprocal nature of harms. Ronald H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1, 2 (1960). For example, the noise from a factory is a potential cost to the doctor next door who consequently cannot use his office to conduct certain testing, and simultaneously the doctor moving his office next door is a potential cost to the factory’s ability to use its equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the factory could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the factory to reduce the sound of its machines. But in the real world, where there are often significant transaction costs, who has the initial right matters because it is unlikely that the right will get to the highest valued use.
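
To make the bargaining logic concrete, consider a minimal sketch in Python using purely hypothetical dollar figures (the factory-and-doctor example above does not specify numbers; these are illustrative assumptions only). It shows the Coasean result: when transaction costs are negligible, the noise ends up where it is most valued regardless of who initially holds the right, but when transaction costs are high, the initial allocation controls the outcome.

```python
# Hypothetical illustration of Coase (1960); all dollar figures are invented.
FACTORY_PROFIT_FROM_NOISE = 100   # value to the factory of running its machines
DOCTOR_LOSS_FROM_NOISE = 150      # income the doctor loses if the machines run

def final_use(right_holder: str, transaction_cost: float) -> str:
    """Return how the resource ends up being used after any bargaining."""
    if right_holder == "doctor":
        # The doctor already enjoys quiet; the factory would pay at most 100,
        # less than the doctor's 150 loss, so no bargain is struck and quiet prevails.
        return "quiet"
    # The factory holds the right to make noise; the doctor will buy quiet only
    # if the gain from trade (150 - 100 = 50) exceeds the cost of striking the deal.
    surplus = DOCTOR_LOSS_FROM_NOISE - FACTORY_PROFIT_FROM_NOISE
    return "quiet" if surplus > transaction_cost else "noise"

for holder in ("doctor", "factory"):
    print(holder, "holds the right, low transaction costs ->", final_use(holder, 0))
    print(holder, "holds the right, high transaction costs ->", final_use(holder, 75))
```

With transaction costs of zero, both allocations end in quiet (the higher-valued use in this hypothetical); with transaction costs of 75, the outcome simply tracks whoever held the right at the start.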

Similarly, on social media, speech that some users find offensive or false may be inoffensive or even patently true to other users. Protecting one group from offensive speech necessarily imposes costs on the group that favors the same speech. There is a reciprocal nature to the harms of speech, much as with other forms of nuisance. Due to transaction costs, it is unlikely that users will be able to effectively bargain to a solution on speech harms. There is a significant difference, though. Unlike the situation of the factory owner and the doctor, social media users are all using the property of social media companies. And those companies are best positioned to—and must be allowed to—balance these varied interests in real-time to optimize their platform’s value in response to consumer demand.

Social media companies are what economists call “multi-sided” platforms. See generally David S. Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms (2016). They are for-profit businesses, and the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users will abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made.

As in any other community, “[i]nteractions on multi-sided platforms can involve behavior that some users find offensive.” David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L.J. 1201, 1215 (2012). As a result, “[p]eople may incur costs [from] unwanted exposure to hate speech, pornography, violent images, and other offensive content.” Id. And “[e]ven if they are not exposed to this content, they may dislike being part of a community in which such behavior takes place.” Id.

These cases challenge laws that cater to one set of social media users—producers of speech on social media platforms. But social media platforms must be at least as sensitive to their speech consumers. Indeed, the one-percent rule—“a vast majority of user-generated content in any specific community comes from the top 1% of active users”[3]—teaches that speech-consuming users may be even more important because they far outnumber producers. In turn, less intense users are usually the first to leave a platform, and their exit may cascade into total platform collapse. See, e.g., János Török & János Kertész, Cascading Collapse of Online Social Networks, 7 Sci. Rep., art. 16743 (2017).

Social media companies thus need to optimize the value of their platform by setting rules that keep users—mostly speech consumers—sufficiently engaged that there are advertisers who will pay to reach them. Even more, social media platforms must encourage engagement by the right users. To attract advertisers, platforms must ensure individuals likely to engage with advertisements remain active on the platform.[4] Platforms ensure this optimization by setting and enforcing community rules.
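
The engagement trade-off described above can be illustrated with a deliberately stylized model (every functional form and parameter below is an assumption made for illustration, not something drawn from the brief): stricter moderation retains speech consumers who dislike offensive content, but drives away some speech producers, and advertising revenue requires both content to consume and consumers to advertise to.

```python
# Toy model of the moderation trade-off; every number here is an assumption.
def engaged_consumers(strictness: float) -> float:
    # Consumers (the large majority who mostly read) stay as offensive content is removed.
    return 99.0 * strictness

def engaged_producers(strictness: float) -> float:
    # Producers (the small minority who post) drift away as moderation tightens.
    return 1.0 - strictness

def ad_revenue(strictness: float) -> float:
    # Revenue needs both sides: with no producers there is nothing to read,
    # and with no consumers there is no audience to sell to advertisers.
    consumers = engaged_consumers(strictness)
    producers = engaged_producers(strictness)
    content_availability = producers / (producers + 0.1)
    return consumers * content_availability

best = max((s / 100 for s in range(101)), key=ad_revenue)
print(f"revenue-maximizing strictness in this toy model: {best:.2f}")
```

The only point of the sketch is that the revenue-maximizing policy is interior—neither zero moderation nor total suppression—which is consistent with the market-driven balancing the brief describes.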

In addition, like users, advertisers themselves have preferences social media platforms must take into account. Advertisers may threaten to pull ads if they do not like the platform’s speech-governance decisions. For instance, after Elon Musk restored the accounts of Twitter users who had been banned by the company’s prior leadership, major advertisers left the platform. See Kate Conger, Tiffany Hsu, & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, N.Y. Times (Nov. 1, 2022); Ryan Mac & Tiffany Hsu, Twitter’s US Ad Sales Plunge 59% as Woes Continue, N.Y. Times (Jun. 5, 2023).

Thus, it is no surprise that in the cases of major social media companies, the platforms have set content-moderation standards that restrict many kinds of speech. See generally Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).

The bottom line is that the market process leaves the platforms themselves best positioned to make these incremental editorial decisions about their users’ preferences on speech, in response to the feedback loop between consumer, producer, and advertiser demand. It should go without saying that social media users do not necessarily want more opportunities to say and hear certain speech. Forcing social media companies to favor one set of users—a fraction of speech producers—by forbidding “viewpoint discrimination” favored by other users is unwarranted and unlawful interference in those companies’ editorial discretion. That interference threatens rather than promotes the marketplace of ideas.

II. The First Amendment Protects Private Ordering of Speech, Including Social Media Platform Moderation Policies.

The First Amendment protects the right of social media platforms to serve the speech preferences of their users through their moderation policies.

The “text and original meaning [of the First and Fourteenth Amendments], as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech.” Halleck, 139 S. Ct. at 1928. The First Amendment’s reach does not grow when private property owners open their property for speech. If such property owners were “subject to First Amendment constraints” and thus “lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum” they would “face the unappetizing choice of allowing all comers or closing the platform altogether.” Id. at 1930. That is, the First Amendment respects—indeed protects—private ordering.

So, while the First Amendment protects the right of individuals to speak (and receive speech) without fear of legal repercussions in most instances, it does not make speech consequence-free, nor does it mandate the carrying of all speech in private spaces.

“Bad” speech has, in fact, long been kept in check via informal means, or what one might call “private ordering.” In this sense, property rights and contract law have long played a crucial role in determining the speech rules of any given space.

For instance, a man would be well within his legal rights to eject a guest from his home for using racial epithets. As a property owner, he would not only have the right to ask that person to leave but could exercise his right to eject that person as a trespasser—if necessary, calling the police to assist him. Similarly, a patron could not go to a restaurant, yell at the top of her lungs about political issues, and expect the venue to abide it. A bar hosting an “open mic night” and thus opening itself up to speech is still within its rights to end a performance so offensive it could lead to a loss of patrons. Subject to narrow exceptions, property owners determine acceptable speech on their property and may enforce those rules by excluding those who refuse to comply.

A. Social media platforms are not state actors.

One exception to this strong distinction between state and private action is when a “private entity performs a traditional, exclusive public function.” See Halleck, 139 S. Ct. at 1928. In those cases, there may be a right to free speech that operates against a private actor. See Marsh v. Alabama, 326 U.S. 501 (1946).

Proceeding from Marsh, many litigants seize upon this Court’s recent analogizing of social media to the “modern public square.” Packingham v. N. Carolina, 137 S. Ct. 1730, 1737 (2017). They argue social media companies are like a company town or town square and so lack the discretion to restrict speech protected by the First Amendment. But cases since Marsh make clear that the state-actor exception is exceptionally narrow.

In Marsh, this Court found that a company town, while private, was a state actor for purposes of the First Amendment. At issue was whether the company town could prevent a Jehovah’s Witness from passing out literature on the town’s sidewalks. The Court noted that “[o]wnership does not always mean absolute dominion. The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.” Marsh, 326 U.S. at 506. The Court proceeded to balance private property rights with First Amendment rights, determining that, in company towns, the First Amendment’s protections should be in the “preferred position.” See id. at 509.

The Court later extended this holding to shopping centers, finding them the “functional equivalent” of the business district in Marsh, and thus concluding that a shopping center could not restrict peaceful picketing of a grocery store by a local food-workers union. Food Employees v. Logan Valley Plaza, 391 U.S. 308, 318, 325 (1968).

But the Court began retreating from both Logan Valley and Marsh just a few years later in Lloyd Corp. v. Tanner, 407 U.S. 551 (1972), which concerned hand-billing in a shopping mall. Noting the “economic anomaly” that was company towns, the Court said Marsh “simply held that where private interests were substituting for and performing the customary functions of government, First Amendment freedoms could not be denied where exercised in the customary manner on the town’s sidewalks and streets.” Id. at 562 (emphasis added).

Building on Lloyd, the Court went a step further in Hudgens v. NLRB, 424 U.S. 507 (1976), overruling Logan Valley and more severely cabining Marsh. Hudgens involved picketing on private property, and the Court concluded bluntly that, “under the present state of the law the constitutional guarantee of free expression has no part to play in a case such as this[.]” Id. at 521. Marsh is now a narrow exception, the Court explained, limited to situations where private property has taken on all attributes of a town. See id. at 516. And the Court has limited the public-function test to “the exercise by a private entity of powers traditionally exclusively reserved to the State.” See Jackson v. Metropolitan Edison Co., 419 U.S. 345, 352 (1974).

Today it is well-established that “the constitutional guarantee of free speech is a guarantee only against abridgment by government, federal or state.” Hudgens, 424 U.S. at 513. Purely private actors—even those who open their property to the public—are not subject to First-Amendment limits on how they use their property.

The Court reaffirmed that rule recently in Halleck, which considered whether a public-access channel operated by a cable provider was a state actor. Summarizing the case law, the Court said the test required more than just a finding that the government at some point exercised the same function or that the function serves the public good. Instead, the government must have “traditionally and exclusively performed the function.” Halleck, 139 S. Ct. at 1929 (emphasis in original).

The Court then found that merely operating as a public forum for speech is not a function traditionally and exclusively performed by the government. And because “[it] is not an activity that only governmental entities have traditionally performed,” a private actor providing a forum for speech retains “editorial discretion over the speech and speakers in the forum.” Id. at 1930.

Following this Court’s state-actor jurisprudence, federal courts have consistently found social media companies are not equivalent to company towns and thus not subject to First Amendment constraints. Unlike residents of a company town, who have little choice but to deal with the town as if it were the government itself, social media users can simply use alternative means to convey or receive speech. The Ninth Circuit, for instance, squarely rejected the argument that social media companies fulfill a traditional, exclusive public function. See Prager Univ. v. Google, LLC, 951 F.3d 991, 996-99 (9th Cir. 2020). Every federal court to consider whether social media companies are state actors under this theory has found the same. See, e.g., Freedom Watch, Inc. v. Google Inc., 816 F. App’x 497, 499 (D.C. Cir. 2020); Brock v. Zuckerberg, 2021 WL 2650070, at *3 (S.D.N.Y. Jun. 25, 2021); Zimmerman v. Facebook, Inc., 2020 WL 5877863, at *2 (N.D. Cal. Oct. 2, 2020); Ebeid v. Facebook, Inc., 2019 WL 2059662, at *6 (N.D. Cal. May 9, 2019); Green v. YouTube, LLC, 2019 WL 1428890, at *4 (D.N.H. Mar. 13, 2019); Nyabwa v. Facebook, 2018 WL 585467, at *1 (S.D. Tex. Jan. 26, 2018); Shulman v. Facebook.com, 2017 WL 5129885, at *4 (D.N.J. Nov. 6, 2017).

B. Social media companies have a right to editorial discretion.

Private actors have the right to editorial discretion that cannot generally be overcome by state action compelling the dissemination of speech. See Mia. Herald Publ’g Co. v. Tornillo, 418 U.S. 241 (1974); Turner Broad. Sys. v. FCC, 512 U.S. 622 (1994). This is particularly important for private actors whose business is disseminating speech, like newspapers, cable operators, and social media companies.

In Tornillo, the Court struck down a right-to-reply statute for political candidates because it “compel[s] editors or publishers to publish that which ‘reason tells them should not be published.’” 418 U.S. at 256. The Court established a general rule that the limits on media companies’ editorial discretion were not defined by government edict but by “the acceptance of a sufficient number of readers—and hence advertisers—to assure financial success; and, second, the journalistic integrity of its editors and publishers.” Id. at 255 (citing Columbia Broadcasting System, Inc. v. Democratic Nat’l Comm., 412 U.S. 94, 117 (1973)). In other words, the limits on how private entities exercise their editorial discretion come from the marketplace of ideas itself—the preferences of speech consumers, advertisers, and the property owners—not the government.

The size and influence of social media companies do not shrink Tornillo’s effect. No matter how large the editor or the forum, the government still may not coerce private entities to disseminate speech. See id. at 254 (“However much validity may be found in these arguments [about monopoly power], at each point the implementation of a remedy such as an enforceable right of access necessarily calls for some mechanism . . . . If it is governmental coercion, this at once brings about a confrontation with the express provisions of the First Amendment.”). Alleged market power is insufficient to justify compelling the dissemination of speech by social media companies.

Turner confirms that market power is irrelevant. There the Court began with “an initial premise: Cable programmers and cable operators engage in and transmit speech, and they are entitled to the protection of the speech and press provisions of the First Amendment.” 512 U.S. at 636. While the Court nonetheless applied intermediate scrutiny, it did so based on technological differences in transmission between newspapers and cable television, and the fact that the law was content-neutral. The level of scrutiny thus turns on “the special characteristics” of transmission, not “the economic characteristics” of the market. Id. at 640.

Returning to Tornillo, the Court reasoned that the law violated the First Amendment by intruding upon the company’s editorial discretion. See 418 U.S. at 258. Like newspapers, social media platforms are “more than a passive receptacle for news, comment, and advertising,” as their “choice of material,” their “decisions made as to the limitations on the size and content of the paper” and their “treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment.” Id. Indeed, that exercise of editorial control and judgment is central to a platform’s retention of speech consumers and attraction of advertisers targeting those users, and thus the platform’s continued survival. See supra, pp. ___.

Accordingly, federal courts rightly have called government actions into question when they violate the right of social media platforms to exercise editorial discretion. See NetChoice, LLC v. Bonta, 2023 WL 6135551, at *15 (N.D. Cal. Sept. 18, 2023); O’Handley v. Padilla, 579 F. Supp. 3d 1163, 1186-88 (N.D. Cal. Jan. 10, 2022); see also Murthy v. Missouri, No. 23-411, 2023 WL 6935337, at *2 (U.S. Oct. 20, 2023) (Alito, J., dissenting) (“The injunction applies only when the Government crosses the line and begins to coerce or control others’ [i.e. the social media companies’] exercise of their free-speech [i.e. editorial discretion] rights.”).

Thus, the Fifth Circuit’s claim in Paxton that “the Supreme Court’s cases do not carve out ‘editorial discretion’ as a special category of First-Amendment-protected expression,” 49 F.4th at 463, is demonstrably wrong. The Court has established that private actors have a right to exercise editorial discretion concerning speech on their property. See Halleck (using the phrase “editorial discretion” 11 times). Social media platforms have the same right.

C. Strict scrutiny applies.

As social media companies have a right to editorial discretion, the next question is the level of scrutiny the challenged statutes must satisfy. Strict scrutiny is proper, because social media platforms are much more like the newspapers in Tornillo than the cable companies in Turner.

In Turner, the Court found:

[The] physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber’s home . . . . [U]nlike speakers in other media, [cable operators] can thus silence the voice of competing speakers with a mere flick of the switch.

512 U.S. at 656. Social media platforms have no physical control of the connection to the home, and thus no practical ability to exclude competing voices or platforms. The internet architecture simply does not allow them to stop users from using other sites to find speech or speak. Strict scrutiny should apply to SB 7072 and HB 20.

Likewise, compelling social media companies to allow speech contrary to their terms of service is fundamentally different than mandating access for military recruiters in law schools or requiring shopping malls to allow the peaceful exercise of speech in areas held open to the public. Contra Paxton, 49 F.4th at 462-63. In those instances, there was no identification of the venue with the message. See Rumsfeld v. Forum for Acad. & Inst. Rights, Inc., 547 U.S. 47, 65 (2006); PruneYard Shopping Ctr. v. Robins, 447 U.S. 74, 86-88 (1980).

Here, the moderation decisions of social media companies do have implications for advertisers who do not want their brand associated with certain content. See Jonathan Vanian, Apple, Disney, other media companies pause advertising on X after Elon Musk boosted antisemitic tweet, CNBC (Nov. 17, 2023);[5] Caleb Ecarma, Twitter Can’t Seem to Buck Its Advertisers-Don’t-Want-to-Be-Seen-Next-to-Nazis Problem, Vanity Fair (Aug. 17, 2023);[6] Ryan Mac & Tiffany Hsu, Twitter’s US Ad Sales Plunge 59% as Woes Continue, N.Y. Times (Jun. 5, 2023).[7] Similarly, users will exit if they don’t enjoy the experience of the platform. See Steven Vaughan-Nichols, Twitter seeing ‘record user engagement’? The data tells a different story, ZDNet (Jun. 30, 2023).[8] Speech by social media companies disavowing what is said by some users of their platforms does not prevent advertisers and much of the public from identifying user speech with the platform.

Moreover, both the Florida and Texas laws discriminate based upon content, as a reviewing court would have to consider what speech is at issue to determine whether a social media company may moderate it. This makes the laws different from those at issue in Turner and offers an alternative reason they should be subject to strict scrutiny.

Section 230 of the Communications Act does not change this analysis. Contra Paxton, 49 F.4th at 465-66. Section 230 supplements the First Amendment’s protection of editorial discretion by granting “providers and users of an interactive computer service” immunity from (most) lawsuits for speech generated by other “information content providers” on their platforms. See 47 U.S.C. §230(c). The animating reason for Section 230 was to provide “protection for private blocking and screening” by preventing lawsuits over third-party content that was left up, see Section 230(c)(1), or over third-party content that was taken down, see Section 230(c)(2). See also Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L. J. 26, 39-41 (2022). Section 230 encourages social media companies to use their underlying First Amendment rights to editorial discretion; it provides no basis for restricting those rights.

*  *  *

The challenged Florida and Texas laws treat social media platforms essentially as company towns. But social media platforms simply do not exhibit the characteristics that would justify treating them as company towns whose moderation decisions are subject to court review for viewpoint discrimination. Instead, consistent with their economic function, they are private actors with their own rights to editorial discretion protected from government interference.

III. The Justifications for Common Carriage Regulation Do Not Apply to Social Media Companies.

The law and economics principles described above establish a general rule of the First Amendment that private property owners like social media companies have the right, responsibility, and need in the marketplace to moderate speech on their platforms. It makes no more sense to apply common carriage regulation to social media platforms than it does to treat them as company towns subject to the First Amendment.

Both Florida’s SB 7072 and Texas’s HB 20 are designed to restrict the ability of social media companies to exercise editorial discretion on their platforms. Each State justified its law by comparing social media companies to common carriers. Florida’s legislative findings included the statement that social media platforms should be “treated similarly to common carriers.” Act of May 24, 2021, ch. 2021-32, § 1(6), 2021 Fla. Laws 503, 505. Texas’ legislature found that “social media platforms function as common carriers” and “social media platforms with the largest number of users are common carriers by virtue of their market dominance.” Act of Sept. 9, 2021, ch. 3, § (3)–(4), 2021 Tex. Gen. Laws 3904, 3904.

But simply “[l]abeling” a social media platform “a common carrier . . . has no real First Amendment consequences.” Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part). And nothing about social media platforms justifies the label in any event: Social media platforms do not hold themselves out to the public as common carriers, and social media platforms lack monopoly power.

A. Social media platforms do not hold themselves out to all comers.

Both the Eleventh Circuit in Moody and the Fifth Circuit in Paxton recognized that one characteristic common carriers share is that they hold themselves out as serving all members of the public without individualized bargaining. See Moody, 34 F.4th 1196, 1220 (11th Cir. 2022); Paxton, 49 F.4th at 469.

Major social media companies, however, do not hold themselves out to the public indiscriminately either for users or the type of speech allowed. Unlike a telephone company or the postal service, both of which carry all private communications regardless of the underlying message, social media companies require all users to accept terms of service dealing specifically with speech in order to use the platform. They also maintain the discretion to enforce their rules as they see fit, both curating and editing speech before presenting it to the world. As the Eleventh Circuit put it in Moody, social media users “are not freely able to transmit messages ‘of their own design and choosing’ because platforms make—and have always made—‘individualized’ content- and viewpoint-based decisions about whether to publish particular messages or users.” Moody, 34 F.4th at 1220 (quoting FCC v. Midwest Video Corp., 440 U.S. 689, 701 (1979)).

Moreover, the very service that online platforms offer to users, and that users accept, is the moderation of speech in one form or another. Instagram allows users to curate feeds of specialized images, and Twitter does the same for specialized microblogs. Without this core moderation service, the services would be essentially useless to users. By contrast, common carriers do not have as a core part of their service the moderation of speech: any moderation of speech is incidental to operation of the service (e.g. removing unruly passengers).

Judge Srinivasan’s concurring opinion in United States Telecom Association v. FCC, 855 F.3d 381 (D.C. Cir. 2017) (denying rehearing en banc), is instructive on this point. The panel there had denied a petition for review of the FCC’s net neutrality order, which applied common carriage regulation to internet service providers. At the rehearing stage, then-Judge Kavanaugh feared the panel’s opinion would allow the government to “impose forced-carriage or equal-access obligations on YouTube and Twitter.” Id. at 433 (Kavanaugh, J., dissenting). Judge Srinivasan sought to allay that fear by explaining: Social media platforms “are not considered common carriers that hold themselves out as affording neutral, indiscriminate access to their platform without any editorial filtering[.]” Id. at 392 (Srinivasan, J., concurring) (emphasis added). Indeed, even the Internet service providers deemed common carriers there could escape such designation if they acted like social media platforms and exercised editorial discretion and advertised themselves as doing so. See id. at 389-90 (Srinivasan, J., concurring).

Unlike the telegraph, telephone, the postal service, or even email, major social media companies do not hold themselves out to the public as open to all legal speech—they expressly retain their editorial discretion. They have publicly available terms of service detailing what is and is not allowed on their platforms, which users must accept before creating profiles. While common carriers like airlines may be able to eject passengers based upon conduct even where there is a speech element, social media companies retain the right to restrict pure expression that is inconsistent with their community standards. These rules include limitations on otherwise legal speech and disclose that violators may be restricted from use, including expulsion. Br. for Pet’rs, https://netchoice.org/wp-content/uploads/2023/11/No.-22-555_NetChoice-and-CCIAs-Brief-Paxton.pdf, at 4-7.

The Fifth Circuit was wrong to minimize social media platforms’ editorial discretion by comparing their efforts to newspapers curating articles and columns. See Paxton, 49 F.4th at 459-60, 492 (noting that more than 99% of content is not reviewed by a human). Miami Herald did not establish a floor on how much a private actor must exercise editorial discretion in order to be protected by the First Amendment. Nor did it require that content be reviewed by a human rather than by algorithms in which a company invests to help it moderate content. The Fifth Circuit’s reasoning is essentially a “use it or lose it” theory of the First Amendment, which says that if social media companies do not aggressively use their editorial discretion rights, then they can lose them. “That is not how constitutional rights work,” however; the “‘use it or lose it’ theory is wholly foreign to the First Amendment.” U.S. Telecom, 855 F.3d at 429 (Kavanaugh, J., dissenting).

Since social media companies do not hold themselves out to the public as open to all speech, they are not common carriers that can somehow be required to carry third-party speech contrary to their terms of service.

B. Social media companies lack gatekeeper monopoly power.

Another reason offered for treating social media platforms like common carriers is that some social media companies are alleged to have “dominant market share,” see Biden v. Knight, 141 S. Ct. 1220, 1224 (2021) (Thomas, J., concurring), or in the words of Turner, “gatekeeper” or “bottleneck” market power. See Turner, 512 U.S. at 656.

As shown above, however, Turner is not really about market power but about the unique physical connection that gave cable providers the power to restrict access to content by the flick of a switch. In any case, there is no basis for concluding that social media companies are all monopolists.

A number of major social media companies covered by the Florida and Texas laws are not in any sense holders of substantial market power as measured by share of visits.[9] Neither are companies like reddit, LinkedIn, Tumblr, or Pinterest, which all have even fewer visits. Nonetheless, the challenged laws would apply to such entities based on monthly users at the national level or gross revenue. See Fla. Stat. §501.2041(1)(g)(4) (covered providers must have at least 100 million monthly users or $100 million in gross annual revenue); Tex. Bus. & Com. Code §§ 120.001(1), .002(b) (covered social media platforms have 50 million monthly active users). But raw revenue or user numbers do not show market power. It is, at the very least, market share (i.e., concentration) that could plausibly be instructive—and even then, market power entails a much more complex determination. See, e.g., Brian Albrecht, Competition Increases Concentration, Truth on the Market (Aug. 16, 2023), https://truthonthemarket.com/2023/08/16/competition-increases-concentration/. As economist Chad Syverson puts it, “concentration is worse than just a noisy barometer of market power. Instead, we cannot even generally know which way the barometer is oriented.” Chad Syverson, Macroeconomics and Market Power: Context, Implications, and Open Questions, 33 J. Econ. Persp. 23, 26 (2019).
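
To illustrate how the statutory coverage thresholds diverge from any measure of market power, the following sketch pairs the share-of-visits figures reported in footnote 9 with purely hypothetical monthly-user counts (the user figures are invented placeholders, not findings from the brief or the statutes). Coverage under the Florida and Texas laws turns on raw user counts, which say nothing about market share, let alone market power.

```python
# Share-of-visits percentages are those reported in footnote 9; the
# monthly-user counts are hypothetical placeholders used only to show that
# statutory coverage and market share are independent questions.
platforms = {
    # name:       (share_of_visits_pct, hypothetical_monthly_users)
    "Facebook":   (49.90, 200_000_000),
    "Instagram":  (15.85, 150_000_000),
    "X/Twitter":  (14.69,  90_000_000),
    "YouTube":    ( 2.29, 230_000_000),
}

FLA_USER_THRESHOLD = 100_000_000  # Fla. Stat. §501.2041(1)(g)(4) (or $100M revenue)
TEX_USER_THRESHOLD = 50_000_000   # Tex. Bus. & Com. Code §120.002(b)

for name, (share, users) in platforms.items():
    print(f"{name:10s} share={share:5.2f}%  "
          f"covered in FL={users >= FLA_USER_THRESHOLD}  "
          f"covered in TX={users >= TEX_USER_THRESHOLD}")
```

Under these assumed user counts, a platform with a 2.29% share of visits would still be covered, because the thresholds never ask about shares at all—the mismatch between coverage criteria and market power that the brief identifies.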

Second, there is no legislative finding of market power that would justify either law: just a bare assertion by the Texas legislature that “social media platforms with the largest number of users are common carriers by virtue of their market dominance.” HB 20 § 1(4). That “finding” by the Texas legislature fails to even define a relevant market, let alone establish market shares, or identify any indicia of market power of any players in that market. In then-Judge Kavanaugh’s words, both Florida and Texas failed to “even tr[y] to make a market power showing.” U.S. Telecom, 855 F.3d at 418 (Kavanaugh, J., dissenting); see also FTC v. Facebook, 560 F. Supp. 3d 1, 18 (D.D.C. Jun. 28, 2021) (“[T]he FTC’s bare assertions would be too conclusory to plausibly establish market power”).

The Texas legislature’s bare assertion is considerably weaker than the “unusually detailed statutory findings” the Court relied on in Turner, 512 U.S. at 646,[10] and is woefully insufficient to permit reliance on this justification for common-carrier-like treatment under the First Amendment.

Conclusion

The First Amendment protects the marketplace of ideas by protecting private ordering of speech rules. For the foregoing reasons, the Court should reverse the decision of the Fifth Circuit in Paxton and affirm the decision of the Eleventh Circuit in Moody.

[1] Amicus curiae affirms that no counsel for any party authored this brief in whole or in part, and that no entity or person other than amicus and its counsel made any monetary contribution toward the preparation and submission of this brief.

[2] Throughout this brief, the term “platform” as applied to the property of social media companies is used in the economic sense, as these companies are all what economists call multisided platforms. See David S. Evans, Multisided Platforms, Dynamic Competition, and the Assessment of Market Power for Internet-Based Firms, at 6 (Coase-Sandor Inst. for L. & Econ. Working Paper No. 753, Mar. 2016).

[3] Valtteri Vuorio & Zachary Horne, A Lurking Bias: Representativeness of Users Across Social Media and Its Implications for Sampling Bias In Cognitive Science, PsyArXiv Preprint at 1 (Feb. 2, 2023); see also, e.g., Alessia Antelmi, et al., Characterizing the Behavioral Evolution of Twitter Users and The Truth Behind the 90-9-1 Rule, in WWW ’19: Companion Proceedings of The 2019 World Wide Web Conference 1035 (May 2019).

[4] “For decades, the 18-to-34 age group has been considered especially valuable to advertisers. It’s the biggest cohort, overtaking the baby boomers in 2015, and 18 to 34s are thought to have money to burn on toys and clothes and products, rather than the more staid investments of middle age.” Ryan Kailath, Is 18 to 34 still the most coveted demographic?, Marketplace.org (Dec. 8, 2017), https://www.marketplace.org/2017/12/08/coveted-18-34-year-old-demographic.

[5] https://www.cnbc.com/2023/11/17/apple-has-paused-advertising-on-x-after-musk-promoted-antisemitic-tweet.html.

[6] https://www.vanityfair.com/news/2023/08/twitter-advertisers-dont-want-nazi-problem.

[7] https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html.

[8] https://www.zdnet.com/article/twitter-seeing-record-user-engagement-the-data-tells-a-different-story.

[9] See https://www.statista.com/statistics/265773/market-share-of-the-most-popular-social-media-websites-in-the-us (Facebook at 49.9%, Instagram at 15.85%, X/Twitter at 14.69%, YouTube at 2.29%); https://gs.statcounter.com/social-media-stats/all/united-states-of-america (similar numbers).

[10] See also Pub. L. 102-385 § 2(a)(1) (detailing price increases of cable television since rate deregulation, which is inferential evidence of market power); id. § 2(a)(2) (explaining that local franchising regulations and the cost of building out cable networks leave most consumers with only one available option).


ICLE Amicus Letter Supporting Review in Liapes v Facebook


RE: Amicus Letter Supporting Review in Liapes v. Facebook, Inc. (No. S282529), From a Decision by the Court of Appeal, First Appellate District, Division 3 (No. A164880)

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating antitrust law and policy. We thank the Court for considering this amicus letter in support of Petitioner Facebook’s petition for review. In it, we briefly highlight some of the crucial considerations that we believe should inform the intermediary-liability principles underlying the interpretation of the Unruh Act.

The Court of Appeal’s decision in Liapes v. Facebook has profound implications for online advertising and raises significant legal and practical concerns that could echo beyond the advertising industry. Targeted advertising is a crucial aspect of marketing, enabling advertisers to direct benign, pro-consumer messages to potential customers based on various considerations, including age and gender. The plaintiff’s argument, and the Court of Appeal’s acceptance of it, present a boundless theory of liability, suggesting that any targeted advertising based on protected characteristics is unlawful. This theory of liability, unfortunately, fails to take account of the nature of Facebook as an online intermediary, and the optimal limitations on liability that this requires when weighing the bad acts of third parties against Facebook’s attempt to provide neutral advertising tools to the benefit of millions of users.

The Unruh Act Is Not a Strict Liability Statute

While the Unruh Act prohibits intentional discrimination, California Civil Code §§ 51 and 51.5, California courts have consistently emphasized that the statute does not impose strict liability for all differential treatment. Rather, the Unruh Act allows for distinctions that serve legitimate nondiscriminatory purposes.

Courts have held that the Unruh Act does not bar practices “justified by ‘legitimate business interests.’” Koebke v. Bernardo Heights Country Club, 36 Cal. 4th 824, 851 (2005). The statute prohibits only discrimination that is “arbitrary, invidious or unreasonable.” Javorsky v. Western Athletic Clubs, Inc., 242 Cal. App. 4th 1386, 1395 (2015). Reasonable, nonarbitrary distinctions are therefore permissible. Differential treatment may qualify as reasonable and nonarbitrary if there is a public policy justification for the distinction. For example, discounts for senior citizens have been deemed nonarbitrary because they advance policies like assisting those with limited incomes. Sargoy v. Resolution Trust Corp., 8 Cal. App. 4th 1039, 1044 (1992). And it is “reasonable” discrimination on the basis of age to prevent minors from entering bars and adult bookstores. Koire v. Metro Car Wash, 40 Cal. 3d 24, 31 (1985).

Thus, the Unruh Act does not impose strict liability merely for practices that have a disparate impact. Harris v. Capital Growth Investors XIV, 52 Cal. 3d 1142, 1149 (1991). While the Unruh Act provides robust protections, it was not intended to forbid all differential treatment. Distinctions based on legitimate justifications remain permissible under the statute’s exceptions.

Firms like Meta operate services facilitating billions of interactions between users and advertisers. In this vast, complex environment, interpreting any ad targeting based on protected class membership as a per se Unruh Act violation would amount to imposing de facto strict liability on the online advertising industry. Setting aside the fact that the Unruh Act is not a strict liability statute, drawing the liability line at this point would have drastic practical consequences.

First, a de facto strict liability standard fails to account for the immense scale and complexity of services like Facebook. Given the number of third-party advertisers and users, as well as the speed and quantity of ad auctions, some incidental correlations between ad delivery and protected characteristics are likely inevitable even absent purposeful exclusions. The Court of Appeal’s opinion exposes both advertisers and platforms like Facebook to litigation based on such correlations, on the theory that the correlations may be “probative” of the intentional discrimination the Unruh Act forbids.

Second, advertisers may have many reasonable, nonarbitrary motivations for targeting their ads to certain demographic groups: for example, targeting older people with ads for certain kinds of medicines, or targeting members of religious groups with information about services in their religion. The Court of Appeal’s opinion will lead to extensive, costly litigation about potential justifications for such ad targeting, and in the meantime consumers will be deprived of useful ads.

Finally, if any segmentation of ad targets based on protected characteristics triggers Unruh Act violations, online advertising loses an essential tool for connecting people with relevant messages. This impedes commerce without any showing of invidious discrimination.

Although the Unruh Act provides important protections, overbroad interpretations amount to strict liability incompatible with the realities of a massive, complex ad system. Nuance is required to balance anti-discrimination aims with the actual welfare of users of services. In order to properly parse the line between reasonable and unreasonable discrimination when dealing with a neutral advertising service like Facebook and the alleged bad acts of third parties, it is necessary to incorporate the legal principles of intermediary liability into an analysis under the Unruh Act.

Principles of Intermediary Liability

In public policy and legal analysis, a central objective is to align individual incentives with social welfare, thereby deterring harmful behavior and encouraging optimal levels of precaution. See Guido Calabresi, The Costs of Accidents: A Legal and Economic Analysis 26 (1970). In the online context, this principle necessitates a careful examination of intermediary liability, especially for actors indirectly involved in online interactions.

Intermediary liability applies to third parties not directly causing harm but who can influence primary actors’ behavior to reduce harm cost-effectively. This is particularly relevant when direct deterrence is insufficient, and the intermediary can prevent harm more effectively or at a lower cost than direct enforcement. See Reinier Kraakman, Gatekeepers: The Anatomy of a Third-Party Enforcement Strategy, 2 J.L. Econ. & Org. 53, 56-57 (1986). However, not every intermediary in a potentially harmful transaction should be a target for such liability.

The focus is on locating the “least-cost avoider” – the party that can reduce the likelihood of harm at the lowest overall cost. See Harold Demsetz, When Does the Rule of Liability Matter?, 1 J. of Leg. Stud. 13, 28 (1972); see also Kraakman, supra, at 61 (“[t]he general problem remains one of selecting the mix of direct and collateral enforcement measures that minimizes the total costs of misconduct and enforcement”). This approach aims to balance the costs of enforcement against the social gains achieved as well as the losses that flow from the chilling effects of liability.

Imposing liability involves weighing the administrative costs and the potential lost benefits society might enjoy in the absence of liability. See Ronald Coase, The Problem of Social Cost, 3 J.L. & Econ. 1, 27 (1960) (“[W]hat has to be decided is whether the gain from preventing the harm is greater than the loss which would be suffered elsewhere as a result of stopping the action which produces the harm.”). The least-cost avoider is determined by considering whether the reduction in costs from locating liability on that party is outweighed by the losses caused by restricting other activities that flow from that liability. Calabresi, supra at 141.
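
A minimal sketch of the least-cost-avoider comparison just described (the figures below are hypothetical assumptions, not estimates from this letter): liability is best placed on the party for which the harm avoided, net of enforcement costs and of the collateral losses from chilled lawful activity, is greatest, and on no one if every option is negative.

```python
# Hypothetical figures only: net social benefit of placing liability on each candidate.
candidates = {
    # name:          (harm_avoided, enforcement_cost, collateral_loss_from_chilling)
    "advertiser":    (80, 10,  5),   # primary actor; targeted deterrence is cheap
    "ad platform":   (60, 40, 70),   # broad precautions chill lawful advertising
    "no liability":  ( 0,  0,  0),
}

def net_benefit(harm_avoided: int, enforcement_cost: int, collateral_loss: int) -> int:
    return harm_avoided - enforcement_cost - collateral_loss

scores = {name: net_benefit(*vals) for name, vals in candidates.items()}
for name, score in scores.items():
    print(f"{name:12s} net social benefit = {score:+d}")
print("preferred target under these assumptions:", max(scores, key=scores.get))
```

On these assumed numbers, the advertiser, as the primary actor, is the least-cost avoider; the point of the exercise is only that the calculus weighs collateral losses from restricting other activity, not just the harm avoided.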

The internet comprises various intermediaries like interactive computer services, internet service providers, content delivery networks, and advertising networks, which facilitate interactions between users, content platforms, and various service providers. See generally David S. Evans, Platform Economics: Essays on Multi-Sided Businesses (2011). Sometimes, intermediaries are the least-cost avoiders, especially when information costs are low enough for them to monitor and control end users effectively, or when it is difficult or impossible to identify bad actors using those platforms. But this is not always the case.

While liability can induce actors to take efficient precautions, intermediaries often cannot implement narrow precautions due to limited information or control. Facebook’s platform illustrates this challenge: Facebook has little to no access to information about the motivations or design of every one of the millions of ad campaigns from millions of individual advertisers on its platform at any given time. Thus, avoiding liability risk might entail broad actions like reducing all services, including those supporting beneficial activities. If the collateral costs in lost activity are significant, the benefits of imposing intermediary liability may not justify its implementation.

Here, overbroad liability could end up severely reducing the effectiveness of advertising in general. This could result in (1) less relevant advertisements for users of online services; (2) reduced value of advertising for businesses, harming in particular small businesses with limited advertising budgets; and (3) less revenue for online services that rely on advertising, pressuring them to increase revenue through other means like higher ad prices and subscriptions.

The individuals and businesses placing advertisements, not the intermediary ad platform, are the primary actors choosing whether and how to use tools for targeting. As we noted above, under the Unruh Act there are permissible uses of targeted advertising, even when focusing on protected classes. The focus in discouraging discrimination should be on primary actors.

It is not hard to locate parties misusing Facebook’s advertising tools in a way that potentially violates the Unruh Act when evidence of discrimination is presented. On the other hand, intermediaries like Facebook will often lack particularized ex ante knowledge of specific discriminatory transactions or direct control over advertisers’ targeting choices. The only avenue for Facebook to comply with broad liability under the Unruh Act is to altogether remove the ability of businesses to use any characteristic that might theoretically trigger Unruh Act liability, which would result in the harms described above. In situations like this, where the intermediary has little ability to effectively police certain misuses of otherwise benign, neutral tools that enhance social welfare, the case for imposing collateral liability is weakened.

Moreover, some statistically disproportionate ad-delivery outcomes may be inevitable given the vast scale of platforms like Facebook. Disparate effects should not automatically equate to impermissible discrimination absent purposeful exclusion. Creating neutral tools that some advertisers then use to break the law does not imply that Facebook (or any other advertising platform) intends for the law to be broken. No one would suggest that a hammer company intends for its product to be misused by customers who use it to bludgeon another human being. Nuance is required.

Broad Unruh Act liability risks unintended harms. Imposing a de facto strict liability regime that treats all ad targeting of protected classes as impermissible under the Unruh Act would drive services like Facebook to restrict lawful advertising tools for all users in order to mitigate liability risks. This impairs a large amount of indisputably legal commerce to deter allegedly illegal advertising by a subset of third parties. Moreover, the effects of such a decision would echo not only throughout the advertising ecosystem, but throughout the internet ecosystem in general where intermediaries might provide similar neutral tools that could run afoul of such a broad theory of liability.

Conclusion

The intermediary liability principles outlined above strongly counsel against the overbroad Unruh Act interpretation embraced by the Court of Appeal in the present matter.

The primary actors are the advertisers choosing whether and how to target ads, not Facebook. The Court of Appeal’s broad view wrongly focused on Facebook’s provision of neutral tools rather than advertisers’ specific uses of those tools.

Given the context-dependent nature of an Unruh Act analysis, the Court of Appeal failed to appropriately balance deterring allegedly illegal acts by advertisers against the potential loss of the value of targeted advertising altogether. The proper duties of intermediaries like Facebook should be limited to feasible actions, such as removing impermissibly exclusionary ads when notified. They should not include disabling essential advertising tools for all users. The Court of Appeal’s overbroad approach would ultimately harm consumer access to targeted advertising.

With the foregoing in mind, we respectfully urge this Court to grant the pending petition for review. Careful examination of the Court of Appeal’s ruling will reveal it strays beyond the Act’s purpose and ignores collateral harms from overdeterrence. Guidance is needed on balancing antidiscrimination aims with the liberty interests of platforms and their users. This case presents an ideal vehicle for this Court to provide that guidance.

Continue reading
Innovation & the New Economy

A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes

ICLE Issue Brief I.       Introduction Proposals to protect children and teens online are among the few issues in recent years to receive at least rhetorical bipartisan support at . . .

I.       Introduction

Proposals to protect children and teens online are among the few issues in recent years to receive at least rhetorical bipartisan support at both the national and state level. Citing findings of alleged psychological harm to teen users,[1] legislators from around the country have moved to pass bills that would require age verification and verifiable parental consent for teens to use social-media platforms.[2] But the primary question these proposals raise is whether such laws will lead to greater parental supervision and protection for teen users, or whether they will backfire and lead teens to become less likely to use the covered platforms altogether.

The answer, this issue brief proposes, is to focus on transaction costs.[3] Or more precisely, the answer can be found by examining how transaction costs operate under the Coase theorem.

The major U.S. Supreme Court cases that have considered laws to protect children by way of parental consent and age verification all cast significant doubt on the constitutionality of such regimes under the First Amendment. The reasoning such cases have employed appears to apply a Coasean transaction-cost/least-cost-avoider analysis, especially with respect to strict scrutiny’s least-restrictive-means test.

This has important implications for recent attempts to protect teens online by way of an imposed duty of care, mandatory age verification, and/or verifiable parental consent. First, because it means these solutions are likely unconstitutional. Second, because a least-cost-avoider analysis suggests that parents are best positioned to help teens assess the marginal costs and benefits of social media, by way of the power of the purse and through available technological means. Placing the full burden of externalities on social-media companies would reduce the options available to parents and teens, who could be excluded altogether if transaction costs are sufficiently large as to foreclose negotiation among the parties. This would mean denying teens the overwhelming benefits of social-media usage.

Part II of this brief will define transaction costs and summarize the Coase theorem, with an eye toward how these concepts can help to clarify potential spillover harms and benefits arising from teens’ social-media usage. Part III will examine three major Supreme Court cases that considered earlier parental-consent and age-verification regimes enacted to restrict minors’ access to allegedly harmful content, while arguing that one throughline in the jurisprudence has been the implicit application of least-cost-avoider analysis. Part IV will argue that, even in light of how the internet ecosystem has developed, the Coase theorem’s underlying logic continues to suggest that parents and teens working together are the least-cost avoiders of harmful internet content.

Part V will analyze proposed legislation and recently enacted bills, some of which already face challenges in the federal courts, and argue that the least-cost-avoider analysis embedded in Supreme Court precedent should continue to foreclose age-verification and parental-consent laws. Part VI concludes.

II.     The Coase Theorem and Teenage Use of Social-Media Platforms

A.    The Coase Theorem Briefly Stated and Defined

The Coase theorem has been described as “the bedrock principle of modern law and economics,”[4] and the essay that initially proposed it may be the most-cited law-review article ever published.[5] Drawn from Ronald Coase’s seminal work “The Problem of Social Cost”[6] and subsequent elaborations in the literature,[7] the theorem suggests that:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the lowest-cost avoider, while taking into consideration the total social costs of the institutional framework.

A few definitions are in order. An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action. When we say that such an externality is bilateral, it is to say that it takes two to tango: only when there is a conflict in the use or enjoyment of property is there an externality problem.

Transaction costs are the additional costs borne in the process of buying or selling, separate and apart from the price of the good or service itself—i.e., the costs of all actions involved in an economic transaction. Where transaction costs are present and sufficiently large, they may prevent otherwise beneficial agreements from being concluded. Institutional frameworks determine the rules of the game, including who should bear transaction costs. In order to maximize efficiency, the Coase theorem holds that the burden of avoiding negative externalities should be placed on the party or parties that can avoid them at the lowest cost.
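A stylized numerical example, using invented figures offered only for illustration, shows how these propositions fit together. Suppose an activity is worth $60 to the actor but imposes an expected harm of $100 on a neighbor, who could avoid the harm entirely at a cost of $30:

\[
  \underbrace{\$30}_{\text{neighbor's avoidance cost}} \;<\; \underbrace{\$60}_{\text{actor's value}} \;<\; \underbrace{\$100}_{\text{expected harm}}
\]

With zero transaction costs, the initial assignment of rights does not matter: whichever party holds the entitlement, the parties bargain to the efficient outcome, in which the activity continues and the neighbor takes the $30 precaution, with side payments settling who ultimately bears that cost. If bargaining itself costs more than the $30 of gains from trade, however, no bargain occurs and the initial allocation controls. Placing the burden on the neighbor, the least-cost avoider, then yields the efficient result without any bargain at all, while placing it on the actor forces either abandonment of the $60 activity or the full $100 harm.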

A related and interesting literature focuses on whether the common law is efficient, and the mechanisms by which that may come to be the case.[8] Todd J. Zywicki and Edward P. Stringham argue—contra the arguments of Judge Richard Posner—that the common law’s relative efficiency is a function of the legal process itself, rather than whether judges implicitly or explicitly adopt efficiency or wealth maximization as goals.[9] Zywicki & Stringham find both demand-side and supply-side factors that tend to promote efficiency in the common law, but note that the supply-side factors (e.g., competitive courts for litigants) have changed over time in ways that may result in diminished incentives for efficiency.[10] Their central argument is that the re-litigation of inefficient rules eventually leads to the adoption of more efficient ones.[11] Efficiency itself, they argue, is also best understood as the ability to coordinate plans, rather than as wealth maximization.[12]

In contrast to common law, there is a relative paucity of literature on whether constitutional law follows a pattern of efficiency. For example, one scholar notes that citations to Coase’s work in the corpus of constitutional-law scholarship are actually exceedingly rare.[13] This brief seeks to contribute to the law & economics literature by examining how the Supreme Court appears implicitly to have adopted one version of efficiency—the least-cost-avoider principle—in its First Amendment reviews of parental-consent and age-verification laws under the compelling-government-interest and least-restrictive-means tests.

B.     Applying the Coase Theorem to Teenage Social-Media Usage

The Coase theorem’s basic insights are useful in evaluating not only legal decisions, but also legislation. Here, this means considering issues related to children and teenagers’ online social-media usage. Social-media platforms, teenage users, and their parents are the parties at issue in this example. While social-media platforms create incredible value for their users,[14] they also arguably impose negative externalities on both teens and their parents.[15] The question here, as it was for Coase, is how to deal with those externalities.

The common-law framework of rights in this scenario is to allow minors to enter into enforceable agreements, except where they are void for public-policy reasons. As Adam Candeub points out:

Contract law is a creature of state law, and states require parental consent for minors entering all sorts of contracts for services or receiving privileges, including getting a tattoo, obtaining a driver’s license, using a tanning facility, purchasing insurance, and signing liability waivers. As a general rule, all contracts with minors are valid, but with certain exceptions they are voidable. And even though a minor can void most contracts he enters into, most jurisdictions have laws that hold a minor accountable for the benefits he received under the contract. Because children can make enforceable contracts for which parents could end up bearing responsibility, it is a reasonable regulation to require parental consent for such contracts. The few courts that have addressed the question of the enforceability of online contracts with minors have held the contracts enforceable on the receipt of the mildest benefit.[16]

Of course, many jurisdictions have passed laws requiring age-verification for various transactions prohibited to minors, such as laws for buying alcohol or tobacco,[17] obtaining driver’s licenses,[18] and buying lottery tickets or pornography.[19] Through the Children’s Online Privacy Protection Act and its regulations, the federal government also requires that online platforms obtain verifiable parental consent before they are permitted to collect certain personal information regarding children under age 13.[20]

The First Amendment, however, has been found to protect minors’ ability to receive speech, including through commercial transactions.[21] The question therefore arises: how should the law regard minors’ ability to access information on social-media platforms? In recent years, multiple jurisdictions have responded to this question by proposing or passing age-verification and parental-consent laws for teens’ social-media usage.[22]

As will be detailed below,[23] while the internet has contributed to significant reductions in transaction costs, they are still present. Thus, in order to maximize social-media platforms’ benefits while minimizing the negative externalities they impose, policymakers should endeavor to place the burden of avoiding the harms associated with teen use on the least-cost avoider. I argue that the least-cost avoiders are parents and teens working together to make marginal decisions about social-media use, including by exploiting relatively low-cost practical and technological tools to avoid harmful content. The thesis of this issue brief is that this finding is consistent with the implicit Coasean reasoning in the Supreme Court’s major First Amendment cases on parental consent and age verification.

III.   Major Supreme Court Cases on Parental Consent and Age Verification

Parental-consent and age-verification laws that seek to protect minors from harmful content are not new. The Supreme Court has had occasion to review several of them, while applying First Amendment scrutiny. An interesting aspect of this line of cases is that the Court appears implicitly to have used Coasean analysis in understanding who should bear the burden of avoiding harms associated with speech platforms.

Specifically, in each case, after an initial finding that the restrictions were content-based, the Court applied strict scrutiny. Thus, the burden was placed on the government to prove the relevant laws were narrowly tailored to a compelling government interest using the least-restrictive means. The Court’s transaction-cost analysis is implicit throughout its descriptions of the problem in each case, but the analysis below draws mainly on each case’s least-restrictive-means discussion, with a focus on the compelling-state-interest test in Part III.C. Parts III.A, III.B, and III.C deal with each of these cases in turn.

A.    United States v Playboy Entertainment Group

In United States v. Playboy Entertainment Group,[24] the Supreme Court reviewed § 505 of the Telecommunications Act of 1996, which required “cable television operators who provide channels ‘primarily dedicated to sexually-oriented programming’ either to ‘fully scramble or otherwise fully block’ those channels or to limit their transmission to hours when children are unlikely to be viewing, set by administrative regulation as the time between 10 p.m. and 6 a.m.”[25] Even prior to the regulations promulgated pursuant to the law, cable operators used technological means called “scrambling” to blur sexually explicit content for those viewers who didn’t explicitly subscribe to such content, but there were reported problems with “signal bleed” that allowed some audio and visual content to be obtained by nonsubscribers.[26] Following the regulation, cable operators responded by shifting the hours when such content would be aired—i.e., by making it unavailable for 16 hours a day. This prevented cable subscribers from viewing purchased content of their choosing at times they would prefer.[27]

The basic Coasean framework is present right from the description of the problems that the statute and regulations were trying to solve. As the Court put it:

Two essential points should be understood concerning the speech at issue here. First, we shall assume that many adults themselves would find the material highly offensive; and when we consider the further circumstance that the material comes unwanted into homes where children might see or hear it against parental wishes or consent, there are legitimate reasons for regulating it. Second, all parties bring the case to us on the premise that Playboy’s programming has First Amendment protection. As this case has been litigated, it is not alleged to be obscene; adults have a constitutional right to view it; the Government disclaims any interest in preventing children from seeing or hearing it with the consent of their parents; and Playboy has concomitant rights under the First Amendment to transmit it. These points are undisputed.[28]

In Coasean language, the parties at issue were the cable operators, content providers of sexually explicit programming, adult cable subscribers, and their children. Cable television provides tremendous value to its customers, including sexually explicit subscription content that is valued by those subscribers. There is, however, a negative externality to the extent that such programming may become available to children whose parents find it inappropriate. The Court noted that some parents may allow their children to receive such content, and the government disclaimed an interest in preventing such reception with parental consent. Given imperfect scrambling technology, this possible negative externality was clearly present. The question that arose was whether the transaction costs imposed by Section 505’s time-shifting requirements had the effect of restricting adults’ ability to make such viewing decisions for themselves and on behalf of their children.

After concluding that Section 505 was a content-based restriction, due to the targeting of specific adult content and specific programmers, the Court stated that when a content-based restriction is designed “to shield the sensibilities of listeners, the general rule is that the right of expression prevails, even where no less restrictive alternative exists. We are expected to protect our own sensibilities ‘simply by averting [our] eyes.’” [29]

This application of strict scrutiny does not change, the Court noted, because we are dealing in this instance with children or the issue of parental consent:

No one suggests the Government must be indifferent to unwanted, indecent speech that comes into the home without parental consent. The speech here, all agree, is protected speech; and the question is what standard the Government must meet in order to restrict it. As we consider a content-based regulation, the answer should be clear: The standard is strict scrutiny. This case involves speech alone; and even where speech is indecent and enters the home, the objective of shielding children does not suffice to support a blanket ban if the protection can be accomplished by a less restrictive alternative.[30]

Again, using our Coasean translator, we can read the opinion as saying that the least-cost way to avoid the negative externality of unwanted adult content is simply not to look at it, or, in the case of children, for parents to use the means available to them to prevent their children from viewing it.

In fact, that is exactly where the Court goes, by comparing, under the least-restrictive-means test, the targeted blocking mechanism made available in Section 504 of the statute to the requirements imposed by Section 505:

[T]argeted blocking enables the Government to support parental authority without affecting the First Amendment interests of speakers and willing listeners—listeners for whom, if the speech is unpopular or indecent, the privacy of their own homes may be the optimal place of receipt. Simply put, targeted blocking is less restrictive than banning, and the Government cannot ban speech if targeted blocking is a feasible and effective means of furthering its compelling interests. This is not to say that the absence of an effective blocking mechanism will in all cases suffice to support a law restricting the speech in question; but if a less restrictive means is available for the Government to achieve its goals, the Government must use it.[31]

Moreover, the Court found that the fact that parents largely eschewed the available low-cost means of avoiding the harm did not, by itself, allow the government to prove that the less restrictive alternative would be ineffective:

When a plausible, less restrictive alternative is offered to a content-based speech restriction, it is the Government’s obligation to prove that the alternative will be ineffective to achieve its goals. The Government has not met that burden here. In support of its position, the Government cites empirical evidence showing that § 504, as promulgated and implemented before trial, generated few requests for household-by-household blocking. Between March 1996 and May 1997, while the Government was enjoined from enforcing § 505, § 504 remained in operation. A survey of cable operators determined that fewer than 0.5% of cable subscribers requested full blocking during that time. Id., at 712. The uncomfortable fact is that § 504 was the sole blocking regulation in effect for over a year; and the public greeted it with a collective yawn.[32]

This is because there were, in fact, other market-based means available for parents to use to avoid the harm of unwanted adult programming,[33] and the government had not proved that Section 504 could be effective with more adequate notice.[34] The Court concluded its least-restrictive means analysis by saying:

Even upon the assumption that the Government has an interest in substituting itself for informed and empowered parents, its interest is not sufficiently compelling to justify this widespread restriction on speech. The Government’s argument stems from the idea that parents do not know their children are viewing the material on a scale or frequency to cause concern, or if so, that parents do not want to take affirmative steps to block it and their decisions are to be superseded. The assumptions have not been established; and in any event the assumptions apply only in a regime where the option of blocking has not been explained. The whole point of a publicized § 504 would be to advise parents that indecent material may be shown and to afford them an opportunity to block it at all times, even when they are not at home and even after 10 p.m. Time channeling does not offer this assistance. The regulatory alternative of a publicized § 504, which has the real possibility of promoting more open disclosure and the choice of an effective blocking system, would provide parents the information needed to engage in active supervision. The Government has not shown that this alternative, a regime of added communication and support, would be insufficient to secure its objective, or that any overriding harm justifies its intervention.[35]

In Coasean language, the government’s imposition of transaction costs through time-shifting channels is not the least-cost way to avoid the harm. Were the Section 504 blocking mechanism publicized, and market-based alternatives promoted (such as VCRs to record programming for later playback, or blue-screen technology that blocks scrambled video), adults would be able to act effectively as least-cost avoiders of harmful content, including on behalf of their children.

B.     Ashcroft v ACLU

In Ashcroft v. ACLU,[36] the Supreme Court reviewed a U.S. District Court’s preliminary injunction of the age-verification requirements imposed by the Child Online Protection Act (COPA), which was designed to “protect minors from exposure to sexually explicit materials on the Internet.”[37] The law created criminal penalties “of a $50,000 fine and six months in prison for the knowing posting, for ‘commercial purposes,’ of World Wide Web content that is ‘harmful to minors.’”[38] The law did, however, provide an escape hatch, through:

…an affirmative defense to those who employ specified means to prevent minors from gaining access to the prohibited materials on their Web site. A person may escape conviction under the statute by demonstrating that he

“has restricted access by minors to material that is harmful to minors—

“(A) by requiring use of a credit card, debit account, adult access code, or adult personal identification number;

“(B) by accepting a digital certificate that verifies age; or

“(C) by any other reasonable measures that are feasible under available technology.” § 231(c)(1).[39]

Here, the Coasean analysis of the problem is not stated as explicitly as in Playboy, but it is still apparent. The internet clearly provides substantial value to users, including those who want to view pornography. But there is a negative externality in internet pornography’s broad availability to minors for whom it would be inappropriate. Thus, to prevent these harms, COPA established a criminal regulatory scheme with an age-verification defense. The threat of criminal penalties, combined with the age-verification regime, imposed high transaction costs on online publishers who post content defined as harmful to minors. This leaves adults (including parents of children) and children themselves as the other relevant parties. Again, the question is: who is the least-cost avoider of the possible negative externality of minor access to pornography? The adult-content publisher or the parents, using technological and practical means?

The Court immediately went to an analysis of the least-restrictive-means test, defining the inquiry as follows:

In considering this question, a court assumes that certain protected speech may be regulated, and then asks what is the least restrictive alternative that can be used to achieve that goal. The purpose of the test is not to consider whether the challenged restriction has some effect in achieving Congress’ goal, regardless of the restriction it imposes. The purpose of the test is to ensure that speech is restricted no further than necessary to achieve the goal, for it is important to ensure that legitimate speech is not chilled or punished. For that reason, the test does not begin with the status quo of existing regulations, then ask whether the challenged restriction has some additional ability to achieve Congress’ legitimate interest. Any restriction on speech could be justified under that analysis. Instead, the court should ask whether the challenged regulation is the least restrictive means among available, effective alternatives.[40]

The Court then considered the available alternative to COPA’s age-verification regime: blocking and filtering software. It found that such tools are clearly less-restrictive means, focusing not only on the fact that the software lets parents prevent their children from accessing inappropriate material, but also on the fact that adults retain access to any content blocked by a filter simply by turning it off.[41] In fact, the Court noted that the evidence presented to the District Court suggested that filters, while imperfect, were probably even more effective than the age-verification regime.[42] Finally, the Court noted that, even if Congress could not require filtering software, it could encourage its use through parental education, by providing incentives to libraries and schools to adopt it, and by subsidizing development of the industry itself. Each of these, the Court reasoned, would be a clearly less-restrictive means of promoting COPA’s goals.[43]

In Coasean language, the Court found that parents using technological and practical means are the least-cost avoider of the harm of exposing children to unwanted adult content. Government promotion and support of those means were held up as clearly less-restrictive alternatives than imposing transaction costs on publishers of adult content.

C.    Brown v Entertainment Merchants Association

In Brown v. Entertainment Merchants Association,[44] the Court considered California Assembly Bill 1179, which prohibited the sale or rental of “violent video games” to minors.[45] The Court first disposed of the argument that the government could create a new category of speech that it considered unprotected, just because it is directed at children, stating:

The California Act is something else entirely. It does not adjust the boundaries of an existing category of unprotected speech to ensure that a definition designed for adults is not uncritically applied to children. California does not argue that it is empowered to prohibit selling offensively violent works to adults—and it is wise not to, since that is but a hair’s breadth from the argument rejected in Stevens. Instead, it wishes to create a wholly new category of content-based regulation that is permissible only for speech directed at children.

That is unprecedented and mistaken. “[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” Erznoznik v. Jacksonville, 422 U.S. 205, 212-213, 95 S.Ct. 2268, 45 L.Ed.2d 125 (1975) (citation omitted). No doubt a State possesses legitimate power to protect children from harm, Ginsberg, supra, at 640-641, 88 S.Ct. 1274; Prince v. Massachusetts, 321 U.S. 158, 165, 64 S.Ct. 438, 88 L.Ed. 645 (1944), but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Erznoznik, supra, at 213-214, 95 S.Ct. 2268.[46]

The Court rejected the notion that there is any “longstanding tradition” of restricting children’s access to depictions of violence, as demonstrated by copious examples of violent content in children’s books, high-school reading lists, motion pictures, radio dramas, comic books, television, and music lyrics. Moreover, to the extent there was a time when governments enforced such regulations, courts eventually overturned them.[47] The fact that video games are interactive did not matter either, the Court found, as all literature is potentially interactive, especially genres like choose-your-own-adventure stories.[48]

Thus, because the law was clearly content-based, the Court applied strict scrutiny. The Court was skeptical even of whether the government had a compelling state interest, finding the law to be both seriously over- and under-inclusive. The effects that the state attributed to covered video games, the Court noted, could equally result from exposure to violent cartoons not subject to the law’s provisions. Moreover, the law allowed a parent or guardian (or any adult) to buy violent video games for their children.[49]

The Court then gets to the law’s real justification, which it summarily rejected as inconsistent with the First Amendment:

California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.[50]

In Coasean language, the Court is saying that video games—even violent ones—are subjectively valued by those who play them, including minors. There may be negative externalities from playing such games, in that exposure to violence could be linked to psychological harm and their interactivity might intensify those effects, but these content and design features are still protected speech. Placing the transaction costs on parents and other adults to buy such games on behalf of minors, just in case some parents disapprove of their children playing them, is not a compelling state interest.

While the Court is only truly focused on whether there is a compelling state interest in California’s statutory scheme regulating violent video games, some of the language would equally apply to a least-restrictive means analysis:

But leaving that aside, California cannot show that the Act’s restrictions meet a substantial need of parents who wish to restrict their children’s access to violent video games but cannot do so. The video-game industry has in place a voluntary rating system designed to inform consumers about the content of games. The system, implemented by the Entertainment Software Rating Board (ESRB), assigns age-specific ratings to each video game submitted: EC (Early Childhood); E (Everyone); E10 + (Everyone 10 and older); T (Teens); M (17 and older); and AO (Adults Only—18 and older). App. 86. The Video Software Dealers Association encourages retailers to prominently display information about the ESRB system in their stores; to refrain from renting or selling adults-only games to minors; and to rent or sell “M” rated games to minors only with parental consent. Id., at 47. In 2009, the Federal Trade Commission (FTC) found that, as a result of this system, “the video game industry outpaces the movie and music industries” in “(1) restricting target-marketing of mature-rated products to children; (2) clearly and prominently disclosing rating information; and (3) restricting children’s access to mature-rated products at retail.” FTC, Report to Congress, Marketing Violent Entertainment to Children 30 (Dec.2009), online at http://www. ftc.gov/os/2009/12/P994511violent entertainment.pdf (as visited June 24, 2011, and available in Clerk of Court’s case file) (FTC Report). This system does much to ensure that minors cannot purchase seriously violent games on their own, and that parents who care about the matter can readily evaluate the games their children bring home. Filling the remaining modest gap in concerned parents’ control can hardly be a compelling state interest.

And finally, the Act’s purported aid to parental authority is vastly overinclusive. Not all of the children who are forbidden to purchase violent video games on their own have parents who care whether they purchase violent video games. While some of the legislation’s effect may indeed be in support of what some parents of the restricted children actually want, its entire effect is only in support of what the State thinks parents ought to want. This is not the narrow tailoring to “assisting parents” that restriction of First Amendment rights requires.[51]

In sum, the Court suggests that the law would not be narrowly tailored, because there are already market-based systems in place to help parents and minors make informed decisions about which video games to buy—most importantly from the rating system that judges appropriateness by age and offers warnings about violence. Government paternalism is simply insufficient to justify imposing new transaction costs on parents and minors who wish to buy even violent video games.

Interestingly, the concurrence of Justice Samuel Alito, joined by Chief Justice John Roberts, also contains some language that could be interpreted through a Coasean lens. The concurrence allows, in particular, for the possibility that harms from interactive violent video games may differ from other depictions of violence that society has allowed children to view, although it concludes that reasonable minds may differ.[52] In other words, the concurrence suggests that the negative externalities may be greater than the majority opinion would allow, but Justices Alito and Roberts nonetheless agreed that the law was not drafted in a constitutional manner that comports with the obscenity exception to the First Amendment.

In any event, the Court appears to apply an implicit Coasean framework when it rejects the imposition of transaction costs on parents and minors seeking access to protected speech—in this case, violent video games. Parents and minors remain the least-cost avoiders of the potential harms of violent video games.

IV.   Coase Theorem Applied to Age-Verification and Verifiable-Consent Laws

As outlined above, the issue is whether social media needs age-verification and parental-consent laws in order to address negative externalities to minor users. This section will analyze this question under the Coasean framework introduced in Part II.

The basic argument proceeds as follows:

  1. Transaction costs for age verification and verifiable consent from parents and/or teens are sufficiently large to prevent a bargain from being struck;
  2. The lowest-cost avoiders are parents and teens working together, using practical and technological means, including low-cost monitoring and filtering services, to make marginal decisions about minors’ social-media use; and
  3. Placing the transaction costs on social-media companies to obtain age verification and verifiable consent from parents and/or teens would actually reduce their ability to make marginal decisions about minors’ social-media use, as social-media companies will respond by investing more in excluding minors from access than in creating safe and vibrant spaces for interaction.

Part IV.A will detail the substantial transaction costs associated with obtaining age verification and verifiable parental consent. Part IV.B argues that parents and teens working together using practical and technological means are the lowest-cost avoiders of the harms of social-media use. Part IV.C will consider the counterfactual scenario of placing the transaction costs on social-media companies and argue that the result would be teens’ exclusion from social media, to their detriment, as well as the detriment of parents who would have made different choices.

A.    Transaction Costs, Age Verification, and Verifiable Parental Consent[53]

As Coase taught, in a world without transaction costs (or where such costs are sufficiently low), age-verification laws or mandates to obtain verifiable parental consent would not matter, because the parties would bargain to arrive at an efficient solution. Because there are high transaction costs that prevent such bargains from being easily struck, making the default that teens cannot join social media without verifiable parental consent could have the effect of excluding them from the great benefits of social media usage altogether.[54]
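A minimal sketch, with purely hypothetical numbers, illustrates the point. Suppose a teen and parent together value a social-media account at $10 per year, the platform expects $5 per year in advertising revenue from that account, and complying with an age-verification and verifiable-consent mandate would cost the platform $8 per account in verification fees, compliance overhead, and liability exposure:

\[
  \$8 \;>\; \$5 \qquad \text{even though} \qquad \$8 \;<\; \$10 + \$5.
\]

In a world without transaction costs, the family and the platform would strike a bargain (for instance, the parent absorbing part of the verification cost), because the joint value of access exceeds the cost of verifying. With real-world frictions, including the difficulty a free, ad-supported service faces in charging households for verification and families’ reluctance to hand over identity documents, that bargain never happens, and the platform’s rational response is to exclude or degrade service for teen accounts rather than verify them.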

There is considerable evidence that, even though the internet and digital technology have reduced transaction costs across a wide range of fronts,[55] transaction costs remain high when it comes to age verification and verifiable parental consent. A data point that supports this conclusion is the experience of social-media platforms under the Children’s Online Privacy Protection Act (COPPA).[56] In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[57] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[58]

The rule change and settlement increased the transaction costs imposed on social-media platforms by requiring verifiable parental consent. YouTube’s economically rational response was to restrict the content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The end result was less content created for children, with competitive effects to boot:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[59]

This is not the only finding regarding COPPA’s role in reducing the production of content for children. The president of the App Association, a global trade association for small and medium-sized technology companies, presented extensively at the Federal Trade Commission’s (FTC) 2019 COPPA Workshop.[60] The testimony of App Association President Morgan Reed detailed how the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children. It is worth highlighting Reed’s repeated use of the words “friction,” “restriction,” and “cost” to describe how the institutional environment of COPPA affects the behavior of social-media platforms, parents, and children. While noting that general-audience content is “unfettered, meaning that you don’t feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” COPPA-regulated apps and content are, Reed said, all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[61]

Reed’s use of the word “friction” is particularly enlightening. Mike Munger has often described transaction costs as frictions, explaining that, to consumers, all costs are transaction costs.[62] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents receive fewer, and lower-quality, children’s apps and content.

A similar example can be seen in the various battles between traditional media and social-media companies in Australia, Canada, and the EU, where laws have been passed that would require platforms to pay for linking to certain news content.[63] Because these laws raise transaction costs, social-media platforms have responded by restricting access to news links,[64] to the detriment of users and the news-media organizations themselves. In other words, much like with verifiable parental consent, the intent of these laws is thwarted by the underlying economics.

More evidence that imposing transaction costs on social-media companies can have the effect of diminishing the user experience can be found in the preliminary injunction issued by the U.S. District Court in Austin, Texas in Free Speech Coalition Inc. v. Colmenero.[65] The court cited evidence from the plaintiff’s complaint that included bills for “several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications.”[66] The court also noted that “[Texas law] H.B. 1181 imposes substantial liability for violations, including $10,000.00 per day for each violation, and up to $250,000.00 if a minor is shown to have viewed the adult content.”[67]
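These figures imply a floor on per-user compliance costs. The arithmetic below simply restates the numbers the court cited; comparing them to any particular site’s revenue would require data not in the record:

\[
  \frac{\$40{,}000}{100{,}000\ \text{verifications}} \;=\; \$0.40\ \text{per verification, at minimum.}
\]

A fixed cost of at least forty cents per verification, sitting alongside penalty exposure of $10,000 per day per violation and up to $250,000 if a minor is shown to have viewed the content, can easily exceed whatever value an ad-supported site derives from an individual visit.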

Moreover, the transaction costs in this example also include the subjective costs borne by those who actually go through with verifying their age to access pornography. As the court noted, “the law interferes with the Adult Video Companies’ ability to conduct business, and risks deterring adults from visiting the websites.”[68] The court issued a preliminary injunction against the law’s age-verification provision, finding that other means—such as content-filtering software—are clearly more effective than age verification in protecting children from unwanted content.[69]

In sum, transaction costs for age verification and verifiable parental consent are sufficiently high as to prevent an easy bargain from being struck. Thus, which party bears the burden of those costs will determine the outcome. The lessons from COPPA, news-media laws, and online-pornography age-verification laws are clear: if the transaction costs are imposed on the online platforms and apps, it will lead to access restrictions on the speech those platforms provide, almost all of which is protected speech. This is the type of collateral censorship that the First Amendment is designed to avoid.[70]

B.     Parents and Teens as the Least-Cost Avoiders of Negative Externalities

If transaction costs due to online age-verification and verifiable-parent-consent laws are substantial, the question becomes which party or parties should be subject to the burden of avoiding the harms arising from social-media usage.

It is possible, in theory, that social-media platforms are the best-positioned to monitor and control content posted to their platforms—for instance, when it comes to harms associated with anonymous or pseudonymous accounts imposing social costs on society.[71] In such cases, a duty of care that would allow for intermediary liability against social-media companies may make sense.[72]

On the other hand, when it comes to online age-verification and parental-consent laws, widely available practical and technological means appear to be the lowest-cost way to avoid the negative externalities associated with social-media usage. As NetChoice put it in their complaint against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[73]

In their complaint, NetChoice recognizes the subjective nature of negative externalities, stating:

Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[74]

They then expertly list all the ways that parents can take control and help their children avoid online harms, including with respect to the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[75] Parents can also choose to use tools from cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[76] They also point to wireless routers that allow parents to filter and monitor online content;[77] parental controls at the device level;[78] third-party filtering applications;[79] and numerous tools offered by NetChoice members that all allow for relatively low-cost monitoring and control by parents and even teen users acting on their own behalf.[80] Finally, they note that NetChoice members, in response to market demand,[81] expend significant resources curating content to ensure it is appropriate.[82]

The recent response from the Australian government to the proposed “Roadmap for Age Verification”[83] buttresses this analysis. The government pulled back from plans to “force adult websites to bring in age verification following concerns about privacy and the lack of maturity of the technology.”[84] In particular, the government noted that:

It is clear from the Roadmap that at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness and implementation issues. For age assurance to be effective, it must:

  • work reliably without circumvention;
  • be comprehensively implemented, including where pornography is hosted outside of Australia’s jurisdiction; and
  • balance privacy and security, without introducing risks to the personal information of adults who choose to access legal pornography.

Age assurance technologies cannot yet meet all these requirements. While industry is taking steps to further develop these technologies, the Roadmap finds that the age assurance market is, at this time, immature.

The Roadmap makes clear that a decision to mandate age assurance is not ready to be taken.[85]

As a better solution, the government offered “[m]ore support and resources for families,”[86] including promoting tools already available in the marketplace to help prevent children from accessing inappropriate content like pornography,[87] and promoting education for both parents and children on how to avoid online harms.[88]

In sum, this is all about transaction costs. The least-cost avoiders of the negative externalities associated with social-media usage are parents and teens themselves, working together to make marginal decisions about how to use these platforms through widely available practical and technological means.

C.    Teen Exclusion Online and Reduced Parental Involvement in Social-Media Usage Decisions

If the burden of avoiding negative externalities is placed on social-media platforms, the result could be considerable collateral censorship of protected speech. This is because of transaction costs, as explained above in Part IV.A. Thus, while one could argue that the externalities imposed by social-media platforms on teen users and their parents represent a market failure, this is not the end of the analysis. Transaction costs help to explain how the institutional environment we create sets the rules of the game that platforms, parents, and teens follow. If transaction costs are too high and are placed incorrectly on social-media platforms, parents’ and teens’ ability to control how they use social media will actually suffer.

As can be seen most prominently in the COPPA examples discussed above,[89] the burden of obtaining verifiable parental consent leads platforms to invest in excluding the protected class—in that case, children under age 13—rather than in creating a safe and vibrant community from which children could benefit. Thus, proposals like COPPA 2.0,[90] which would extend the need for verifiable consent to teens, could yield an equivalent result of greater exclusion of teens. State laws that would require age verification and verifiable parental consent for teens are likely to produce the same result. The irony, of course, is that parental-consent laws would actually reduce the choices available to those parents who see value in social media for their teenagers.

In sum, the economics of transaction costs explains why age-verification and verifiable-parental-consent laws will not satisfy their proponents’ stated objectives. As with minimum-wage laws[91] and rent control,[92] economics helps to explain the counterintuitive finding that well-intentioned laws can actually produce the exact opposite end result. Here, that means age-verification and verifiable-parental-consent laws lead to parents and teens being less able to make meaningful and marginal decisions about the costs and benefits of their own social-media usage.

V.     The Unconstitutionality of Social-Media Verification and Verifiable-Consent Laws

Bringing this all together, Part V will consider the constitutionality of the enacted and proposed laws on age verification and verifiable parental consent under the First Amendment. As several courts have already suggested, these laws will not survive First Amendment scrutiny.

The first question is whether these laws will be subject to strict scrutiny (because they are content-based) or instead to intermediate scrutiny as content-neutral regulations. There is a possibility that it will not matter, because a court could find—as one already has—that such laws burden more speech than necessary anyway. Part V.A will take up these questions.

The second set of questions is whether, assuming strict scrutiny applies, these enacted and proposed laws could survive the least-restrictive-means test. Part V.B will consider these questions and argue that, because parents and teens working together are the lowest-cost avoiders of the relevant harms, promoting the widely available practical and technological means they can use to avoid negative externalities is also the least-restrictive means of advancing the government’s interest in protecting minors from the harms of social media.

A.    Questions of Content Neutrality

The first important question is whether laws that attempt to protect minors from externalities associated with social-media usage are content-neutral. One argument that has been put forward is that they are simply content-neutral contract laws that shift the consent default to parents before teens can establish an ongoing contractual relationship with a social-media company by creating a profile.[93]

Before delving into whether that argument could work, it is worth considering laws that are clearly content-based to help tell the difference. For instance, the Texas law challenged in Free Speech Coalition v. Colmenero is clearly content-based, because “the regulation is based on whether content contains sexual material.”[94]

Similarly, laws like the Kids Online Safety Act (KOSA)[95] are content-based, in that they require covered platforms to take:

reasonable measures in its design or operation of products and services to prevent or mitigate the following:

  • Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.

  • Patterns of use that indicate or encourage addiction-like behaviors.

  • Physical violence, online bullying, and harassment of the minor.

  • Sexual exploitation and abuse.

  • Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.

  • Predatory, unfair, or deceptive marketing practices, or other financial harms.[96]

While parts 4-6 and actual physical violence all constitute either unprotected speech or conduct, decisions about how to present information under part 2 are arguably protected speech.[97] Even true threats like online bullying and harassment are subject to at least some First Amendment scrutiny, in that a restriction would require some type of mens rea to be constitutional.[98] Part 1 may be unconstitutionally vague as written.[99] Moreover, parts 1-3 are clearly content-based, in that it is necessary to consider the content presented, which will include at least some protected speech. The same analysis applies equally to the California Age-Appropriate Design Code,[100] which places an obligation on covered companies to identify and mitigate speech that is harmful or potentially harmful to users under 18 years old, and to prioritize speech that promotes such users’ well-being and best interests.[101]

In each of these cases, it would be difficult to argue that strict scrutiny ought not apply. On the other hand, some have argued that the Utah and Arkansas laws requiring age verification and verifiable parental consent are simply content-neutral regulations of contract formation, which can be considered independently of speech.[102] Arkansas has argued that Act 689’s age-verification requirements are “merely a content-neutral regulation on access to speech at particular ‘locations,’ so intermediate scrutiny should apply.”[103]

But even in NetChoice v. Griffin,[104] the U.S. District Court for the Western District of Arkansas, while skeptical that the law was content-neutral,[105] proceeded as though it were and still found, in granting a preliminary injunction, that the age-verification law “is likely to unduly burden adult and minor access to constitutionally protected speech.”[106] Similarly, the U.S. District Court for the Northern District of California found that all major provisions of California’s AADC were likely unconstitutional even under a lax commercial-speech standard.[107]

Nonetheless, there are strong arguments that these laws are content-based. As the court in Griffin put it:

Deciding whether Act 689 is content-based or content-neutral turns on the reasons the State gives for adopting the Act. First, the State argues that the more time a minor spends on social media, the more likely it is that the minor will suffer negative mental health outcomes, including depression and anxiety. Second, the State points out that adult sexual predators on social media seek out minors and victimize them in various ways. Therefore, to the State, a law limiting access to social media platforms based on the user’s age would be content-neutral and require only intermediate scrutiny.

On the other hand, the State points to certain speech-related content on social media that it maintains is harmful for children to view. Some of this content is not constitutionally protected speech, while other content, though potentially damaging or distressing, especially to younger minors, is likely protected nonetheless. Examples of this type of speech include depictions and discussions of violence or self-harming, information about dieting, so-called “bullying” speech, or speech targeting a speaker’s physical appearance, race or ethnicity, sexual orientation, or gender. If the State’s purpose is to restrict access to constitutionally protected speech based on the State’s belief that such speech is harmful to minors, then arguably Act 689 would be subject to strict scrutiny.

During the hearing, the State advocated for intermediate scrutiny and framed Act 689 as “a restriction on where minors can be,” emphasizing it was “not a speech restriction” but “a location restriction.” The State’s briefing analogized Act 689 to a restriction on minors entering a bar or a casino. But this analogy is weak. After all, minors have no constitutional right to consume alcohol, and the primary purpose of a bar is to serve alcohol. By contrast, the primary purpose of a social media platform is to engage in speech, and the State stipulated that social media platforms contain vast amounts of constitutionally protected speech for both adults and minors. Furthermore, Act 689 imposes much broader “location restrictions” than a bar does. The Court inquired of the State why minors should be barred from accessing entire social media platforms, even though only some of the content was potentially harmful to them, and the following colloquy ensued:

THE COURT: Well, to pick up on Mr. Allen’s analogy of the mall, I haven’t been to the Northwest Arkansas mall in a while, but it used to be that there was a restaurant inside the mall that had a bar. And so certainly minors could not go sit at the bar and order up a drink, but they could go to the Barnes & Noble bookstore or the clothing store or the athletic store. Again, borrowing Mr. Allen’s analogy, the gatekeeping that Act 689 imposes is at the front door of the mall, not the bar inside the mall; yes?

THE STATE: The state’s position is that the whole mall is a bar, if you want to continue to use the analogy.

THE COURT: The whole mall is a bar?

THE STATE: Correct.

Clearly, the state’s analogy is not persuasive.

NetChoice argues that Act 689 is not a content-neutral restriction on minors’ ability to access particular spaces online, and the fact that there are so many exemptions to the definitions of “social media company” and “social media platform” proves that the State is targeting certain companies based either on a platform’s content or its viewpoint. Indeed, Act 689’s definitions and exemptions do seem to indicate that the State has selected a few platforms for regulation while ignoring all the rest. The fact that the State fails to acknowledge this causes the Court to suspect that the regulation may not be content neutral. “If there is evidence that an impermissible purpose or justification underpins a facially content-neutral restriction, for instance, that restriction may be content-based.” City of Austin v. Reagan Nat’l Advertising of Austin, LLC, 142 S. Ct. 1464, 1475 (2022).[108]

Utah’s HB 311 and SB 152 would also seem to suffer from defects similar to those of KOSA and the AADC,[109] though they have not yet been litigated.

B.     Least-Restrictive Means Is to Promote Monitoring and Filtering

Assuming that courts do, in fact, find that these laws are content-based, strict scrutiny will apply, including the least-restrictive-means test.[110] In that case, the caselaw is clear: the least-restrictive means of achieving the government’s interest in protecting minors from social media’s speech and design problems is to promote low-cost monitoring and filtering.

First, however, it is worth asking whether the government could even establish a compelling state interest, as the Court discussed in Brown. The Court’s strong skepticism of government paternalism[111] applies equally to the verifiable-parental-consent laws enacted in Arkansas and Utah, as well as to COPPA 2.0. To use the late Justice Antonin Scalia’s language, the asserted interest in aiding parental authority likely fails to “meet a substantial need of parents who wish to restrict their children’s access”[112] to social media but cannot do so. Moreover, the “purported aid to parental authority” is likely to be found “vastly overinclusive” because “[n]ot all of the children who are forbidden” to join social media on “their own have parents who care whether” they do so.[113] While such laws “may indeed be in support of what some parents of the restricted children actually want, its entire effect is only in support of what the State thinks parents ought to want. This is not the narrow tailoring to ‘assisting parents’ that restriction of First Amendment rights requires.”[114]

As argued above, Ashcroft is strong precedent that promoting the practical and technological means available in the marketplace—those outlined by NetChoice in its Griffin brief—is less restrictive than age-verification laws as a way to protect minors from harms associated with social-media usage.[115] In fact, there is a strong argument that the market has since produced more, and more effective, tools than were available even then. This makes it exceedingly unlikely that the Supreme Court will change its mind.

While some have argued that Justice Clarence Thomas’ dissent in Brown offers a roadmap to reject these precedents,[116] there is little basis for that conclusion. First, Thomas’ dissent in Brown was not joined by any other member of the Supreme Court.[117] Second, Justice Thomas joined the majority in Ashcroft v. ACLU, suggesting he probably still sees age-verification laws as unconstitutional.[118] Even Justice Samuel Alito’s concurrence in Brown[119] expressed skepticism of Justice Thomas’ approach.[120] Third, it seems unlikely that the newer conservative justices, whose jurisprudence has been notably speech-protective,[121] would join Justice Thomas in his view of children’s right to receive speech. And far from being vague on whether a minor has a right to receive speech,[122] Justice Scalia’s majority opinion clearly stated that:

[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them… but that does not include a free-floating power to restrict the ideas to which children may be exposed.[123]

Precedent is strong against age-verification and parental-consent laws, and there is no reason to think the personnel changes on the Supreme Court would change the analysis.

In sum, straightforward applications of Brown and Ashcroft doom these new social-media laws.

VI.   Conclusion

This issue brief has two main conclusions, one of interest to the scholarship of applying law & economics to constitutional law, and the other to the policy and legal questions surrounding social-media age-verification and parental-consent laws:

  1. The Supreme Court appears implicitly to have adopted a Coasean framework in its approach to parental-consent and age-verification laws in the three major precedents of Playboy, Ashcroft, and Brown; and
  2. The application of this least-cost-avoider analysis, particularly within the least-restrictive-means test, is likely to doom these laws not only constitutionally, but also as a matter of economically grounded policy.

In conclusion, these online age-verification laws should be rejected. Why? The answer is transaction costs.

[1] See, e.g., Kirsten Weir, Social Media Brings Benefits and Risks to Teens. Here’s How Psychology Can Help Identify a Path Forward, 54 Monitor on Psychology 46 (Sep. 1, 2023), https://www.apa.org/monitor/2023/09/protecting-teens-on-social-media.

[2] See, e.g., Khara Boender, Jordan Rodell, & Alex Spyropoulos, The State of Affairs: What Happened in Tech Policy During 2023 State Legislative Sessions?, Project Disco (Jul. 25, 2023), https://www.project-disco.org/competition/the-state-of-affairs-state-tech-policy-in-2023 (noting laws passed and proposed addressing children’s online safety at the state level, including California’s Age-Appropriate Design Code and age-verification laws in both Arkansas and Utah, all of which will be considered below).

[3] With apologies to Mike Munger for borrowing the title of his excellent podcast, invoked several times in this issue brief; see The Answer Is Transaction Costs, https://podcasts.apple.com/us/podcast/the-answer-is-transaction-costs/id1687215430 (last accessed Sept. 28, 2023).

[4] Steven G. Medema, “Failure to Appear”: The Use of the Coase Theorem in Judicial Opinions, at 4, Dep’t of Econ. Duke Univ., Working Paper No. 2.1 (2019), available at https://hope.econ.duke.edu/sites/hope.econ.duke.edu/files/Medema%20workshop%20paper.pdf.

[5] Fred R. Shapiro & Michelle Pearse, The Most Cited Law Review Articles of All Time, 110 Mich. L. Rev. 1483, 1489 (2012).

[6] R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[7] See generally Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[8] Todd J. Zywicki & Edward Peter Stringham, Common Law and Economic Efficiency, Geo. Mason Univ. L. & Econ. Rsch., Working Paper No. 10-43 (2010), available at https://www.law.gmu.edu/assets/files/publications/working_papers/1043CommonLawandEconomicEfficiency.pdf.

[9] See id. at 4.

[10] See id. at 3.

[11] See id. at 10.

[12] See id. at 34.

[13] Medema, supra note 4, at 39.

[14] See, e.g., Matti Vuorre & Andrew K. Przybylski, Estimating the Association Between Facebook Adoption and Well-Being in 72 Countries, 10 Royal Soc’y Open Sci. 1 (2023), https://royalsocietypublishing.org/doi/epdf/10.1098/rsos.221451; Sabrina Cipoletta, Clelia Malighetti, Chiara Cenedese, & Andrea Spoto, How Can Adolescents Benefit from the Use of Social Networks? The iGeneration on Instagram, 17 Int. J. Environ. Res. Pub. Health 6952 (2020), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7579040.

[15] See Jean M. Twenge, Thomas E. Joiner, Megan L. Rogers, & Gabrielle N. Martin, Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time, 6 Clinical Psych. Sci. 3 (2018), available at https://courses.engr.illinois.edu/cs565/sp2018/Live1_Depression&ScreenTime.pdf.

[16] Adam Candeub, Age Verification for Social Media: A Constitutional and Reasonable Regulation, FedSoc Blog (Aug. 7, 2023), https://fedsoc.org/commentary/fedsoc-blog/age-verification-for-social-media-a-constitutional-and-reasonable-regulation.

[17] See Wikipedia, List of Alcohol Laws of the United States, https://en.wikipedia.org/wiki/List_of_alcohol_laws_of_the_United_States (last accessed Sep. 28, 2023); Wikipedia, U.S. History of Tobacco Minimum Purchase Age by State, https://en.wikipedia.org/wiki/U.S._history_of_tobacco_minimum_purchase_age_by_state (last accessed Sep. 28, 2023).

[18] See Wikipedia, Driver’s Licenses in the United States, https://en.wikipedia.org/wiki/Driver%27s_licenses_in_the_United_States (last accessed Sep. 28, 2023).

[19] See Wikipedia, Gambling Age, https://en.wikipedia.org/wiki/Gambling_age (last accessed Sep. 28, 2023) (table on minimum age for lottery tickets and casinos by state). As far as this author is aware, every state and territory requires identification demonstrating the buyer is at least 18 years old to make a retail purchase of a pornographic magazine or video.

[20] See 15 U.S.C. § 6501, et seq. (2018); 16 CFR Part 312.

[21] See infra Part III. See Brown v. Ent. Merch. Ass’n, 564 U.S. 786, 794 (2011) (“California does not argue that it is empowered to prohibit selling offensively violent works to adults—and it is wise not to, since that is but a hair’s breadth from the argument rejected in Stevens. Instead, it wishes to create a wholly new category of content-based regulation that is permissible only for speech directed at children. That is unprecedented and mistaken. ‘[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them…’ No doubt a State possesses legitimate power to protect children from harm… but that does not include a free-floating power to restrict the ideas to which children may be exposed. ‘Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.’”) (internal citations omitted).

[22] See infra Part V.

[23] See infra Part IV.

[24] 529 U.S. 803 (2000).

[25] Id. at 806.

[26] See id.

[27] See id. at 806-807.

[28] Id. at 811.

[29] Id. at 813 (internal citation omitted).

[30] Id. at 814.

[31] Id. at 815.

[32] Id. at 816.

[33] See id. at 821 (“[M]arket-based solutions such as programmable televisions, VCR’s, and mapping systems []which display a blue screen when tuned to a scrambled signal[] may eliminate signal bleed at the consumer end of the cable.”).

[34] See id. at 823 (“The Government also failed to prove § 504 with adequate notice would be an ineffective alternative to § 505.”).

[35] Id. at 825-826.

[36] 542 U.S. 656 (2004).

[37] Id. at 659.

[38] Id. at 661.

[39] Id. at 662.

[40] Id. at 666.

[41] See id. at 667 (“Filters are less restrictive than COPA. They impose selective restrictions on speech at the receiving end, not universal restrictions at the source. Under a filtering regime, adults without children may gain access to speech they have a right to see without having to identify themselves or provide their credit card information. Even adults with children may obtain access to the same speech on the same terms simply by turning off the filter on their home computers. Above all, promoting the use of filters does not condemn as criminal any category of speech, and so the potential chilling effect is eliminated, or at least much diminished. All of these things are true, moreover, regardless of how broadly or narrowly the definitions in COPA are construed.”).

[42] See id. at 667-669.

[43] See id. at 669-670.

[44] 564 U.S. 786 (2011).

[45] See id. at 787.

[46] Id. at 793-795.

[47] See id. at 794-797.

[48] See id. at 796-799.

[49] See id. at 799-802.

[50] Id. at 801.

[51] Id. at 801-804.

[52] See id. at 812 (Alito, J., concurring):

“There is a critical difference, however, between obscenity laws and laws regulating violence in entertainment. By the time of this Court’s landmark obscenity cases in the 1960’s, obscenity had long been prohibited, See Roth v. U.S., 354 U.S. 476, at 484-485, and this experience had helped to shape certain generally accepted norms concerning expression related to sex.

There is no similar history regarding expression related to violence. As the Court notes, classic literature contains descriptions of great violence, and even children’s stories sometimes depict very violent scenes.

Although our society does not generally regard all depictions of violence as suitable for children or adolescents, the prevalence of violent depictions in children’s literature and entertainment creates numerous opportunities for reasonable people to disagree about which depictions may excite “deviant” or “morbid” impulses. See Edwards & Berman, Regulating Violence on Television, 89 Nw. U.L.Rev. 1487, 1523 (1995) (observing that the Miller test would be difficult to apply to violent expression because “there is nothing even approaching a consensus on low-value violence”).

Finally, the difficulty of ascertaining the community standards incorporated into the California law is compounded by the legislature’s decision to lump all minors together. The California law draws no distinction between young children and adolescents who are nearing the age of majority.”

See also id. at 819 (Alito, J., concurring) (“If the technological characteristics of the sophisticated games that are likely to be available in the near future are combined with the characteristics of the most violent games already marketed, the result will be games that allow troubled teens to experience in an extraordinarily personal and vivid way what it would be like to carry out unspeakable acts of violence.”).

[53] The following sections are adapted from Ben Sperry, Right to Anonymous Speech, Part 3: Anonymous Speech and Age-Verification Laws, Truth on the Market (Sep. 11, 2023), https://truthonthemarket.com/2023/09/11/right-to-anonymous-speech-part-3-anonymous-speech-and-age-verification-laws.

[54] See Ben Sperry, Online Safety Bills Will Mean Kids Are No Longer Seen or Heard Online, The Hill (May 12, 2023), https://thehill.com/opinion/congress-blog/4002535-online-safety-bills-will-mean-kids-are-no-longer-seen-or-heard-online; Ben Sperry, Bills Aimed at ‘Protecting’ Kids Online Throw the Baby out with the Bathwater, The Hill (Jul. 26, 2023), https://thehill.com/opinion/congress-blog/4121324-bills-aimed-at-protecting-kids-online-throw-the-baby-out-with-the-bathwater; Vuorre & Przybylski, supra note 14; Mesfin A. Bekalu, Rachel F. McCloud, & K. Viswanath, Association of Social Media Use With Social Well-Being, Positive Mental Health, and Self-Rated Health: Disentangling Routine Use From Emotional Connection to Use, 42 Sage J. 69S, 69S-80S (2019), https://journals.sagepub.com/doi/full/10.1177/1090198119863768.

[55] See generally Michael Munger, Tomorrow 3.0: Transaction Costs and the Sharing Economy, Cambridge University Press (Mar. 22, 2018).

[56] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.

[57] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[58] Id. at 6-7 (emphasis added).

[59] Id. at 1.

[60] FTC, supra note 56.

[61] Id. at 6 (emphasis added).

[62] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.

[63] See Katie Robertson, Meta Begins Blocking News in Canada, N.Y. Times (Aug. 2, 2023), https://www.nytimes.com/2023/08/02/business/media/meta-news-in-canada.html; Mark Collom, Australia Made a Deal to Keep News on Facebook. Why Couldn’t Canada?, CBC News (Aug. 3, 2023), https://www.cbc.ca/news/world/meta-australia-google-news-canada-1.6925726.

[64] See id.

[65] Free Speech Coal. Inc. v. Colmenero, No. 1:23-CV-917-DAE, 2023 U.S. Dist. LEXIS 154065 (W.D. Tex. 2023), available at https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172751222/gov.uscourts.txwd.1172751222.36.0.pdf.

[66] Id. at 10.

[67] Id.

[68] Id.

[69] Id. at 44.

[70] Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Comput. & Tech. L.J. 26 (2022), https://laweconcenter.org/resources/who-moderates-the-moderators-a-law-economics-approach-to-holding-online-platforms-accountable-without-destroying-the-internet; Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-first-amendment-and-online-intermediary-liability.

[71] See Manne, Stout, & Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, supra note 70; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach; Manne, Sperry, & Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, supra note 70.

[72] See Manne, Sperry, & Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, supra note 70, at 28 (“To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms, provided it can be done so at sufficiently low cost.”); Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, supra note 71.

[73] See NetChoice Complaint, NetChoice LLC v. Griffin, No. 5:23-CV-05105, available at 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.

[74] Id. at para. 13.

[75] See id. at para. 14

[76] See id.

[77] See id. at para. 15.

[78] See id. at para. 16.

[79] See id.

[80] See id. at paras. 17, 19-21.

[81] See Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 12, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.

[82] See NetChoice Complaint, supra note 73 at para. 18.

[83] Government Response to the Roadmap for Age Verification, Australian Gov’t Dep’t of Infrastructure, Transp., Reg’l Dev., Commc’ns and the Arts (Aug. 2023), available at https://www.infrastructure.gov.au/sites/default/files/documents/government-response-to-the-roadmap-for-age-verification-august2023.pdf.

[84] See Josh Taylor, Australia Will Not Force Adult Websites to Bring in Age Verification Due To Privacy And Security Concerns, The Guardian (Aug. 30, 2023), https://www.theguardian.com/australia-news/2023/aug/31/roadmap-for-age-verification-online-pornographic-material-adult-websites-australia-law.

[85] See NetChoice Complaint, supra note 73 at 2.

[86] Id. at 6.

[87] See id.

[88] See id. at 6-8.

[89] Supra Part IV.A.

[90] See Children and Teens’ Online Privacy Protection Act, S. 1418, 118th Cong. (2023), as amended Jul. 27, 2023, available at https://www.congress.gov/bill/118th-congress/senate-bill/1418/text (last accessed Oct. 2, 2023). Other similar bills have been proposed as well. See Protecting Kids on Social Media Act, S. 1291, 118th Cong. (2023); Making Age-Verification Technology Uniform, Robust, and Effective Act, S. 419, 118th Cong. (2023); Social Media Child Protection Act, H.R. 821, 118th Cong. (2023).

[91] See David Neumark & Peter Shirley, Myth or Measurement: What Does the New Minimum Wage Research Say About Minimum Wages and Job Loss in the United States? (Nat’l Bur. Econ. Res. Working Paper 28388, Mar. 2022), available at https://www.nber.org/papers/w28388 (concluding that “(i) there is a clear preponderance of negative estimates in the literature; (ii) this evidence is stronger for teens and young adults as well as the less-educated; (iii) the evidence from studies of directly-affected workers points even more strongly to negative employment effects; and (iv) the evidence from studies of low-wage industries is less one-sided.”).

[92] See Lisa Sturtevant, The Impacts of Rent Control: A Research Review and Synthesis, at 6-7, Nat’l Multifamily Hous. Council Res. Found. (May 2018), available at https://www.nmhc.org/globalassets/knowledge-library/rent-control-literature-review-final2.pdf (“1. Rent control and rent stabilization policies do a poor job at targeting benefits. While some low-income families do benefit from rent control, so, too, do higher-income households. There are more efficient and effective ways to provide assistance to lower-income individuals and families who have trouble finding housing they can afford. 2. Residents of rent-controlled units move less often than do residents of uncontrolled housing units, which can mean that rent control causes renters to continue to live in units that are too small, too large or not in the right locations to best meet their housing needs. 3. Rent-controlled buildings potentially can suffer from deterioration or lack of investment, but the risk is minimized when there are effective local requirements and/or incentives for building maintenance and improvements. 4. Rent control and rent stabilization laws lead to a reduction in the available supply of rental housing in a community, particularly through the conversion to ownership of controlled buildings. 5. Rent control policies can hold rents of controlled units at lower levels but not under all circumstances. 6. Rent control policies generally lead to higher rents in the uncontrolled market, with rents sometimes substantially higher than would be expected without rent control. 7. There are significant fiscal costs associated with implementing a rent control program.”).

[93] See Candeub, supra note 16.

[94] Colmenero, supra note 65, at 22.

[95] See Kids Online Safety Act, S. 1409, 118th Cong. (2023), as amended and posted by the Senate Committee on Commerce, Science, and Transportation on July 27, 2023, available at https://www.congress.gov/bill/118th-congress/senate-bill/1409/text#toc-id6fefcf1d-a1ae-4949-a826-23c1e1b1ef26 (last accessed Oct. 2, 2023).

[96] See id. at Section 3.

[97] Cf. Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1930-31 (2019):

[M]erely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints…

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.

[98] See Counterman v. Colorado, 600 U.S. 66 (2023); Ben Sperry (@RBenSperry), Twitter (June 28, 2023, 4:46 PM), https://twitter.com/RBenSperry/status/1674157227387547648.

[99] Cf. Høeg v. Newsom, 2023 WL 414258 (E.D. Cal. Jan. 25, 2023); Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, supra note 70.

[100] California Age-Appropriate Design Code Act, AB 2273 (2022), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB2273.

[101] See id. at § 1798.99.32(d)(1), (2), (4).

[102] See Candeub, supra note 16.

[103] NetChoice LLC. v. Griffin, Case No. 5:23-CV-05105 at 25 (Aug. 31, 2023), slip op., available at https://netchoice.org/wp-content/uploads/2023/08/GRIFFIN-NETCHOICE-GRANTED.pdf.

[104] Id.

[105] Id. at 38 (“Having considered both sides’ positions on the level of constitutional scrutiny to be applied, the Court tends to agree with NetChoice that the restrictions in Act 689 are subject to strict scrutiny. However, the Court will not reach that conclusion definitively at this early stage in the proceedings and instead will apply intermediate scrutiny, as the State suggests.”).

[106] Id. at 48 (“In sum, NetChoice is likely to succeed on the merits of the First Amendment claim it raises on behalf of Arkansas users of member platforms. The State’s solution to the very real problems associated with minors’ time spent online and access to harmful content on social media is not narrowly tailored. Act 689 is likely to unduly burden adult and minor access to constitutionally protected speech. If the legislature’s goal in passing Act 689 was to protect minors from materials or interactions that could harm them online, there is no compelling evidence that the Act will be effective in achieving those goals.”).

[107] See NetChoice v. Bonta, Case No. 22-cv-08861-BLF (N.D. Cal. Sept. 18, 2023), slip op., available at https://netchoice.org/wp-content/uploads/2023/09/NETCHOICE-v-BONTA-PRELIMINARY-INJUNCTION-GRANTED.pdf; Ben Sperry, What Does NetChoice v. Bonta Mean for KOSA and Other Attempts to Protect Children Online?, Truth on the Market (Sep. 29, 2023), https://truthonthemarket.com/2023/09/29/what-does-netchoice-v-bonta-mean-for-kosa-and-other-attempts-to-protect-children-online.

[108] Id. at 36-38.

[109] See Carl Szabo, NetChoice Sends Veto Request to Utah Gov. Spencer Cox on HB 311 and SB 152, NetChoice (Mar. 3, 2023),  https://netchoice.org/netchoice-sends-veto-request-to-utah-gov-spencer-cox-on-hb-311-and-sb-153.

[110] See, e.g., Sable Commc’ns v. FCC, 492 U.S. 115, 126 (1989) (“The Government may, however, regulate the content of constitutionally protected speech in order to promote a compelling interest if it chooses the least restrictive means to further the articulated interest.”).

[111] Brown, 564 U.S. at 801 (“California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.”).

[112] Brown, 564 U.S. at 801.

[113] Id. at 803.

[114] Id.

[115] See supra IV.B.

[116] See Clare Morrell, Adam Candeub, & Michael Toscano, No, Big Tech Doesn’t Have A Right To Speak To Kids Without Their Parent’s Consent, The Federalist (Sept. 21, 2023), https://thefederalist.com/2023/09/21/no-big-tech-doesnt-have-a-right-to-speak-to-kids-without-their-parents-consent (noting “Justice Clarence Thomas wrote in his dissent in the Brown case that “the ‘freedom of speech,’ as originally understood, does not include a right to speak to minors (or a right of minors to access speech) without going through the minors’ parents or guardians.”).

[117] Brown, 564 U.S. at 821.

[118] Id. at 822.

[119] Id. at 805.

[120] Id. at 813.

[121] See, e.g., Ben Sperry, There’s Nothing ‘Conservative’ About Trump’s Views on Free Speech and the Regulation of Social Media, Truth on the Market (Jul. 12, 2019), https://truthonthemarket.com/2019/07/12/theres-nothing-conservative-about-trumps-views-on-free-speech (noting Kavanaugh’s majority opinion in Halleck on compelled speech included all the conservative justices; at the time, he and Gorsuch were relatively new Trump appointees). Justice Amy Coney Barrett also joined the majority opinion in 303 Creative LLC v. Elenis, 600 U.S. 570 (2023), written by Gorsuch and joined by all the conservatives, which found public-accommodations laws are subject to strict scrutiny if they implicate expressive activity.

[122] Clare Morell (@ClareMorellEPPC), Twitter (Sept. 7, 2023, 8:27 PM), https://twitter.com/ClareMorellEPPC/status/1699942446711357731.

[123] Brown, 564 U.S. at 794.

Continue reading
Innovation & the New Economy

What’s In a Name?: Common Carriage, Social Media, and the First Amendment

Scholarship Abstract Courts and legislatures have suggested that classifying social media as common carriers would make restrictions on their right to exclude users more constitutionally permissible . . .

Abstract

Courts and legislatures have suggested that classifying social media as common carriers would make restrictions on their right to exclude users more constitutionally permissible under the First Amendment. A review of the relevant statutory definitions reveals that the statutes provide no support for classifying social media as common carriers. Moreover, the fact that a legislature may apply a label to a particular actor plays no significant role in the constitutional analysis. A further review of the elements of the common law definition of common carrier reveals that four of the purported criteria (whether the industry is affected with a public interest, whether the social media companies possess monopoly power, whether they are involved in the transportation and communication industries, and whether social media companies received compensating benefits) do not apply to social media and do not affect the application of the First Amendment. The only legitimate common law basis (whether an actor holds itself out as serving all members of the public without engaging in individualized bargaining) would again seem inapplicable to social media and have little bearing on the First Amendment. The weakness of these arguments suggests that advocates for limiting social media’s freedom to decide which voices to carry are attempting to gain some vague benefit from associating their efforts with common carriage’s supposed historical pedigree to avoid having to undertake the case-specific analysis demanded by the First Amendment’s established principles.

Continue reading
Innovation & the New Economy

What Does NetChoice v. Bonta Mean for KOSA and Other Attempts to Protect Children Online?

TOTM With yet another win for NetChoice in the U.S. District Court for the Northern District of California—this time a preliminary injunction granted against California’s Age Appropriate Design Code (AADC)—it is . . .

With yet another win for NetChoice in the U.S. District Court for the Northern District of California—this time a preliminary injunction granted against California’s Age Appropriate Design Code (AADC)—it is worth asking what this means for the federally proposed Kids Online Safety Act (KOSA) and other laws of similar import that have been considered in a few states. I also thought it was worthwhile to contrast them with the duty-of-care proposal we at the International Center for Law & Economics have put forward, in terms of how best to protect children from harms associated with social media and other online platforms.

In this post, I will first consider the Bonta case, its analysis, and what it means going forward for KOSA. Next, I will explain how our duty-of-care proposal differs from KOSA and the AADC, and why it would, in select circumstances, open online platforms to intermediary liability through products-liability suits where they are best placed to monitor and control harms to minors. I will also outline a framework for considering how the First Amendment and the threat of collateral censorship interact with such suits.

Read the full piece here.

Continue reading
Innovation & the New Economy

The Marketplace of Ideas: Government Failure Is Worse Than Market Failure When It Comes to Social-Media Misinformation

TOTM Today marks the release of a white paper I have been working on for a long time, titled “Knowledge and Decisions in the Information Age: . . .

Today marks the release of a white paper I have been working on for a long time, titled “Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social-Media Platforms.” In it, I attempt to outline an Austrian law & economics theory of state action under the First Amendment, and then explain why it is important to the problem of misinformation on social-media platforms.

Read the full piece here.

Continue reading
Innovation & the New Economy

Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social-Media Platforms

ICLE White Paper “If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in . . .

“If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. If there are any circumstances which permit an exception, they do not now occur to us.” – West Virginia Board of Education v. Barnette (1943)[1]

“Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.” – United States v. Alvarez (2012)[2]

Introduction

In April 2022, the U.S. Department of Homeland Security (DHS) announced the creation of the Disinformation Governance Board, which would be designed to coordinate the agency’s response to the potential effects of disinformation threats.[3] Almost immediately upon its announcement, the agency was met with criticism. Congressional Republicans denounced the board as “Orwellian,”[4] and it was eventually disbanded.[5]

The DHS incident followed years of congressional hearings in which Republicans had castigated leaders of the so-called “Big Tech” firms for allegedly censoring conservatives, while Democrats had criticized those same leaders for failing to combat and remove misinformation.[6] Moreover, media outlets have reported on systematic attempts by government officials to encourage social-media companies to remove posts and users based on alleged misinformation. For example, The Intercept in 2022 reported on DHS efforts to set up backchannels with Facebook for flagging posts and misinformation.[7]

The “Twitter Files” released earlier this year by the company’s CEO Elon Musk—and subsequently reported on by journalists Bari Weiss, Matt Taibbi, and Michael Shellenberger—suggest considerable efforts by government agents to encourage Twitter to remove posts as misinformation and to bar specific users for being purveyors of misinformation.[8] What’s more, communications unveiled as part of discovery in the Missouri v. Biden case have offered further evidence of a variety of government actors cajoling social-media companies to remove alleged misinformation, along with the development of a considerable infrastructure to facilitate what appears to be a joint project to identify and remove the same.[9]

With all of these details coming into public view, the question that naturally arises is what role, if any, does the government have in regulating misinformation disseminated through online platforms? The thesis of this paper is that the First Amendment forecloses government agents’ ability to regulate misinformation online, but it protects the ability of private actors—i.e., the social-media companies themselves—to regulate misinformation on their platforms as they see fit.

The primary reason for this conclusion is the state-action doctrine, which distinguishes public from private action. Public action is subject to constitutional constraints (such as the First Amendment), while private action is not.[10] A further thesis of this paper is that application of the state-action doctrine to the question of misinformation on online platforms promotes the bedrock constitutional value of “protect[ing] a robust sphere of individual liberty,”[11] while also creating outlets for more speech to counteract false speech.[12]

Part I of this paper outlines a law & economics theory of state-action requirements under the First Amendment and explains its importance for the online social-media space. The right to editorial discretion and Section 230 will also be considered as part of this background law, which places the responsibility for regulating misinformation on private actors like social-media platforms. Such platforms must balance the interests of each side of their platforms to maximize value. This means, in part, setting moderation rules on misinformation that keep users engaged in order to provide increased opportunities to generate revenue from advertisers.

Part II considers various theories of state action and whether they apply to social-media platforms. It appears clear that some state-action theories—like the idea that social-media companies exercise a “traditional, exclusive public function”—are foreclosed in light of Manhattan Community Access Corp. v. Halleck. But it remains an open question whether a social-media company could be found to be a state actor under a coercion or collusion theory on facts like those revealed in the Twitter Files and in litigation over this question.

Part III completes the First Amendment analysis of what government agents can do to regulate misinformation on social media. The answer: not much. The U.S. Constitution forbids direct regulation of false speech simply because it is false. A more difficult question concerns how to define truth and falsity in contested areas of fact, where legal definitions may run into vagueness concerns. We recommend that government agents instead invest in telling their own version of the facts, recognizing that they have no authority to mandate that social-media companies regulate misinformation or to pressure them into doing so.

I.        A Theory of State Action and Speech Rights on Online Social-Media Platforms

Among the primary rationales for the First Amendment’s speech protections is to shield the “marketplace of ideas”:[13] in most circumstances, the best remedy for false or harmful speech is “more speech, not enforced silence.”[14] But this raises the question of why private abridgments of speech—such as those enforced by powerful online social-media platforms—should not be subject to the same First Amendment restrictions as government action.[15] After all, if the government can’t intervene in the marketplace of ideas by deciding what is true or false, then why should that privilege be held by Facebook or Google?

Here enters the state-action doctrine, the legal principle (discussed further below) that constitutional constraints bind only the government, though in some cases private entities may function as extensions of the state. Where that is so, the actions of such private actors give rise to the same First Amendment concerns as if the state had acted on its own. It has been said that there is insufficient theorizing about the “why” of the state-action doctrine.[16] What follows is a theory of why the state-action doctrine is fundamental to protecting those private intermediaries who are best positioned to make marginal decisions about the benefits and harms of speech, including social-media companies through their moderation policies on misinformation.

Governance structures are put in place by online platforms as a response to market pressures to limit misinformation and other harmful speech. At the same time, there are also market pressures to not go too far in limiting speech.[17] The balance that must be struck by online intermediaries is delicate, and there is no reason to expect government regulators to do a better job than the marketplace in determining the optimal rules. The state-action doctrine protects a marketplace for speech governance by limiting the government’s reach into these spaces.

In order to discuss the state-action doctrine meaningfully, we must first outline its basic contours and the why identified by the Supreme Court. In Part I.A, we will provide a description of the Supreme Court’s most recent First Amendment state-action decision, Manhattan Community Access Corp. v. Halleck, in which the Court both defines and defends the doctrine’s importance. We will also briefly consider how the state-action doctrine’s protection of private ordering is bolstered by the right to editorial discretion and by Section 230 of the Communications Decency Act of 1996.

We will then consider whether there are good theoretical reasons to support the First Amendment’s state-action doctrine. In Part I.B, we will apply insights from the law & economics tradition associated with the interaction of institutions and dispersed knowledge.[18] We argue that the First Amendment’s dichotomy between public and private action allows for the best use of dispersed knowledge in society by creating a marketplace for speech governance. We also argue that, by protecting this marketplace for speech governance from state action, the First Amendment creates the best institutional framework for reducing harms from misinformation.[19]

A.      The State-Action Doctrine, the Right to Editorial Discretion, and Section 230

At its most basic, the First Amendment’s state-action doctrine says that government agents may not restrict speech, whether through legislation, rules, or enforcement actions, or by putting undue burdens on speech exercised on government-owned property.[20] Such restrictions will receive varying levels of scrutiny from the courts, depending on the degree of incursion. On the other hand, the state-action doctrine means that, as a general matter, private actors may set rules for what speech they are willing to abide or promote, including rules for speech on their own property. With a few exceptions where private actors may be considered state actors,[21] these restrictions will receive no scrutiny from courts, and the government may actually help remove those who break privately set speech rules.[22]

In Halleck, the Court set out a strong defense of the state-action doctrine under the First Amendment. Justice Brett Kavanaugh, writing for the majority, defended the doctrine based on the text and purpose of the First Amendment:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law … abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law ….” § 1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech…

In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty…

It is sometimes said that the bigger the government, the smaller the individual. Consistent with the text of the Constitution, the state-action doctrine enforces a critical boundary between the government and the individual, and thereby protects a robust sphere of individual liberty. Expanding the state-action doctrine beyond its traditional boundaries would expand governmental control while restricting individual liberty and private enterprise.[23]

Applying the state-action doctrine, the Court held that even the heavily regulated operation of cable companies’ public-access channels constituted private action. The Court opined that “merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.”[24] The Court went on to explain:

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.[25]

Similarly, the Court has found that private actors have the right to editorial discretion that can’t generally be overcome by a government compelling the carriage of speech.[26] In Miami Herald v. Tornillo, the Supreme Court ruled that a right-to-reply statute for political candidates was unconstitutional because it “compel[s] editors or publishers to publish that which ‘reason tells them should not be published.’”[27] The Court found that the marketplace of ideas was still worth protecting from government-compelled speech, even in a media environment where most localities only had one (monopoly) newspaper.[28] The effect of Tornillo was to establish a general rule whereby the limits on media companies’ editorial discretion were defined not by government edict but by “the acceptance of a sufficient number of readers—and hence advertisers—to assure financial success; and, second, the journalistic integrity of its editors and publishers.”[29]

Section 230 of the Communications Decency Act supplements the First Amendment’s protections by granting “providers and users of an interactive computer service” immunity from (most) lawsuits for speech generated by other “information content providers” on their platforms.[30] The effect of this statute is far-ranging in its implications for online speech. It protects online social-media platforms from lawsuits for the third-party speech they host, as well as for the platforms’ decisions to take certain third-party speech down.[31]

As with the underlying First Amendment protections, Section 230 augments social-media companies’ ability to manage misinformation on their services. Specifically, it shields them from an unwarranted flood of litigation for failing to remove the defamatory speech of third parties when they make efforts to remove some undesirable speech from their platforms.

B.      Regulating Speech in Light of Dispersed Knowledge[32]

One of the key insights of the late Nobel laureate economist F.A. Hayek was that knowledge is dispersed.[33] In other words, no one person or centralized authority has access to all the tidbits of knowledge possessed by countless individuals spread out through society. Even the most intelligent among us have but a little bit more knowledge than the least intelligent. Thus, the economic problem facing society is not how to allocate “given” resources, but how to “secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know.”[34]

This is particularly important when considering the issue of regulating alleged misinformation. As noted above, the First Amendment is premised on the idea that a marketplace of ideas will lead to the best information eventually winning out, with false ideas pushed aside by true ones.[35] Much like the economic problem, there are few, if any, given answers that are true for all time when it comes to opinions or theories in science, the arts, or any other area of knowledge. Thus, the question is: how do we establish a system that promotes the generation and adoption of knowledge, recognizing there will be “market failures” (and possibly, corresponding “government failures”) along the way?

Like virtually any other human activity, speech has both benefits and costs. It is ultimately subjective individual preference that determines how to manage those tradeoffs. Although the First Amendment protects speech from governmental regulation, that does not mean that all speech is acceptable or must be tolerated. As noted above, U.S. law places the power to decide what speech to allow in the public square firmly in the hands of the people. The people’s preferences are expressed individually and collectively through their participation in online platforms, news media, local organizations, and other fora, and it is via that process that society arrives at workable solutions to such questions.

Very few people believe that all speech protected by the First Amendment should be without consequence, just as very few people, if pressed, would really believe it is, generally speaking, a wise idea to vest the power to determine what is true or false in a vast governmental bureaucracy. Instead, proposals for government regulation of misinformation generally are offered as expedients to effect short-term political goals that are perceived to be desirable. But given the dispersed nature of knowledge, and given that very few “facts” are set in stone for all time,[36] such proposals threaten to undermine the very process through which new knowledge is discovered and disseminated.

Moreover, such proposals completely fail to account for how “bad” speech has, in fact, long been regulated via informal means, or what one might call “private ordering.” In this sense, property rights have long played a crucial role in determining the speech rules of any given space. If a man were to come into another man’s house and start calling his wife racial epithets, the homeowner would not only have the right to ask that person to leave but could exercise his rights as a property owner to eject the trespasser—if necessary, calling the police to assist him. Similarly, a diner could not expect to go to a restaurant and yell at the top of her lungs about political issues and have the venue—even one designated as a “common carrier” or place of public accommodation—allow her to continue.[37] A Christian congregation may in most circumstances be extremely solicitous of outsiders with whom it wants to share its message, but it would likewise be well within its rights to prevent individuals from preaching about Buddhism or Islam within its walls.

In each of these examples, the individual or organization is entitled to eject individuals on the basis of their offensive (or misinformed) speech with no cognizable constitutional complaint about the violation of rights to free speech. The nature of what is deemed offensive is obviously context- and listener-dependent, but in each example, the proprietors of the relevant space are able to set and enforce appropriate speech rules. By contrast, a centralized authority would, by its nature, be forced to rely on far more generalized rules. As the economist Thomas Sowell once put it:

The fact that different costs and benefits must be balanced does not in itself imply who must balance them—or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.[38]

When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to listen. Asking government to make categorical decisions for all of society is substituting centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions regarding how and under what terms individuals may relate to one another—which can evolve over time in response to changes in what individuals find acceptable—governments can only hand down categorical guidelines: “you must allow a, b, and c speech” or “you must not allow x, y, and z speech.”

It is therefore a fraught proposition to suggest that government could have both a better understanding of what is true and false, and superior incentives to disseminate the truth, than the millions of individuals who make up society.[39] Indeed, it is a fundamental aspect of both the First Amendment’s Establishment Clause[40] and of free-speech jurisprudence[41] that the government is in no position to act as an arbiter of what is true or false.

Thus, as much as the First Amendment protects a marketplace of ideas, by excluding the government as a truth arbiter, it also protects a marketplace for speech governance. Private actors can set the rules for speech on their own property, including what is considered true or false, with minimal interference from the government. And as the Court put it in Halleck, opening one’s property to the speech of third parties does not obligate the owner to take all comers.[42]

This is particularly relevant in the social-media sphere. Social-media companies must resolve social-cost problems among their users.[43] In his famous work “The Problem of Social Cost,” the economist Ronald Coase argued that the traditional approach to regulating externalities was wrong, because it failed to apprehend the reciprocal nature of harms.[44] For example, the noise from a factory is a potential cost to the doctor next door who consequently can’t use his office to conduct certain testing, and simultaneously the doctor moving his office next door is a potential cost to the factory’s ability to use its equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the factory could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the factory to reduce the sound of its machines.[45] Similarly, on social media, misinformation and other speech that some users find offensive may be inoffensive or even patently true to other users. There is a reciprocal nature to the harms of offensive speech, much as with other forms of nuisance. But unlike the situation of the factory owner and the doctor, social-media users use the property of social-media companies, who must balance these varied interests to maximize the platform’s value.

Social-media companies are what economists call “multi-sided” platforms.[46] They are profit seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users will abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users sufficiently engaged that there are advertisers who will pay to reach them.

Facebook, Twitter, and YouTube have each set content-moderation standards that restrict many kinds of speech, including misinformation.[47] Some users view these policies negatively, particularly given that the First Amendment would foreclose the government from regulating those same types of content. But social-media companies’ ability to set and enforce moderation policies could actually be speech-enhancing. Because social-media companies are motivated to maximize the value of their platforms, for any given policy that gives rise to enforcement actions that leave some users disgruntled, there are likely to be an even greater number of users who agree with the policy. Moderation policies end up being speech-enhancing when they promote more speech overall, as the proliferation of harmful speech may push potential users away from the platforms.

Currently, all social-media companies rely on an advertising-driven revenue model. As a result, their primary goal is to maximize user engagement. As we have recently seen, this can lead to situations where advertisers threaten to pull ads if they don’t like the platform’s speech-governance decisions. After Elon Musk began restoring the accounts of Twitter users who had been banned for what the company’s prior leadership believed was promoting hate speech and misinformation, major advertisers left the platform.[48] A different business model (about which Musk has been hinting for some time[49]) might generate different incentives for what speech to allow and disallow. There would, however, still be a need for any platform to allow some speech and not other speech, in line with the expectations of its user base and advertisers. The bottom line is that the motive to maximize profits and the tendency of markets to aggregate information leaves the platforms themselves best positioned to make these incremental decisions about their users’ preferences, in response to the feedback mechanism of consumer demand.

Moreover, there is a fundamental difference between private action and state action, as alluded to by the Court in Halleck: one is voluntary, and the other based on coercion. If Facebook or Twitter suspends a user for violating community rules, that decision terminates a voluntary association. When the government removes someone from a public forum for expressing legal speech, its censorship and use of coercion are inextricably intertwined. The state-action doctrine empowers courts to police this distinction because the threats to liberty are much greater when one party in a dispute over the content of a particular expression is also empowered to impose its will with the use of force.

Imagine instead that courts were to decide that they, in fact, were best situated to balance private interests in speech against other interests, or even among speech interests. There are obvious limitations on courts’ access to knowledge that couldn’t be easily overcome through the processes of adjudication, which depend on the slow development of articulable facts and categorical reasoning over a lengthy period of time and an iterative series of cases. Private actors, on the other hand, can act relatively quickly and incrementally in response to ever-changing consumer demand in the marketplace. As Sowell put it:

The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.

The voluntariness of many actions—i.e., personal freedom—is valued by many simply for its own sake. In addition, however, voluntary decision-making processes have many advantages which are lost when courts attempt to prescribe results rather than define decision-making boundaries.[50]

The First Amendment’s complementary right of editorial discretion also protects the right of publishers, platforms, and other speakers to be free from an obligation to carry or transmit government-compelled speech.[51] In other words, not only is private regulation of speech not state action, but as a general matter, private regulation of speech is protected by the First Amendment from government action. The limits on editorial discretion are marketplace pressures, such as user demand and advertiser support, and social mores about what is acceptable to be published.[52]

There is no reason to think that social-media companies today are in a different position than was the newspaper in Tornillo.[53] These companies must determine what, how, and where content is presented within their platforms. While this right of editorial discretion protects social-media companies’ moderation decisions, its benefits accrue to society at large, whose members get to use those platforms to interact with people from around the world and thereby to grow the “marketplace of ideas.”

Moreover, Section 230 amplifies online platforms’ ability to make editorial decisions by immunizing most of their choices about third-party content. In fact, it is interesting to note that the heading for Section 230 is “Protection for private blocking and screening of offensive material.”[54] In other words, Section 230 is meant, along with the First Amendment, to establish a market for speech governance free from governmental interference.

Social-media companies’ abilities to differentiate themselves based on functionality and moderation policies are important aspects of competition among them.[55] How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes.[56] Market competition, not government power, has enabled internet users to have more avenues than ever to get their message out.[57]

If social-media users and advertisers demand less of the kinds of content commonly considered to be misinformation, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that centralizing decisions about misinformation by putting them in the hands of government officials would promote the societal interest in determining the truth.

It is true that content-moderation policies make it more difficult for speakers to communicate some messages, but that is precisely why they exist. There is a subset of protected speech to which many users do not wish to be subject, including at least some perceived misinformation. Moreover, speakers have no inherent right to an audience on a social-media platform. There are always alternative means to debate the contested issues of the day, even if it may be more costly to access the desired audience.

In sum, the First Amendment’s state-action doctrine assures us that government may not make the decision about what is true or false, or to restrict a citizen’s ability to reach an audience with ideas. Governments do, however, protect social-media companies’ rights to exercise editorial discretion on their own property, including their right to make decisions about regulating potential misinformation. This puts the decisions in the hands of the entities best placed to balance the societal demands for online speech and limits on misinformation. In other words, the state-action doctrine protects the marketplace of ideas.

II.      Are Online Platforms State Actors?

As the law currently stands, the First Amendment grants online platforms the right to exercise their own editorial discretion, free from government intervention. By contrast, if government agents pressure or coerce platforms into declaring certain speech misinformation, or to remove certain users, a key driver of the marketplace of ideas—the action of differentiated actors experimenting with differing speech policies—will be lost.[58]

Today’s public debate is not actually centered on a binary choice between purely private moderation and legislatively enacted statutes to literally define what is true and what is false. Instead, the prevailing concerns relate to the circumstances under which some government activity—such as chastising private actors for behaving badly, or informing those actors about known threats—might transform online platforms’ moderation policies into de facto state actions. That is, at what point do private moderation decisions constitute state action? To this end, we will now consider sets of facts under which online platforms could be considered state actors for the purposes of the First Amendment.

In Halleck, the Supreme Court laid out three exceptions to the general rule that private actors are not state actors:

Under this Court’s cases, a private entity can qualify as a state actor in a few limited circumstances—including, for example, (i) when the private entity performs a traditional, exclusive public function; (ii) when the government compels the private entity to take a particular action; or (iii) when the government acts jointly with the private entity.[59]

Below, we will consider each of these exceptions, as applied to online social-media platforms. Part II.A will make the case that Halleck decisively forecloses the theory that social-media platforms perform a “traditional, exclusive public function,” as many federal courts have already held. Part II.B will consider whether government agents have coerced or encouraged platforms to make specific enforcement decisions on misinformation in ways that would transform their moderation actions into state action. Part II.C will look at whether the social-media companies have essentially colluded with government actors, either through joint action or in a relationship sufficiently intertwined as to be symbiotic.

A.      ‘Traditional, Exclusive Public Function’

The classic case that illustrates the traditional, exclusive public function test is Marsh v. Alabama.[60] There, the Supreme Court found that a company town, while private, was a state actor for the purposes of the First Amendment. At issue was whether the company town could prevent a Jehovah’s Witness from passing out literature on the town’s sidewalks. The Court noted that “[o]wnership does not always mean absolute dominion. The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.”[61] The Court then situated the question as one where it was being asked to balance property rights with First Amendment rights. Within that framing, it found that the First Amendment’s protections should be in the “preferred position.”[62]

Although nothing in Marsh itself suggested a limitation to company towns or to a traditional, exclusive public function test, later courts eventually cabined its holding. Indeed, there was a time when it looked like the Court would extend Marsh’s reasoning to other private actors that were certainly not engaged in a traditional, exclusive public function. A trio of cases involving shopping malls eventually settled the question.

First, in Food Employees v. Logan Valley Plaza,[63] the Court—noting the “functional equivalence” of the business block in Marsh and the shopping center[64]—found that the mall could not restrict the peaceful picketing of a grocery store by a local food-workers union.[65]

But then, just a few years later, the Court seemingly cabined both Logan Valley and Marsh in Lloyd Corp. v. Tanner.[66] Noting the “economic anomaly” that was company towns, the Court said Marsh “simply held that where private interests were substituting for and performing the customary functions of government, First Amendment freedoms could not be denied where exercised in the customary manner on the town’s sidewalks and streets.”[67] Moreover, the Court found that Logan Valley applied “only in a context where the First Amendment activity was related to the shopping center’s operations.”[68] The general rule, according to the Court, was that private actors had the right to restrict access to their property by those seeking to use it to exercise free-speech rights.[69] Importantly, “property does not lose its private character merely because the public is generally invited to use it for designated purposes.”[70] Since the mall had not dedicated any part of its shopping center to public use in a way that would entitle the protestors to use it, the Court allowed it to restrict handbilling by Vietnam War protestors within the mall.[71]

Then, in Hudgens v. NLRB,[72] the Court went a step further, reversing Logan Valley and severely cabining Marsh. Now, the general rule was that “the constitutional guarantee of free speech is a guarantee only against abridgment by government, federal or state.”[73] Marsh became a narrow exception, limited to situations where private property has taken on all the attributes of a town.[74] The Court also found that the reasoning—if not the holding—of Tanner had already reversed Logan Valley.[75] The Court concluded bluntly that “under the present state of the law the constitutional guarantee of free expression has no part to play in a case such as this.”[76] In other words, private actors, even those that open themselves up to the public, are not subject to the First Amendment. Following Hudgens, the Court would further limit the public-function test to “the exercise by a private entity of powers traditionally exclusively reserved to the State.”[77] Thus, the “traditional, exclusive public function” test.

Despite this history, recent litigants against online social-media platforms have argued, often citing Marsh, that these platforms are the equivalent of public parks or other public forums for speech.[78] On top of that, the Supreme Court itself has described social-media platforms as the “modern public square.”[79] The Court emphasized the importance of online platforms because they:

allow[] users to gain access to information and communicate with one another about it on any subject that might come to mind… [give] access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to “become a town crier with a voice that resonates farther than it could from any soapbox.”[80]

Seizing upon this language, many litigants have argued that online social-media platforms are public forums for First Amendment purposes. To date, all have failed in federal court under this theory,[81] and the Supreme Court officially foreclosed it in Halleck.

In Halleck, the Court considered whether a public-access channel operated by a cable provider was a government actor for purposes of the First Amendment under the traditional, exclusive public function test. Summarizing the caselaw, the Court said the test required more than just a finding that the government at some point exercised that function, or that the function serves the public good. Instead, the government must have “traditionally and exclusively performed the function.”[82]

The Court then found that operating as a public forum for speech is not a function traditionally and exclusively performed by the government. On the contrary, a private actor that provides a forum for speech normally retains “editorial discretion over the speech and speakers in the forum”[83] because “[it] is not an activity that only governmental entities have traditionally performed.”[84] The Court reasoned that:

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.[85]

If the applicability of Halleck to the question of whether online social-media platforms are state actors under the “traditional, exclusive public function” test isn’t already clear, appellate courts have squarely addressed the question. In Prager University v. Google, LLC,[86] the 9th U.S. Circuit Court of Appeals took on the question of whether social-media platforms are state actors subject to the First Amendment. Prager relied primarily upon Marsh and Google’s representations that YouTube is a “public forum” to argue that YouTube is a state actor under the traditional, exclusive public function test.[87] Citing primarily Halleck, along with a healthy dose of both Hudgens and Tanner, the 9th Circuit rejected this argument for the reasons noted above.[88] YouTube was not a state actor just because it opened itself up to the public as a forum for free speech.

In sum, there is no basis for arguing that online social-media platforms fit into the narrow Marsh exception to the general rule that private actors can use their own editorial discretion over their own digital property to set their own rules for speech, including misinformation policies.

That this exception to the general private/state action dichotomy has been limited as applied to social-media platforms is consistent with the reasoning laid out above on the law & economics of the doctrine. Applying the Marsh theory to social-media companies would make all of their moderation decisions subject to First Amendment analysis. As will be discussed more below in Part III.A, this would severely limit the platforms’ ability to do anything at all with regard to online misinformation, since government actors can do very little to regulate such speech consistent with the First Amendment.

The inapplicability of the Marsh theory of state action means that a robust sphere of individual liberty will be protected. Social-media companies will be able to engage in a vibrant “market for speech governance” with respect to misinformation, responding to the perceived demands of users and advertisers and balancing those interests in a way that maximizes the value of their platforms in the presence of market competition.

B.      Government Compulsion or Encouragement

In light of the revelations highlighted in the introduction of this paper from The Intercept, the “Twitter Files,” and subsequent litigation in Missouri v. Biden,[89] the more salient theory of state action is that online social-media companies were either compelled by or colluded in joint action with the federal government to censor speech under their misinformation policies. This section will consider the government compulsion or encouragement theory and Part II.C below will consider the joint action/entwinement theory.

At a high level, the government may not coerce or encourage private actors to do what it may not itself constitutionally do.[90] State action can be found for a private decision under this theory “only when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.”[91] But “[m]ere approval of or acquiescence in the initiatives of a private party is not sufficient to justify holding the State responsible” for private actions.[92] While each case is very fact-specific,[93] courts have developed several tests to determine when government compulsion or encouragement would transform a private actor into a state actor for constitutional purposes.

For instance, in Bantam Books v. Sullivan,[94] the Court considered whether notices sent by a legislatively created commission to a book distributor, declaring certain books and magazines objectionable for sale or distribution, were sufficient to transform into state action the distributor’s subsequent decision to stop circulating the listed publications. The commission had no power to apply formal legal sanctions, and there were no bans or seizures of books.[95] In fact, the distributor was technically “free” to ignore the commission’s notices.[96] Nonetheless, the Court found “the Commission deliberately set about to achieve the suppression of publications deemed ‘objectionable’ and succeeded in its aim.”[97] Particularly important to the Court was that the notices could be seen as a threat to refer the distributor for prosecution, regardless of how the commission styled them. As the Court stated:

People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around, and [the distributor’s] reaction, according to uncontroverted testimony, was no exception to this general rule. The Commission’s notices, phrased virtually as orders, reasonably understood to be such by the distributor, invariably followed up by police visitations, in fact stopped the circulation of the listed publications ex proprio vigore. It would be naive to credit the State’s assertion that these blacklists are in the nature of mere legal advice, when they plainly serve as instruments of regulation…[98]

Similarly, in Carlin Communications v. Mountain States Telephone Co.,[99] the 9th U.S. Circuit Court of Appeals found it was state action when a deputy county attorney threatened prosecution of a regional telephone company for carrying an adult-entertainment messaging service.[100] “With this threat, Arizona ‘exercised coercive power’ over Mountain Bell and thereby converted its otherwise private conduct into state action…”[101] The court found it irrelevant whether the motivating reason for the removal was the threat of prosecution or the telephone company’s own independent decision.[102]

In a more recent case dealing with Backpage.com, the 7th U.S. Circuit Court of Appeals found a sheriff’s campaign to shut down the site by cutting off payment processing for ads from Visa and Mastercard was impermissible under the First Amendment.[103] There, the sheriff sent a letter to the credit-card companies asking them to “cease and desist” from processing payment for advertisements on Backpage.com and for “contact information” for someone within the companies he could work with.[104] The court spent considerable time distinguishing between “attempts to convince and attempts to coerce,”[105] coming to the conclusion that “Sheriff Dart is not permitted to issue and publicize dire threats against credit card companies that process payments made through Backpage’s website, including threats of prosecution (albeit not by him, but by other enforcement agencies that he urges to proceed against them), in an effort to throttle Backpage.”[106] The court also noted “a threat is actionable and thus can be enjoined even if it turns out to be empty—the victim ignores it, and the threatener folds his tent.”[107]

In sum, the focus under the coercion or encouragement theory is on what the state objectively did and not on the subjective understanding of the private actor. In other words, the question is whether the state action is reasonably understood as coercing or encouraging private action, not whether the private actor was actually responding to it.

To date, several federal courts have dismissed claims that social-media companies are state actors under the compulsion/encouragement theory, often distinguishing the above cases on the grounds that the facts did not establish a true threat, or were not sufficiently connected to the enforcement action against the plaintiff.

For instance, in O’Handley v. Weber,[108] the 9th U.S. Circuit Court of Appeals dealt directly with the coercion theory in the context of social-media companies moderating misinformation, allegedly at the behest of California’s Office of Elections Cybersecurity (OEC). The OEC flagged allegedly misleading posts on Facebook and Twitter, and the social-media companies removed most of those flagged posts.[109] First, the court found there were no threats from the OEC like those in Carlin, nor any incentive offered to take the posts down.[110] The court then distinguished between “attempts to convince and attempts to coerce,”[111] noting that “[a] private party can find the government’s stated reasons for making a request persuasive, just as it can be moved by any other speaker’s message. The First Amendment does not interfere with this communication so long as the intermediary is free to disagree with the government and to make its own independent judgment about whether to comply with the government’s request.”[112] The court concluded that the OEC did not pressure Twitter to take any particular action against the plaintiff, but went even further by emphasizing that, even if the OEC’s actions could be seen as a specific request to remove his post, Twitter’s compliance was “purely optional.”[113] In other words, if there is no threat in a government actor’s request to take down content, then it is not impermissible coercion or encouragement.

In Hart v. Facebook,[114] the plaintiff argued that the federal government defendants had—through threats of removing Section 230 immunity and antitrust investigations, as well as comments by President Joe Biden stating that social-media companies were “killing people” by not policing misinformation about COVID-19—coerced Facebook and Twitter into removing his posts.[115] The plaintiff also pointed to recommendations from Biden and an advisory from Surgeon General Vivek Murthy as further evidence of coercion or encouragement. The court rejected this evidence, stating that “the government’s vague recommendations and advisory opinions are not coercion. Nor can coercion be inferred from President Biden’s comment that social media companies are ‘killing people’… A President’s one-time statement about an industry does not convert into state action all later decisions by actors in that industry that are vaguely in line with the President’s preferences.”[116] But even more importantly, the court found that there was no connection between the allegations of coercion and the removal of his particular posts: “Hart has not alleged any connection between any (threat of) agency investigation and Facebook and Twitter’s decisions… even if Hart had plausibly pleaded that the Federal Defendants exercised coercive power over the companies’ misinformation policies, he still fails to specifically allege that they coerced action as to him.”[117]

Other First Amendment cases against social-media companies alleging coercion or encouragement by state actors have been dismissed for reasons similar to those in Hart.[118] But in Missouri et al. v. Biden, et al.,[119] the U.S. District Court for the Western District of Louisiana became the first court to find that social-media companies could be state actors for purposes of the First Amendment under a coercion or encouragement theory. After surveying mostly the same cases discussed above, the court found that:

Here, Plaintiffs have clearly alleged that Defendants attempted to convince social-media companies to censor certain viewpoints. For example, Plaintiffs allege that Psaki demanded the censorship of the “Disinformation Dozen” and publicly demanded faster censorship of “harmful posts” on Facebook. Further, the Complaint alleges threats, some thinly veiled and some blatant, made by Defendants in an attempt to effectuate its censorship program. One such alleged threat is that the Surgeon General issued a formal “Request for Information” to social-media platforms as an implied threat of future regulation to pressure them to increase censorship. Another alleged threat is the DHS’s publishing of repeated terrorism advisory bulletins indicating that “misinformation” and “disinformation” on social-media platforms are “domestic terror threats.” While not a direct threat, equating failure to comply with censorship demands as enabling acts of domestic terrorism through repeated official advisory bulletins is certainly an action social-media companies would not lightly disregard. Moreover, the Complaint contains over 100 paragraphs of allegations detailing “significant encouragement” in private (i.e., “covert”) communications between Defendants and social-media platforms.

The Complaint further alleges threats that far exceed, in both number and coercive power, the threats at issue in the above-mentioned cases. Specifically, Plaintiffs allege and link threats of official government action in the form of threats of antitrust legislation and/or enforcement and calls to amend or repeal Section 230 of the CDA with calls for more aggressive censorship and suppression of speakers and viewpoints that government officials disfavor. The Complaint even alleges, almost directly on point with the threats in Carlin and Backpage, that President Biden threatened civil liability and criminal prosecution against Mark Zuckerberg if Facebook did not increase censorship of political speech. The Court finds that the Complaint alleges significant encouragement and coercion that converts the otherwise private conduct of censorship on social-media platforms into state action, and is unpersuaded by Defendants’ arguments to the contrary.[120]

There is obvious tension between Missouri v. Biden and the O’Handley and Hart opinions. As noted above, the Missouri v. Biden court did attempt to incorporate O’Handley into its opinion. That court tried to distinguish O’Handley on the grounds that the OEC’s conduct at issue was merely advisory, whereas the federal defendants in Missouri v. Biden made threats against the plaintiffs.[121]

It is perhaps plausible that Hart can also be read as consistent with Missouri v. Biden, in the sense that while Hart failed to allege sufficient facts of coercion/encouragement or a connection with his specific removal, the plaintiffs in Missouri v. Biden did. Nonetheless, the Missouri v. Biden court accepted many factual arguments that were rejected in Hart, such as those about the relevance of certain statements made by President Biden and his press secretary; threats to revoke Section 230 liability protections; and threats to start antitrust proceedings. Perhaps the difference is that the factual allegations in Missouri v. Biden were substantially longer and more detailed than those in Hart. And while the Missouri v. Biden court did not address it in its First Amendment section, it did note that the social-media companies’ censorship actions generated sufficient injury-in-fact to the plaintiffs to establish standing.[122] In other words, it could just be that what makes the difference is the better factual pleading in Missouri v. Biden, due to more available revelations of government coercion and encouragement.[123]

On the other hand, there may be good reason to cabin Missouri v. Biden with some of the criteria from O’Handley and Hart. For instance, there could be value in the government having the ability to share information with social-media companies and to request review of certain posts and accounts that may purvey misinformation. O’Handley emphasizes that there is a difference between convincing and coercing. This distinction matters not only for dealing with online misinformation, but also for dealing with things like terrorist activity on the platforms. Insofar as Missouri v. Biden is too lenient in allowing cases to go forward, this may be a fruitful distinction for courts to clarify.[124]

Similarly, the requirement in Hart that a specific moderation decision be connected to a particular government action is very important to limit the universe of activity subject to First Amendment analysis. The Missouri v. Biden court didn’t deal sufficiently with whether the allegations of coercion and encouragement were connected to the plaintiffs’ content and accounts being censored. As Missouri v. Biden reaches the merits stage of the litigation, the court will also need to clarify the evidence needed to infer state action, assuming there is no explicit admission of direction by state actors.[125]

Under the law & economics theory laid out in Part I, the coercion or encouragement exception to the strong private/state action distinction is particularly important. The benefits of private social-media companies using their editorial judgment to remove misinformation in response to user and advertiser demand is significantly reduced when the government coerces, encourages, or otherwise induces moderation decisions. In such cases, the government is essentially engaged in covert regulation by deciding for private actors what is true and what is false. This is inconsistent with a “marketplace of ideas” or the “marketplace for speech governance” that the First Amendment’s state-action doctrine protects.

There is value, however, to limiting the Missouri v. Biden holding to ensure that not all requests by government agents automatically transform moderation decisions into state action, and in connecting coercion or encouragement to particular allegations of censorship. Government actors, as much as private actors, should be able to alert social-media companies to the presence of misinformation and even persuade social-media companies to act in certain cases, so long as that communication doesn’t amount to a threat. This is consistent with a “marketplace for speech governance.” Moreover, social-media companies shouldn’t be considered state actors for all moderation decisions, or even all moderation decisions regarding misinformation, due to government coercion or encouragement in general. Without a nexus between the coercion or encouragement and a particular moderation decision, social-media companies would lose the ability to use their editorial judgment on a wide variety of issues in response to market demand, to the detriment of their users and advertisers.

C.      Joint Action or Symbiotic Relationship

There is also state action for the purposes of the First Amendment when the government acts jointly with a private actor,[126] when there is a “symbiotic relationship” between the government and a private actor,[127] or when there is “inextricable entwinement” between a private actor and the government.[128] None of these theories is necessarily distinct,[129] and it is probably easier to define them through examples.[130]

In Lugar v. Edmondson Oil Co., the plaintiff, the operator of a truck stop, was indebted to his supplier.[131] The defendant was a creditor who used a Virginia state law to obtain a prejudgment attachment of the truck-stop operator’s property, which was then executed by the county sheriff.[132] A hearing was held 34 days later, pursuant to the relevant statute.[133] The levy at issue was dismissed because the creditor failed to satisfy the statute. The plaintiff then brought a Section 1983 claim against the defendant on grounds that it had violated the plaintiff’s Due Process rights by taking his property without first providing him with a hearing. The Supreme Court took the case to clarify how the state-action doctrine applied in such matters. The Court, citing previous cases, stated that:

Private persons, jointly engaged with state officials in the prohibited action, are acting “under color” of law for purposes of the statute. To act “under color” of law does not require that the accused be an officer of the State. It is enough that he is a willful participant in joint activity with the State or its agents.[134]

The Court also noted that “we have consistently held that a private party’s joint participation with state officials in the seizure of disputed property is sufficient to characterize that party as a ‘state actor.’”[135] Accordingly, the Court found that the defendant’s use of the prejudgment statute was state action that violated Due Process.[136]

In Burton v. Wilmington Parking Authority,[137] the Court heard a racial-discrimination case in which the question was whether state action was involved when a restaurant refused to serve black customers in a space it leased in a publicly owned building attached to a public parking garage.[138] The Court determined that it was state action, noting that “[i]t cannot be doubted that the peculiar relationship of the restaurant to the parking facility in which it is located confers on each an incidental variety of mutual benefits… Addition of all these activities, obligations and responsibilities of the Authority, the benefits mutually conferred, together with the obvious fact that the restaurant is operated as an integral part of a public building devoted to a public parking service, indicates that degree of state participation and involvement in discriminatory action which it was the design of the Fourteenth Amendment to condemn.”[139] While the Court didn’t itself call this theory the “symbiotic relationship” test in Burton, later Court opinions did exactly that.[140]

Brentwood Academy v. Tennessee Secondary School Athletic Association concerned a dispute between a private Christian school and the statewide athletics association governing interscholastic sports over a series of punishments for alleged “undue influence” in recruiting athletes.[141] The central issue was whether the athletic association was a state actor. The Court analyzed whether state actors were so “entwined” with the private actors in the association as to make the resulting action state action.[142] After reviewing the record, the Court noted that 84% of the members of the athletic association were public schools and that the association’s rules were made by representatives from those schools.[143] The Court concluded that the “entwinement down from the State Board is therefore unmistakable, just as the entwinement up from the member public schools is overwhelming. Entwinement will support a conclusion that an ostensibly private organization ought to be charged with a public character and judged by constitutional standards; entwinement to the degree shown here requires it.”[144]

Other cases have also considered circumstances in which government regulation, combined with other government actions, can make private action attributable to the government. In Skinner v. Railway Labor Executives Association,[145] the Court examined private railroads’ drug testing of employees, conducted pursuant to a federal regulation that authorized the railroads to adopt drug-testing policies and preempted state laws restricting such testing.[146] The Court stated that “[t]he fact that the Government has not compelled a private party to perform a search does not, by itself, establish that the search is a private one. Here, specific features of the regulations combine to convince us that the Government did more than adopt a passive position toward the underlying private conduct.”[147] The Court found the preemption of state law particularly important, noting that “[t]he Government has removed all legal barriers to the testing authorized by Subpart D and indeed has made plain not only its strong preference for testing, but also its desire to share the fruits of such intrusions.”[148]

Each of these theories has been pursued by litigants who have had social-media posts or accounts removed by online platforms due to alleged misinformation, including in the O’Handley and Hart cases discussed earlier.

For instance, in O’Handley, the 9th Circuit rejected the argument that Twitter was a state actor under the joint-action test. The court stated there were two ways to prove joint action: either by a conspiracy theory that requires a “meeting of the minds” to violate constitutional rights, or by a “willful participant” theory that requires “a high degree of cooperation between private parties and state officials.”[149] The court rejected the conspiracy theory, stating there was no meeting of the minds to violate constitutional rights because Twitter had its own independent interest in “not allowing users to leverage its platform to mislead voters.”[150] The court also rejected the willful-participant theory because Twitter was free to consider and reject flags made by the OEC in the Partner Support Portal under its own understanding of its policy on misinformation.[151] The court analogized the case to Mathis v. Pac. Gas & Elec. Co.,[152] finding this “closely resembles the ‘consultation and information sharing’ that we held did not rise to the level of joint action.”[153] The court concluded that “this was an arm’s-length relationship, and Twitter never took its hands off the wheel.”[154]

Similarly, in Hart, the U.S. District Court for the Northern District of California rejected the joint action theory as applied to Twitter and Facebook. The court found that much of the complained-of conduct by Facebook predated the communications with the federal defendants about misinformation, making it unlikely that there was a “meeting of the minds” to deprive the plaintiff of his constitutional rights.[155] The court also found “the Federal Defendants’ statements… far too vague and precatory to suggest joint action,” adding that recommendations and advisories are both vague and unenforceable.[156] Other courts followed similar reasoning in rejecting First Amendment claims against social-media companies.[157]

Finally, in Children’s Health Defense v. Facebook,[158] the court considered whether Section 230, much like the regulation at issue in Skinner, could make Facebook a joint actor with the state when it removes misinformation. The U.S. District Court for the Northern District of California distinguished Skinner, citing a previous case finding that “[u]nlike the regulations in Skinner, Section 230 does not require private entities to do anything, nor does it give the government a right to supervise or obtain information about private activity.”[159]

In Missouri v. Biden, a federal district court for the first time found state action under the joint-action or entwinement theory. The court found that:

Here, Plaintiffs have plausibly alleged joint action, entwinement, and/or that specific features of Defendants’ actions combined to create state action. For example, the Complaint alleges that “[o]nce in control of the Executive Branch, Defendants promptly capitalized on these threats by pressuring, cajoling, and openly colluding with social-media companies to actively suppress particular disfavored speakers and viewpoints on social media.” Specifically, Plaintiffs allege that Dr. Fauci, other CDC officials, officials of the Census Bureau, CISA, officials at HHS, the state department, and members of the FBI actively and directly coordinated with social-media companies to push, flag, and encourage censorship of posts the Government deemed “Mis, Dis, or Malinformation.”[160]

The court also distinguished O’Handley, finding there was more than an “arm’s-length relationship” between the federal defendants and the social-media companies:

Plaintiffs allege a formal government-created system for federal officials to influence social-media censorship decisions. For example, the Complaint alleges that federal officials set up a long series of formal meetings to discuss censorship, setting up privileged reporting channels to demand censorship, and funding and establishing federal-private partnership to procure censorship of disfavored viewpoints. The Complaint clearly alleges that Defendants specifically authorized and approved the actions of the social-media companies and gives dozens of examples where Defendants dictated specific censorship decisions to social-media platforms. These allegations are a far cry from the complained-of action in O’Handley: a single message from an unidentified member of a state agency to Twitter.[161]

Finally, the court also found similarities between Skinner and Missouri v. Biden that would support a finding of state action:

Section 230 of the CDA purports to preempt state laws to the contrary, thus removing all legal barriers to the censorship immunized by Section 230. Federal officials have also made plain a strong preference and desire to “share the fruits of such intrusions,” showing “clear indices of the Government’s encouragement, endorsement, and participation” in censorship, which “suffice to implicate the [First] Amendment.”

The Complaint further explicitly alleges subsidization, authorization, and preemption through Section 230, stating: “[T]hrough Section 230 of the Communications Decency Act (CDA) and other actions, the federal government subsidized, fostered, encouraged, and empowered the creation of a small number of massive social-media companies with disproportionate ability to censor and suppress speech on the basis of speaker, content, and viewpoint.” Section 230 immunity constitutes the type of “tangible financial aid,” here worth billions of dollars per year, that the Supreme Court identified in Norwood, 413 U.S. at 466, 93 S.Ct. 2804. This immunity also “has a significant tendency to facilitate, reinforce, and support private” censorship. Id. Combined with other factors such as the coercive statements and significant entwinement of federal officials and censorship decisions on social-media platforms, as in Skinner, this serves as another basis for finding government action.[162]

Again, there is tension among these opinions on the intersection of social media and the First Amendment under the joint-action or symbiotic-relationship test. But there are ways to read the cases consistently. First, there were far more factual allegations in Missouri v. Biden than in the O’Handley, Hart, or Children’s Health Defense cases, particularly regarding how involved the federal defendants were in prodding social-media companies to moderate misinformation. There is even a way to read the different legal conclusions on Section 230 and Skinner consistently. The court in Missouri v. Biden made clear that it wasn’t Section 230 alone that made the case like Skinner, but the combination of Section 230 immunity with other factors present:

The Defendants’ alleged use of Section 230’s immunity—and its obvious financial incentives for social-media companies—as a metaphorical carrot-and-stick combined with the alleged back-room meetings, hands-on approach to online censorship, and other factors discussed above transforms Defendants’ actions into state action. As Defendants note, Section 230 was designed to “reflect a deliberate absence of government involvement in regulating online speech,” but has instead, according to Plaintiffs’ allegations, become a tool for coercion used to encourage significant joint action between federal agencies and social-media companies.[163]

While there could be dangers inherent in treating Section 230 alone as a basis for finding that social-media companies are state actors, the court appears inclined to say it is not Section 230 itself but rather the threat of removing it, along with the federal government’s other dealings and communications, that makes this state action.

Under the law & economics theory outlined in Part I, the joint-action or symbiotic-relationship test is also an important exception to the general dichotomy between private and state action. In particular, it is important to deter state officials from engaging in surreptitious speech regulation by covertly interjecting themselves into social-media companies’ moderation decisions. The allegations in Missouri v. Biden, if proven true, do appear to outline a vast and largely hidden infrastructure through which federal officials use backchannels to routinely discuss social-media companies’ moderation decisions and often pressure them into removing disfavored content in the name of misinformation. This kind of government intervention into the “marketplace of ideas” and the “market for private speech governance” takes away companies’ ability to respond freely to market incentives in moderating misinformation, and replaces their own editorial discretion with the opinions of government officials.

III.    Applying the First Amendment to Government Regulation of Online Misinformation

A number of potential consequences might stem from a plausible claim of state action levied against online platforms under one of the theories described above. Part III.A will explore the likely result, which is that a true censorship-by-deputization scheme enacted through social-media companies would be found to violate the First Amendment. Part III.B will consider the question of remedies: even if there is a First Amendment violation, those whose content or accounts have been removed may not see them restored. Part III.C will then offer alternative ways for the government to deal with the problem of online misinformation without offending the First Amendment.

A.      If State Action Is Found, Removal of Content Under Misinformation Policies Would Violate the First Amendment

At a high level, First Amendment jurisprudence does allow for government regulation of speech in limited circumstances. In those cases, the threshold questions are whether the speech at issue is protected and whether the regulation is content-based.[164] If the speech is protected and the regulation is content-based, then the government must show the state action is narrowly tailored to a compelling governmental interest: this is the so-called “strict scrutiny” standard.[165] A compelling governmental interest is the highest interest the state has, something considered necessary or crucial, and beyond simply legitimate or important.[166] “Narrow tailoring” means the regulation uses the least-restrictive means “among available, effective alternatives.”[167] While not an impossible standard for the government to meet, “[s]trict scrutiny leave[s] few survivors.”[168] Moreover, prior restraints of speech, which are defined as situations where speech is restricted before publication, are presumptively unconstitutional.[169]

Only content- and viewpoint-neutral “time, place, and manner restrictions” on protected speech receive less than strict scrutiny.[170] In those cases, the regulation is permissible so long as it is narrowly tailored to serve a “significant” government interest and leaves open ample alternative channels for the expression.[171]

There are also situations where speech regulation—whether because the regulation aims at conduct but has speech elements or because the speech is not fully protected for some other reason—receives “intermediate scrutiny.”[172] In those cases, the government must show the state action is narrowly tailored to an important or substantial governmental interest, and burdens no more speech than necessary.[173] Beyond the levels of scrutiny to which speech regulation is subject, state actions involving speech also may be struck down for overbreadth[174] or vagueness.[175] Together, these doctrines work to protect a very large sphere of speech, beyond what is protected in most jurisdictions around the world.

The initial question that arises with alleged misinformation is how to even define it. Neither social-media companies nor the government actors on whose behalf they may be acting are necessarily experts in misinformation. This can result in “void-for-vagueness” problems.

In Høeg v. Newsom,[176] the U.S. District Court for the Eastern District of California considered California’s state law AB 2098, which would charge medical doctors with “unprofessional conduct” and subject them to discipline if they shared with patients “false information that is contradicted by contemporary scientific consensus contrary to the standard of care” as part of treatment or advice.[177] The court stated that “[a] statute is unconstitutionally vague when it either ‘fails to provide a person of ordinary intelligence fair notice of what is prohibited, or is so standardless that it authorizes or encourages seriously discriminatory enforcement’”[178] and that “[v]ague statutes are particularly objectionable when they ‘involve sensitive areas of First Amendment freedoms” because “they operate to inhibit the exercise of those freedoms.’”[179] The court rejected the invitation to apply a lower vagueness standard typically used for technical language because “contemporary scientific consensus” has no established technical meaning in the scientific community.[180] The court also asked a series of questions that would be particularly relevant to social-media companies acting on behalf of government actors in efforts to combat misinformation:

[W]ho determines whether a consensus exists to begin with? If a consensus does exist, among whom must the consensus exist (for example practicing physicians, or professional organizations, or medical researchers, or public health officials, or perhaps a combination)? In which geographic area must the consensus exist (California, or the United States, or the world)? What level of agreement constitutes a consensus (perhaps a plurality, or a majority, or a supermajority)? How recently in time must the consensus have been established to be considered “contemporary”? And what source or sources should physicians consult to determine what the consensus is at any given time (perhaps peer-reviewed scientific articles, or clinical guidelines from professional organizations, or public health recommendations)?[181]

The court noted that defining the consensus with reference to pronouncements from the U.S. Centers for Disease Control and Prevention or the World Health Organization would be unhelpful, as those entities changed their recommendations on several important health issues over the course of the COVID-19 pandemic:

Physician plaintiffs explain how, throughout the course of the COVID-19 pandemic, scientific understanding of the virus has rapidly and repeatedly changed. (Høeg Decl. ¶¶ 15-29; Duriseti Decl. ¶¶ 7-15; Kheriaty Decl. ¶¶ 7-10; Mazolewski Decl. ¶¶ 12-13.) Physician plaintiffs further explain that because of the novel nature of the virus and ongoing disagreement among the scientific community, no true “consensus” has or can exist at this stage. (See id.) Expert declarant Dr. Verma similarly explains that a “scientific consensus” concerning COVID-19 is an illusory concept, given how rapidly the scientific understanding and accepted conclusions about the virus have changed. Dr. Verma explains in detail how the so-called “consensus” has developed and shifted, often within mere months, throughout the COVID-19 pandemic. (Verma Decl. ¶¶ 13-42.) He also explains how certain conclusions once considered to be within the scientific consensus were later proved to be false. (Id. ¶¶ 8-10.) Because of this unique context, the concept of “scientific consensus” as applied to COVID-19 is inherently flawed.[182]

As a result, the court determined that “[b]ecause the term ‘scientific consensus’ is so ill-defined, physician plaintiffs are unable to determine if their intended conduct contradicts the scientific consensus, and accordingly ‘what is prohibited by the law.’”[183] The court granted a preliminary injunction against the law because the plaintiffs showed a high likelihood of success on the merits.[184]

Assuming the government could define misinformation in a way that is not unconstitutionally vague, the next question is what level of First Amendment scrutiny such edicts would receive. It is clear for several reasons that regulation of online misinformation would receive, and fail, the highest form of constitutional scrutiny.

First, the threat of government censorship of speech through social-media misinformation policies could be considered a prior restraint. Prior restraints occur when the government (or actors on their behalf) restrict speech before publication. As the Supreme Court has put it many times, “any system of prior restraints of expression comes to this Court bearing a heavy presumption against its constitutional validity.”[185]

In Missouri v. Biden, the court found the plaintiffs had plausibly alleged prior restraints against their speech, and noted that “[t]hreatening penalties for future speech goes by the name of ‘prior restraint,’ and a prior restraint is the quintessential first-amendment violation.”[186] The court found it relevant that social-media companies could “silence” speakers’ voices at a “mere flick of the switch,”[187] and noted this could amount to “a prior restraint by preventing a user of the social-media platform from voicing their opinion at all.”[188] The court further stated that “bans, shadow-bans, and other forms of restrictions on Plaintiffs’ social-media accounts, are… de facto prior restraints, [a] clear violation of the First Amendment.”[189]

Second, it is clear that any restriction on speech based upon its truth or falsity would be a content-based regulation, and likely a viewpoint-based regulation, as it would require the state actor to take a side on a matter of dispute.[190] Content-based regulation requires strict scrutiny, and a reasonable case can be made that viewpoint-based regulation of speech is per se inconsistent with the First Amendment.[191]

In Missouri v. Biden, the court noted that “[g]overnment action, aimed at the suppression of particular views on a subject which discriminates on the basis of viewpoint, is presumptively unconstitutional.”[192] The court found that “[p]laintiffs allege a regime of censorship that targets specific viewpoints deemed mis-, dis-, or malinformation by federal officials. Because Plaintiffs allege that Defendants are targeting particular views taken by speakers on a specific subject, they have alleged a clear violation of the First Amendment, i.e., viewpoint discrimination.”[193]

Third, even assuming there is clearly false speech that government agents (and social-media companies acting on their behalf) could identify, false speech presumptively receives full First Amendment protection. In United States v. Alvarez,[194] the Supreme Court stated that while older cases may have suggested that false speech does not receive full protection, those statements were “confined to the few ‘historic and traditional categories [of expression] long familiar to the bar.’”[195] In other words, there is no “general exception to the First Amendment for false statements.”[196] Thus, as protected speech, any regulation of false speech, as such, would run into strict scrutiny.

In order to survive First Amendment scrutiny, government agents acting through social-media companies would have to demonstrate a justification parallel to those that permit regulation of the categories of low-value speech the Supreme Court has recognized as outside the protection of the First Amendment.[197] These exceptions include defamation, fraud, the tort of false light, false statements to government officials, perjury, falsely representing oneself as speaking for the government (and impersonation), and other similar examples of fraud or false speech integral to criminal conduct.[198]

But the Alvarez Court noted that, even in areas where false speech does not receive protection, such as fraud and defamation, the Supreme Court has found the First Amendment requires that claims of fraud be based on more than falsity alone.[199]

When it comes to fraud,[200] for instance, the Supreme Court has repeatedly noted that the First Amendment offers no protection.[201] But “[s]imply labeling an action one for ‘fraud’… will not carry the day.”[202] Prophylactic rules aimed at protecting the public from the (sometimes fraudulent) solicitation of charitable donations, for instance, have been found to be unconstitutional prior restraints on several occasions by the Court.[203] The Court has found that “in a properly tailored fraud action the State bears the full burden of proof. False statement alone does not subject a fundraiser to fraud liability… Exacting proof requirements… have been held to provide sufficient breathing room for protected speech.”[204]

As for defamation,[205] the Supreme Court found in New York Times v. Sullivan[206] that “[a]uthoritative interpretations of the First Amendment guarantees have consistently refused to recognize an exception for any test of truth—whether administered by judges, juries, or administrative officials—and especially one that puts the burden of proving truth on the speaker.”[207] In Sullivan, the Court struck down an Alabama defamation statute, finding that in situations dealing with public officials, the mens rea must be actual malice: knowledge that the statement was false or reckless disregard for whether it was false.[208]

Since none of these exceptions would apply to online misinformation dealing with medicine or election law, social-media companies’ actions on behalf of the government against such misinformation would likely fail strict scrutiny. While it is possible that a court would find protecting public health or election security to be a compelling interest, the government would still face great difficulty showing that a ban on false information is narrowly tailored. It is highly unlikely that a ban on false information, as such, will ever be the least-restrictive means of controlling a harm. As the Court put it in Alvarez:

The remedy for speech that is false is speech that is true… Freedom of speech and thought flows not from the beneficence of the state but from the inalienable rights of the person. And suppression of speech by the government can make exposure of falsity more difficult, not less so. Society has the right and civic duty to engage in open, dynamic, rational discourse. These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.[209]

As argued above in Part I, a vibrant marketplace of ideas requires that individuals have the ability to express their ideas so that the best ones can win out. This means counter-speech, rather than censorship by government actors, is the better way for society to determine what is true. The First Amendment’s protection against government intervention into the marketplace of ideas thus promotes a better answer to online misinformation. A holding that government actors cannot use social-media companies to censor protected speech, based on vague definitions of misinformation and through prior restraints and viewpoint discrimination, is consistent with an understanding of a world in which information is dispersed.

B.      The Problem of Remedies for Social-Media ‘Censorship’: The First Amendment Still Only Applies to Government Action

There is a problem, however, for plaintiffs who win cases against social-media companies found to be state actors in removing posts and accounts due to alleged misinformation: the available remedies are limited.

First, once the state action is removed through an injunction, social-media companies would be free to continue to moderate misinformation as they see fit, free from any plausible First Amendment claim. For instance, in Carlin Communications, the 9th Circuit found that, once the state action was enjoined, the telecommunications company was again free to determine whether or not to extend its service to the plaintiff. As the court put it:

Mountain Bell insists that its new policy reflected its independent business judgment. Carlin argues that Mountain Bell was continuing to yield to state threats of prosecution. However, the factual question of Mountain Bell’s true motivations is immaterial.

This is true because, inasmuch as the state under the facts before us may not coerce or otherwise induce Mountain Bell to deprive Carlin of its communication channel, Mountain Bell is now free to once again extend its 976 service to Carlin. Our decision substantially immunizes Mountain Bell from state pressure to do otherwise. Should Mountain Bell not wish to extend its 976 service to Carlin, it is also free to do that. Our decision modifies its public utility status to permit this action. Mountain Bell and Carlin may contract, or not contract, as they wish.[210]

This is consistent with the district court’s actions in Missouri v. Biden. There, the court granted the motion for a preliminary injunction, but the injunction applied only against government action and not against the social-media companies at all.[211] For instance, the injunction prohibits a number of named federal officials and agencies from:

(1) meeting with social-media companies for the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms;

(2) specifically flagging content or posts on social-media platforms and/or forwarding such to social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;

(3) urging, encouraging, pressuring, or inducing in any manner social-media companies to change their guidelines for removing, deleting, suppressing, or reducing content containing protected free speech;

(4) emailing, calling, sending letters, texting, or engaging in any communication of any kind with social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;

(5) collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group for the purpose of urging, encouraging, pressuring, or inducing in any manner removal, deletion, suppression, or reduction of content posted with social-media companies containing protected free speech;

(6) threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech;

(7) taking any action such as urging, encouraging, pressuring, or inducing in any manner social-media companies to remove, delete, suppress, or reduce posted content protected by the Free Speech Clause of the First Amendment to the United States Constitution;

(8) following up with social-media companies to determine whether the social-media companies removed, deleted, suppressed, or reduced previous social-media postings containing protected free speech;

(9) requesting content reports from social-media companies detailing actions taken to remove, delete, suppress, or reduce content containing protected free speech; and

(10) notifying social-media companies to Be on The Lookout (BOLO) for postings containing protected free speech.[212]

In other words, a social-media company would not necessarily even be required to reinstate accounts or posts of those who have been excluded under their misinformation policies. It would become a question of whether, responding to marketplace incentives sans government involvement, the social-media companies continue to find it in their interest to enforce such policies against those affected persons and associated content.

Another avenue for private plaintiffs may be a civil-rights claim under Section 1983.[213] If it can be proved that social-media companies participated in a joint action with government officials to restrict First Amendment rights, it may be possible to collect damages from them, as well as from government officials.[214] Plaintiffs may struggle, however, to prove compensatory damages, which require proof of harm. Categories of harm like physical injury are not relevant to social-media moderation policies, leaving theories like diminished earnings or impairment of reputation. In most cases, the damages to plaintiffs are likely de minimis and hardly worth the expense of filing suit. To receive punitive damages, plaintiffs would have to prove that “the defendant’s conduct is… motivated by evil motive or intent, or when it involves reckless or callous indifference to the federally protected rights of others.”[215] This would be difficult to establish against the social-media companies absent an admission in the record that their goal was to suppress rights, rather than to restrict misinformation in good faith or simply to accede to government inducements.

The remedies available for constitutional violations in claims aimed at government officials are consistent with a theory of the First Amendment that prioritizes protecting the marketplace of ideas from intervention. While it leaves many plaintiffs with limited remedies against the social-media companies once the government actions are enjoined or deterred, it does return the situation to one where the social-media companies can freely compete in a market for speech governance on misinformation, as well.

C.      What Can the Government Do Under the First Amendment in Response to Misinformation on Social-Media Platforms?

If direct government regulation or implicit intervention through coercion or collusion with social-media companies is impermissible, the question may then arise as to what, exactly, the government can do to combat online misinformation.

The first option was already discussed in Part III.A in relation to Alvarez and narrow tailoring: counter-speech. Government agencies concerned about health or election misinformation could use social-media platforms to get their own message out. Those agencies could even amplify and target such counter-speech through advertising campaigns tailored to those most likely to share or receive misinformation.

Similarly, government agencies could create their own apps or social-media platforms to publicize information that counters alleged misinformation. While this may at first appear to be an unusual step, the federal government does, through the Corporation for Public Broadcasting, subsidize public television and public radio. If there is a fear of online misinformation, creating a platform where the government can promote its own point of view could combat online misinformation in a way that doesn’t offend the First Amendment.

Additionally, as discussed above in Part II.B in relation to O’Handley and the distinction between convincing and coercing: the government may flag alleged misinformation and even attempt to persuade social-media companies to act, so long as such communications involve no implicit or explicit threats of regulation or prosecution if nothing is done. The U.S. District Court for the Western District of Louisiana distinguished between constitutional government speech and unconstitutional coercion or encouragement in its memorandum accompanying its preliminary injunction in Missouri v. Biden:

Defendants also argue that a preliminary injunction would restrict the Defendants’ right to government speech and would transform government speech into government action whenever the Government comments on public policy matters. The Court finds, however, that a preliminary injunction here would not prohibit government speech… The Defendants argue that by making public statements, this is nothing but government speech. However, it was not the public statements that were the problem. It was the alleged use of government agencies and employees to coerce and/or significantly encourage social-media platforms to suppress free speech on those platforms. Plaintiffs point specifically to the various meetings, emails, follow-up contacts, and the threat of amending Section 230 of the Communication Decency Act. Plaintiffs have produced evidence that Defendants did not just use public statements to coerce and/or encourage social-media platforms to suppress free speech, but rather used meetings, emails, phone calls, follow-up meetings, and the power of the government to pressure social-media platforms to change their policies and to suppress free speech. Content was seemingly suppressed even if it did not violate social-media policies. It is the alleged coercion and/or significant encouragement that likely violates the Free Speech Clause, not government speech, and thus, the Court is not persuaded by Defendants’ arguments here.[216]

As the court highlights, there is a special danger in government communications that remain opaque to the public. Requests for action from social-media companies on misinformation should all be public information and not conducted behind closed doors or in covert communications. Such transparency would make it much easier for the public and the courts to determine whether state actors are engaged in government speech or crossing the line into coercion or substantial encouragement to suppress speech.

On the other hand, laws like Florida’s recent SB 262[217] go beyond the delicate First Amendment balance that courts have tried to achieve. That law limits government officials’ ability to share any information with social-media companies regarding misinformation, restricting contacts to requests to remove criminal content or accounts, or to investigations or inquiries to prevent imminent bodily harm, loss of life, or property damage.[218] While going beyond the First Amendment standard may be constitutional, these restrictions could be especially harmful when the government has information that is not otherwise available to the public. As important as it is to restrict government intervention, it would harm the marketplace of ideas to prevent government participation altogether.

Finally, Section 230 reform efforts aimed at limiting immunity in instances where social-media companies have “red flag” knowledge of defamatory material would be another constitutional way to address misinformation.[219] For instance, if a social-media company were presented with evidence that a court or arbitrator has found certain statements to be untrue, it could be required to make reasonable efforts to take down such misinformation and keep it down.

Such a proposal would have real-world benefits. For instance, in the recent litigation brought by Dominion Voting Systems against Fox News, the court found that the various factual claims about Dominion rigging the election for Joseph Biden were false.[220] While there was no final finding of liability because Fox and Dominion reached a settlement,[221] if Dominion were to present the court’s findings to a social-media company, the company would, under this proposal, have an obligation to remove content that repeats the claims the court found to be false. Similarly, an arbitrator’s finding that MyPillow CEO Mike Lindell’s claims of evidence of Chinese interference in the election were demonstrably false[222] could be enough to have those claims removed. And following the recent finding that Rudy Giuliani is liable for defaming two Georgia election workers, content repeating those defamatory claims could likewise be removed.[223]

However, these benefits may be limited by the fact that not every defamation claim resolves with a court finding a statement false. Some cases settle before they get that far, and the underlying claims remain unproven allegations. And, as discussed above, defamation itself is not easy to prove, especially for public figures, who must also show “actual malice.”[224] As a result, many cases won’t even be brought. This means there could be quite a bit of defamatory information put out into the world that courts or arbitrators are unlikely to have occasion to consider.

On the other hand, making a social-media company responsible for removing allegedly defamatory information in the absence of a competent legal authority finding the underlying claim false would be ripe for abuse and could have drastic chilling effects on speech. Thus, any Section 230 reform must be limited to those occasions where a court or arbitrator of competent authority (and with some finality of judgment) has spoken on the falsity of a statement.

Conclusion

There is an important distinction in First Amendment jurisprudence between private and state action. To promote a free market in ideas, we must also protect private speech governance, like that of social-media companies. Private actors are best placed to balance users’ demand for speech platforms against the need to moderate misinformation.

But when the government puts its thumb on the scale by pressuring those companies to remove content or users in the name of misinformation, there is no longer a free marketplace of ideas. The state-action doctrine includes exceptions that allow courts to enjoin government actors from coercing or colluding with private actors to do what would be illegal for the government to do itself. Government censorship by deputization is no more permissible than direct regulation of alleged misinformation.

There are, however, things the government can do to combat misinformation, including counter-speech and nonthreatening communications with social-media platforms. Section 230 could also be modified to require the takedown of adjudicated misinformation in certain cases.

At the end of the day, the government’s role in defining or policing misinformation is necessarily limited in our constitutional system. The production of true knowledge in the marketplace of ideas may not be perfect, but it is the least bad system we have yet created.

[1] West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943).

[2] United States v. Alvarez, 567 U.S. 709, 728 (2012).

[3] See Amanda Seitz, Disinformation Board to Tackle Russia, Migrant Smugglers, Associated Press (Apr. 28, 2022), https://apnews.com/article/russia-ukraine-immigration-media-europe-misinformation-4e873389889bb1d9e2ad8659d9975e9d.

[4] See, e.g., Rep. Doug LaMalfa, Brave New World? Orwellian ‘Disinformation Governance Board’ Goes Against Nation’s Principles, The Hill (May 4, 2022), https://thehill.com/opinion/congress-blog/3476632-brave-new-world-orwellian-disinformation-governance-board-goes-against-nations-principles; Letter to Secretary Mayorkas from Ranking Members of the House Committee on Oversight and Reform (Apr. 29, 2022), available at https://oversight.house.gov/wp-content/uploads/2022/04/Letter-to-DHS-re-Disinformation-Governance-Board-04292022.pdf (stating that “DHS is creating the Orwellian-named ‘Disinformation Governance Board’”); Jon Jackson, Joe Biden’s Disinformation Board Likened to Orwell’s ‘Ministry of Truth’, Newsweek (Apr. 29, 2022), https://www.newsweek.com/joe-bidens-disinformation-board-likened-orwells-ministry-truth-1702190.

[5] See Geneva Sands, DHS Shuts Down Disinformation Board Months After Its Efforts Were Paused, CNN (Aug. 24, 2022), https://www.cnn.com/2022/08/24/politics/dhs-disinformation-board-shut-down/index.html.

[6] For an example of this type of hearing, see Preserving Free Speech and Reining in Big Tech Censorship, Hearing before the U.S. House Energy and Commerce Subcommittee on Communications and Technology (Mar. 28, 2023), https://www.congress.gov/event/118th-congress/house-event/115561.

[7] See Ken Klippenstein & Lee Fang, Truth Cops: Leaked Documents Outline DHS’s Plans to Police Disinformation, The Intercept (Oct. 31, 2022), https://theintercept.com/2022/10/31/social-media-disinformation-dhs.

[8] See Matt Taibbi, Capsule Summaries of all Twitter Files Threads to Date, With Links and a Glossary, Racket News (last updated Mar. 17, 2023), https://www.racket.news/p/capsule-summaries-of-all-twitter. For evidence that Facebook received similar pressure from and/or colluded with government officials, see Robby Soave, Inside the Facebook Files: Emails Reveal the CDC’s Role in Silencing COVID-19 Dissent, reason (Jan. 19, 2023), https://reason.com/2023/01/19/facebook-files-emails-cdc-covid-vaccines-censorship; Ryan Tracy, Facebook Bowed to White House Pressure, Removed Covid Posts, Wall St. J. (Jul. 28, 2023), https://www.wsj.com/articles/facebook-bowed-to-white-house-pressure-removed-covid-posts-2df436b7.

[9] See Missouri, et al. v. Biden, et al., No. 23-30445 (5th Cir. Sept. 8, 2023), slip op. at 2-14, available at https://www.ca5.uscourts.gov/opinions/pub/23/23-30445-CV0.pdf. See also Hearing on the Weaponization of the Federal Government, Hearing Before the Select Subcomm. on the Weaponization of the Fed. Gov’t (Mar. 30, 2023) (written testimony of D. John Sauer), available at https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/2023-03/Sauer-Testimony.pdf.

[10] See infra Part I.

[11] Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019).

[12] Cf. Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring) (“If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence”).

[13] See, e.g., Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting) (“Persecution for the expression of opinions seems to me perfectly logical. If you have no doubt of your premises or your power and want a certain result with all your heart you naturally express your wishes in law and sweep away all opposition. To allow opposition by speech seems to indicate that you think the speech impotent, as when a man says that he has squared the circle, or that you do not care whole-heartedly for the result, or that you doubt either your power or your premises. But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution. It is an experiment, as all life is an experiment. Every year if not every day we have to wager our salvation upon some prophecy based upon imperfect knowledge. While that experiment is part of our system I think that we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death, unless they so imminently threaten immediate interference with the lawful and pressing purposes of the law that an immediate check is required to save the country.”).

[14] Whitney v. California, 274 U.S. 357, 377 (1927). See also, Alvarez, 567 U.S. at 727-28 (“The remedy for speech that is false is speech that is true. This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth. The theory of our Constitution is ‘that the best test of truth is the power of the thought to get itself accepted in the competition of the market.’ The First Amendment itself ensures the right to respond to speech we do not like, and for good reason. Freedom of speech and thought flows not from the beneficence of the state but from the inalienable rights of the person. And suppression of speech by the government can make exposure of falsity more difficult, not less so. Society has the right and civic duty to engage in open, dynamic, rational discourse. These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.”) (citations omitted).

[15] See, e.g., Jonathan Peters, The “Sovereigns of Cyberspace” and State Action: The First Amendment’s Applications—or Lack Thereof—to Third-Party Platforms, 32 Berk. Tech. L. J. 989 (2017).

[16] See id. at 990, 992 (2017) (emphasizing the need to “talk about the [state action doctrine] until we settle on a view both conceptually and functionally right.”) (citing Charles L. Black, Jr., The Supreme Court, 1966 Term—Foreword: “State Action,” Equal Protection, and California’s Proposition 14, 81 Harv. L. Rev. 69, 70 (1967)).

[17] Or, in the framing of some: to allow too much harmful speech, including misinformation, if it drives attention to the platforms for more ads to be served. See Karen Hao, How Facebook and Google Fund Global Misinformation, MIT Tech. Rev. (Nov. 20, 2021), https://www.technologyreview.com/2021/11/20/1039076/facebook-google-disinformation-clickbait.

[18] See, e.g., Thomas Sowell, Knowledge and Decisions (1980).

[19] That is to say, the marketplace will not perfectly remove misinformation, but will navigate the tradeoffs inherent in limiting misinformation without empowering any one individual or central authority to determine what is true.

[20] See, e.g., Halleck, 139 S. Ct. at 1928; Denver Area Ed. Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727, 737 (1996) (plurality opinion); Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 566 (1995); Hudgens v. NLRB, 424 U.S. 507, 513 (1976).

[21] See Part II below.

[22] For instance, a person could order a visitor to leave their home for saying something offensive and the police would, if called upon, help to eject them as trespassers. In general, courts will enforce private speech restrictions that governments could never constitutionally enact. See Mark D. Rosen, Was Shelley v. Kraemer Incorrectly Decided? Some New Answers, 95 Cal. L. Rev. 451, 458-61 (2007) (listing a number of cases where the holding of Shelley v. Kraemer that court enforcement of private agreements was state action did not extend to the First Amendment, meaning that private agreements to limit speech are enforced).

[23] Halleck, 139 S. Ct. at 1928, 1934 (citations omitted) (emphasis added).

[24] Id. at 1930.

[25] Id. at 1930-31.

[26] It is worth noting that application of the right to editorial discretion to social-media companies is a question that will soon be before the Supreme Court in response to common-carriage laws passed in Florida and Texas that would require carriage of certain speech. The 5th and 11th U.S. Circuit Courts of Appeals have come to opposite conclusions on this point. Compare NetChoice, LLC v. Moody, 34 F.4th 1196 (11th Cir. 2022) (finding the right to editorial discretion was violated by Florida’s common-carriage law), with NetChoice, LLC v. Paxton, 49 F.4th 439 (5th Cir. 2022) (finding the right to editorial discretion was not violated by Texas’ common-carriage law).

[27] Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, 256 (1974).

[28] See id. at 247-54.

[29] Id. at 255 (citing Columbia Broadcasting System, Inc. v. Democratic National Committee, 412 U.S. 94, 117 (1973)).

[30] 47 U.S.C. §230(c).

[31] For a further discussion, see generally Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L. J. 26 (2022).

[32] Much of this section is adapted from Ben Sperry, An L&E Defense of the First Amendment’s Protection of Private Ordering, Truth on the Market (Apr. 23, 2021), https://truthonthemarket.com/2021/04/23/an-le-defense-of-the-first-amendments-protection-of-private-ordering.

[33] See F.A. Hayek, The Use of Knowledge in Society, 35 Am. Econ. Rev. 519 (1945).

[34] Id. at 520.

[35] See supra notes 13-14 and associated text. See also David Schultz, Marketplace of Ideas, First Amendment Encyclopedia, https://www.mtsu.edu/first-amendment/article/999/marketplace-of-ideas (last updated Jun. 2017 by David L. Hudson) (noting the history of the “marketplace of ideas” justification by the Supreme Court for the First Amendment’s protection of free speech from government intervention); J.S. Mill, On Liberty, Ch. 2 (1859); John Milton, Areopagitica (1644).

[36] Without delving too far into epistemology, some argue that this is even the case in the scientific realm. See, e.g., Thomas Kuhn, The Structure of Scientific Revolutions (1962). Even according to the perspective that some things are universally true across time and space, they still amount to a tiny fraction of what we call human knowledge. “Information” may be a better term for what economists are actually talking about.

[37] The Supreme Court has recently affirmed that the government may not compel speech by businesses subject to public-accommodation laws. See 303 Creative LLC v. Elenis, No. 21-476, slip op. (Jun. 30, 2023), available at https://www.supremecourt.gov/opinions/22pdf/21-476_c185.pdf. The Court will soon also have to determine whether common-carriage laws can be applied to social-media companies consistent with the First Amendment in the NetChoice cases noted above. See supra note 26.

[38] Sowell, supra note 18, at 240.

[39] Even those whom we most trust to have considered opinions and an understanding of the facts may themselves experience “expert failure”—a type of market failure—that is made likelier still when government rules serve to insulate such experts from market competition. See generally Roger Koppl, Expert Failure (2018).

[40] See, e.g., West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943) (“If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. If there are any circumstances which permit an exception, they do not now occur to us.”).

[41] See, e.g., Alvarez, 567 U.S. at 728 (“Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.”).

[42] Cf. Halleck, 139 S. Ct. at 1930-31.

[43] For a good explanation, see Jamie Whyte, Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?, ICLE White Paper (Sep. 2021), available at https://laweconcenter.org/wp-content/uploads/2021/09/Whyte-Polluting-Words-2021.pdf.

[44] R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1, 2 (1960) (“The traditional approach has tended to obscure the nature of the choice that has to be made. The question is commonly thought of as one in which A inflicts harm on B and what has to be decided is: how should we restrain A? But this is wrong. We are dealing with a problem of a reciprocal nature. To avoid the harm to B would inflict harm on A. The real question that has to be decided is: should A be allowed to harm B or should B be allowed to harm A? The problem is to avoid the more serious harm.”).

[45] See id. at 8-10.

[46] See generally David S. Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms (2016).

[47] For more on how and why social-media companies govern online speech, see Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).

[48] See Kate Conger, Tiffany Hsu, & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, The New York Times (Nov. 1, 2022), https://www.nytimes.com/2022/11/01/technology/elon-musk-twitter-advertisers.html (“[A]dvertisers — which provide about 90 percent of Twitter’s revenue — are increasingly grappling with Mr. Musk’s ownership of the platform. The billionaire, who is meeting advertising executives in New York this week, has spooked some advertisers because he has said he would loosen Twitter’s content rules, which could lead to a surge in misinformation and other toxic content.”); Ryan Mac & Tiffany Hsu, Twitter’s US Ad Sales Plunge 59% as Woes Continue, The New York Times (Jun. 5, 2023), https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html (“Six ad agency executives who have worked with Twitter said their clients continued to limit spending on the platform. They cited confusion over Mr. Musk’s changes to the service, inconsistent support from Twitter and concerns about the persistent presence of misleading and toxic content on the platform.”).

[49] See, e.g., Brian Fung, Twitter Prepares to Roll Out New Paid Subscription Service That Includes Blue Checkmark, CNN (Nov. 5, 2022), https://www.cnn.com/2022/11/05/business/twitter-blue-checkmark-paid-subscription/index.html.

[50] Sowell, supra note 18, at 244.

[51] See Halleck, 139 S. Ct. at 1931 (“The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property.”).

[52] Cf. Tornillo, 418 U.S. at 255 (“The power of a privately owned newspaper to advance its own political, social, and economic views is bounded by only two factors: first, the acceptance of a sufficient number of readers—and hence advertisers —to assure financial success; and, second, the journalistic integrity of its editors and publishers.”).

[53] See Ben Sperry & R.J. Lehmann, Gov. Desantis’ Unconstitutional Attack on Social Media, Tampa Bay Times (Mar. 3, 2021), https://www.tampabay.com/opinion/2021/03/03/gov-desantis-unconstitutional-attack-on-social-media-column (“Social-media companies and other tech platforms find themselves in a very similar position [as the newspaper in Tornillo] today. Just as newspapers do, Facebook, Google and Twitter have the right to determine what kind of content they want on their platforms. This means they can choose whether and how to moderate users’ news feeds, search results and timelines consistent with their own views on, for example, what they consider to be hate speech or misinformation. There is no obligation for them to carry speech they don’t wish to carry, which is why DeSantis’ proposal is certain to be struck down.”).

[54] See 47 U.S.C. §230.

[55] See, e.g., Jennifer Huddleston, Competition and Content Moderation: How Section 230 Enables Increased Tech Marketplace Entry, at 4, Cato Policy Analysis No. 922 (Jan. 31, 2022), available at https://www.cato.org/sites/cato.org/files/2022-01/policy-analysis-922.pdf (“The freedom to adopt content moderation policies tailored to their specific business model, their advertisers, and their target customer base allows new platforms to please internet users who are not being served by traditional media. In some cases, the audience that a new platform seeks to serve is fairly narrowly tailored. This flexibility to tailor content moderation policies to the specific platform’s community of users, which Section 230 provides, has made it possible for websites to establish online communities for a highly diverse range of people and interests, ranging from victims of sexual assault, political conservatives, the LGBTQ+ community, and women of color to religious communities, passionate stamp collectors, researchers of orphan diseases, and a thousand other affinity groups. Changing Section 230 to require websites to accept all comers, or to limit the ability to moderate content in a way that serves specific needs, would seriously curtail platforms’ ability to serve users who might otherwise be ignored by incumbent services or traditional editors.”). 

[56] See, e.g., Rui Gu, Lih-Bin Oh, & Kanliang Wang, Multi-Homing On SNSS: The Role of Optimum Stimulation Level and Perceived Complementarity in Need Gratification, 53 Information & Management 752 (2016), available at https://kd.nsfc.gov.cn/paperDownload/ZD19894097.pdf (“Given the increasingly intense competition for social networking sites (SNSs), ensuring sustainable growth in user base has emerged as a critical issue for SNS operators. Contrary to the common belief that SNS users are committed to using one SNS, anecdotal evidence suggests that most users use multiple SNSs simultaneously. This study attempts to understand this phenomenon of users’ multi-homing on SNSs. Building upon optimum stimulation level (OSL) theory, uses and gratifications theory, and literature on choice complementarity, a theoretical model for investigating SNS users’ multi-homing intention is proposed. An analysis of survey data collected from 383 SNS users shows that OSL positively affects users’ perceived complementarity between different SNSs in gratifying their four facets of needs, namely, interpersonal communication, self-presentation, information, and entertainment. Among the four dimensions of perceived complementarity, only interpersonal communication and information aspects significantly affect users’ intention to multi-home on SNSs. The results from this study offer theoretical and practical implications for understanding and managing users’ multi-homing use of SNSs.”).

[57] See, e.g., How Has Social Media Emerged as a Powerful Communication Medium, University Canada West Blog (Sep. 25, 2022), https://www.ucanwest.ca/blog/media-communication/how-has-social-media-emerged-as-a-powerful-communication-medium:

Social media has taken over the business sphere, the advertising sphere and additionally, the education sector. It has had a long-lasting impact on the way people communicate and has now become an integral part of their lives. For instance, WhatsApp has redefined the culture of IMs (instant messaging) and taken it to a whole new level. Today, you can text anyone across the globe as long as you have an internet connection. This transformation has not only been brought about by WhatsApp but also Facebook, Twitter, LinkedIn and Instagram. The importance of social media in communication is a constant topic of discussion.

Online communication has brought information to people and audiences that previously could not be reached. It has increased awareness among people about what is happening in other parts of the world. A perfect example of the social media’s reach can be seen in the way the story about the Amazon Rainforest fire spread. It started with a single post and was soon present on everyone’s newsfeed across different social media platforms.

Movements, advertisements and products are all being broadcasted on social media platforms, thanks to the increase in the social media users. Today, businesses rely on social media to create brand awareness as well as to promote and sell their products. It allows organizations to reach customers, irrespective of geographical boundaries. The internet has facilitated a resource to humankind that has unfathomable reach and benefits.

[58] Governmental intervention here could be particularly destructive if it leads to the imposition of “expert” opinions from insulated government actors in the “intelligence community.” Koppl, in his study on expert failure, described the situation as “the entangled deep state,” stating in relevant part:

The entangled deep state is an only partially hidden informal network linking the intelligence community, military, political parties, large corporations including defense contractors, and others. While the interests of participants in the entangled deep state often conflict, members of the deep state share a common interest in maintaining the status quo of the political system independently of democratic processes. Therefore, denizens of the entangled deep state may sometimes have an incentive to act, potentially in secret, to tamp down resistant voices and to weaken forces challenging the political status quo… The entangled deep state produces the rule of experts. Experts must often choose for the people because the knowledge on the basis of which choices are made is secret, and the very choice being made may also be a secret involving, supposedly, “national security.”… The “intelligence community” has incentives that are not aligned with the general welfare or with democratic process. Koppl, supra note 39, at 228, 230-31.

[59] Halleck, 139 S. Ct. at 1928 (internal citations omitted).

[60] 326 U.S. 501 (1946).

[61] Id. at 506.

[62] Id. at 509 (“When we balance the Constitutional rights of owners of property against those of the people to enjoy freedom of press and religion, as we must here, we remain mindful of the fact that the latter occupy a preferred position.”).

[63] 391 U.S. 308 (1968).

[64] See id. at 316-19. In particular, see id. at 318 (“The shopping center here is clearly the functional equivalent of the business district of Chickasaw involved in Marsh.”).

[65] See id. at 325.

[66] 407 U.S. 551 (1972).

[67] Id. at 562.

[68] Id.

[69] See id. at 568 (“[T]he courts properly have shown a special solicitude for the guarantees of the First Amendment, this Court has never held that a trespasser or an uninvited guest may exercise general rights of free speech on property privately owned and used nondiscriminatorily for private purposes only.”).

[70] Id. at 569.

[71] See id. at 570.

[72] 424 U.S. 507 (1976).

[73] Id. at 513.

[74] See id. at 516 (“Under what circumstances can private property be treated as though it were public? The answer that Marsh gives is when that property has taken on all the attributes of a town, i.e., ‘residential buildings, streets, a system of sewers, a sewage disposal plant and a “business block” on which business places are situated.’”) (quoting Logan Valley, 391 U.S. at 332 (Black, J., dissenting) (quoting Marsh, 326 U.S. at 502)).

[75] See id. at 518 (“It matters not that some Members of the Court may continue to believe that the Logan Valley case was rightly decided. Our institutional duty is to follow until changed the law as it now is, not as some Members of the Court might wish it to be. And in the performance of that duty we make clear now, if it was not clear before, that the rationale of Logan Valley did not survive the Court’s decision in the Lloyd case.”).

[76] Id. at 521.

[77] Jackson v. Metropolitan Edison Co., 419 U.S. 345, 352 (1974).

[78] See, e.g., the discussion about Prager University v. Google below.

[79] Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017).

[80] Id. (internal citation omitted).

[81] See, e.g., Brock v. Zuckerberg, 2021 WL 2650070, at *3 (S.D.N.Y. Jun. 25, 2021); Freedom Watch, Inc. v. Google Inc., 816 F. App’x 497, 499 (D.C. Cir. 2020); Zimmerman v. Facebook, Inc., 2020 WL 5877863 at *2 (N.D. Cal. Oct. 2, 2020); Ebeid v. Facebook, Inc., 2019 WL 2059662 at *6 (N.D. Cal. May 9, 2019); Green v. YouTube, LLC, 2019 WL 1428890, at *4 (D.N.H. Mar. 13, 2019); Nyabwa v. FaceBook, 2018 WL 585467, at *1 (S.D. Tex. Jan. 26, 2018); Shulman v. Facebook.com, 2017 WL 5129885, at *4 (D.N.J. Nov. 6, 2017).

[82] Halleck, 139 S. Ct. at 1929 (emphasis in original).

[83] Id. at 1930.

[84] Id.

[85] Id. at 1930-31.

[86] 951 F.3d 991 (9th Cir. 2020).

[87] See id. at 997-98. See also, Prager University v. Google, LLC, 2018 WL 1471939, at *6 (N.D. Cal. Mar. 26, 2018) (“Plaintiff primarily relies on the United States Supreme Court’s decision in Marsh v. Alabama to support its argument, but Marsh plainly did not go so far as to hold that any private property owner “who operates its property as a public forum for speech” automatically becomes a state actor who must comply with the First Amendment.”).

[88] See PragerU, 951 F.3d at 996-99 (citing Halleck 12 times, Hudgens 3 times, and Tanner 3 times).

[89] See supra notes 7-9 and associated text.

[90] Cf. Norwood v. Harrison, 413 U.S. 455, 465 (1973) (“It is axiomatic that a state may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.”).

[91] Blum v. Yaretsky, 457 U.S. 991, 1004 (1982).

[92] Id. at 1004-05.

[93] Id. (noting that “the factual setting of each case will be significant”).

[94] 372 U.S. 58 (1963).

[95] See id. at 66-67.

[96] See id. at 68.

[97] Id. at 67.

[98] Id. at 68-69.

[99] 827 F.2d 1291 (9th Cir. 1987).

[100] See id. at 1295.

[101] Id.

[102] See id. (“Simply by ‘command[ing] a particular result,’ the state had so involved itself that it could not claim the conduct had actually occurred as a result of private choice.”) (quoting Peterson v. City of Greenville, 373 U.S. 244, 248 (1963)).

[103] See Backpage.com, LLC v. Dart, 807 F.3d 229 (7th Cir. 2015).

[104] See id. at 231, 232.

[105] Id. at 230.

[106] Id. at 235.

[107] Id. at 231.

[108] 2023 WL 2443073 (9th Cir. Mar. 10, 2023).

[109] See id. at *2-3.

[110] See id. at *5-6.

[111] Id. at *6.

[112] Id.

[113] Id.

[114] 2022 WL 1427507 (N.D. Cal. May 5, 2022).

[115] See id. at *8.

[116] Id.

[117] Id. (emphasis in original).

[118] See, e.g., Trump v. Twitter, 602 F. Supp. 3d 1213, 1218-26 (2022); Children’s Health Def. v. Facebook, 546 F. Supp. 3d 909, 932-33 (2021).

[119] 2023 WL 2578260 (W.D. La. Mar. 20, 2023). See also Missouri, et al. v. Biden, et al., 2023 WL 4335270 (W.D. La. Jul. 4, 2023) (memorandum opinion granting the plaintiffs’ motion for preliminary injunction).

[120] 2023 WL 2578260 at *30-31.

[121] See id.

[122] See id. at *17-19.

[123] It is worth noting that all of these cases were decided at the motion-to-dismiss stage, during which all of the plaintiffs’ allegations are assumed to be true. The plaintiffs in Missouri v. Biden will have to prove their factual case of state action. Now that the Western District of Louisiana has ruled on the motion for preliminary injunction, it is likely that there will be an appeal before the case gets to the merits.

[124] The district court in Missouri v. Biden discussed this distinction further in the memorandum ruling on request for preliminary injunction:

The Defendants argue that by making public statements, this is nothing but government speech. However, it was not the public statements that were the problem. It was the alleged use of government agencies and employees to coerce and/or significantly encourage social-media platforms to suppress free speech on those platforms. Plaintiffs point specifically to the various meetings, emails, follow-up contacts, and the threat of amending Section 230 of the Communication Decency Act. Plaintiffs have produced evidence that Defendants did not just use public statements to coerce and/or encourage social-media platforms to suppress free speech, but rather used meetings, emails, phone calls, follow-up meetings, and the power of the government to pressure social-media platforms to change their policies and to suppress free speech. Content was seemingly suppressed even if it did not violate social-media policies. It is the alleged coercion and/or significant encouragement that likely violates the Free Speech Clause, not government speech, and thus, the Court is not persuaded by Defendants’ arguments here.

Missouri v. Biden, 2023 WL 4335270, at *56 (W.D. La. July 4, 2023).

[125] While the district court did talk in significantly greater detail about specific allegations as to each federal defendant’s actions in coercing or encouraging changes in moderation policies or enforcement actions, there is still a lack of specificity as to how it affected the plaintiffs. See id. at *45-53 (applying the coercion/encouragement standard to each federal defendant). As in its earlier decision at the motion-to-dismiss stage, the court’s opinion accompanying the preliminary injunction does deal with this issue to a much greater degree in its discussion of standing, and specifically of traceability. See id. at *61-62:

Here, Defendants heavily rely upon the premise that social-media companies would have censored Plaintiffs and/or modified their content moderation policies even without any alleged encouragement and coercion from Defendants or other Government officials. This argument is wholly unpersuasive. Unlike previous cases that left ample room to question whether public officials’ calls for censorship were fairly traceable to the Government; the instant case paints a full picture. A drastic increase in censorship, deboosting, shadow-banning, and account suspensions directly coincided with Defendants’ public calls for censorship and private demands for censorship. Specific instances of censorship substantially likely to be the direct result of Government involvement are too numerous to fully detail, but a birds-eye view shows a clear connection between Defendants’ actions and Plaintiffs injuries.

The Plaintiffs’ theory of but-for causation is easy to follow and demonstrates a high likelihood of success as to establishing Article III traceability. Government officials began publicly threatening social-media companies with adverse legislation as early as 2018. In the wake of COVID-19 and the 2020 election, the threats intensified and became more direct. Around this same time, Defendants began having extensive contact with social-media companies via emails, phone calls, and in-person meetings. This contact, paired with the public threats and tense relations between the Biden administration and social-media companies, seemingly resulted in an efficient report-and-censor relationship between Defendants and social-media companies. Against this backdrop, it is insincere to describe the likelihood of proving a causal connection between Defendants’ actions and Plaintiffs’ injuries as too attenuated or purely hypothetical.

The evidence presented thus goes far beyond mere generalizations or conjecture: Plaintiffs have demonstrated that they are likely to prevail and establish a causal and temporal link between Defendants’ actions and the social-media companies’ censorship decisions. Accordingly, this Court finds that there is a substantial likelihood that Plaintiffs would not have been the victims of viewpoint discrimination but for the coercion and significant encouragement of Defendants towards social-media companies to increase their online censorship efforts.

[126] See Lugar v. Edmondson Oil Co., 457 U.S. 922, 941-42 (1982).

[127] See Brentwood Acad. v. Tennessee Secondary Sch. Athletic Ass’n, 531 U.S. 288, 294 (2001).

[128] See id. at 296.

[129] For instance, in Mathis v. Pacific Gas & Elec. Co., 75 F.3d 498 (9th Cir. 1996), the Ninth Circuit described the plaintiff’s “joint action” theory as one under which a private person can be liable only if the particular actions challenged are “inextricably intertwined” with those of the government. See id. at 503.

[130] See Brentwood, 531 U.S. at 296 (noting that “examples may be the best teachers”).

[131] See Lugar, 457 U.S. at 925.

[132] See id.

[133] See id.

[134] Id. at 941 (internal citations omitted).

[135] Id.

[136] See id. at 942.

[137] 365 U.S. 715 (1961).

[138] See id. at 717-20.

[139] Id. at 724.

[140] See Rendell-Baker v. Kohn, 457 U.S. 830, 842-43 (1982).

[141] See Brentwood, 531 U.S. at 292-93.

[142] See id. at 296 (“[A] challenged activity may be state action… when it is ‘entwined with governmental policies,’ or when government is ‘entwined in [its] management or control.’”) (internal citations omitted).

[143] See id. at 298-301.

[144] Id. at 302.

[145] 489 U.S. 602 (1989).

[146] See id. at 606-12, 615.

[147] Id. at 615.

[148] Id.

[149] O’Handley, 2023 WL 2443073, at *7.

[150] Id.

[151] See id. at *7-8.

[152] 75 F.3d 498 (9th Cir. 1996).

[153] O’Handley, 2023 WL 2443073, at *8.

[154] Id.

[155] Hart, 2022 WL 1427507, at *6.

[156] Id. at *7.

[157] See, e.g., Fed. Agency of News LLC v. Facebook, Inc., 432 F. Supp. 3d 1107, 1124-27 (N.D. Cal. 2020); Children’s Health Def. v. Facebook Inc., 546 F. Supp. 3d 909, 927-31 (N.D. Cal. 2021); Berenson v. Twitter, 2022 WL 1289049, at *3 (N.D. Cal. Apr. 29, 2022).

[158] 546 F. Supp. 3d 909 (N.D. Cal. 2021).

[159] Id. at 932 (citing Divino Grp. LLC v. Google LLC, 2021 WL 51715, at *6 (N.D. Cal. Jan. 6, 2021)).

[160] Missouri v. Biden, 2023 WL 2578260, at *33.

[161] Id.

[162] Id. at *33-34.

[163] Id. at *34.

[164] A government action is content based if it cannot be applied without considering the content of the speech at issue. See, e.g., Reed v. Town of Gilbert, Ariz., 576 U.S. 155, 163 (2015) (“Government regulation of speech is content based if a law applies to particular speech because of the topic discussed or the idea or message expressed.”).

[165] See, e.g., Citizens United v. Fed. Election Comm’n, 558 U.S. 310, 340 (2010) (“Laws that burden political speech are ‘subject to strict scrutiny,’ which requires the Government to prove that the restriction ‘furthers a compelling interest and is narrowly tailored to achieve that interest.’”) (internal citations omitted).

[166] See Fulton v. City of Philadelphia, Pennsylvania, 141 S. Ct. 1868, 1881 (2021) (“A government policy can survive strict scrutiny only if it advances ‘interests of the highest order’…”).

[167] Ashcroft v. ACLU, 542 U.S. 656, 666 (2004). In that case, the Court compared the Child Online Protection Act’s age-gating requirement, intended to protect children from online pornography, to blocking and filtering software available in the marketplace, and found those alternatives likely to be less restrictive. The Court therefore affirmed the preliminary injunction against the law’s enforcement. See id. at 666-70.

[168] City of Los Angeles v. Alameda Books, Inc., 535 U.S. 425, 455 (2002).

[169] See, e.g., New York Times Co. v. United States, 403 U.S. 713, 714 (1971).

[170] The classic example is a noise ordinance, which does not require the government actor to consider the content or viewpoint of the speech in order to enforce it. See Ward v. Rock Against Racism, 491 U.S. 781 (1989).

[171] See id. at 791 (“Our cases make clear, however, that even in a public forum the government may impose reasonable restrictions on the time, place, or manner of protected speech, provided the restrictions ‘are justified without reference to the content of the regulated speech, that they are narrowly tailored to serve a significant governmental interest, and that they leave open ample alternative channels for communication of the information.’”) (internal citations omitted).

[172] See Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 662 (1994) (finding “the appropriate standard by which to evaluate the constitutionality of must-carry is the intermediate level of scrutiny applicable to content-neutral restrictions that impose an incidental burden on speech.”).

[173] See id. (“[A] content-neutral regulation will be sustained if ‘it furthers an important or substantial governmental interest; if the governmental interest is unrelated to the suppression of free expression; and if the incidental restriction on alleged First Amendment freedoms is no greater than is essential to the furtherance of that interest.’”) (quoting United States v. O’Brien, 391 U.S. 367, 377 (1968)).

[174] See Broadrick v. Oklahoma, 413 U.S. 601, 615 (1973) (holding that “the overbreadth of a statute must not only be real, but substantial as well, judged in relation to the statute’s plainly legitimate sweep”).

[175] See Kolender v. Lawson, 461 U.S. 352, 357 (1983) (holding that a law must have “sufficient definiteness that ordinary people can understand what conduct is prohibited and in a manner that does not encourage arbitrary and discriminatory enforcement”).

[176] 2023 WL 414258 (E.D. Cal. Jan. 25, 2023).

[177] Cal. Bus. & Prof. Code § 2270.

[178] Høeg, 2023 WL 414258, at *6 (internal citations omitted).

[179] Id. at *7.

[180] See id.

[181] Id. at *8.

[182] Id. at *9.

[183] Id. at *9.

[184] See id. at *12.

[185] New York Times Co. v. United States, 403 U.S. 713, 714 (1971) (quoting Bantam Books, 372 U.S. at 70).

[186] Missouri v. Biden, 2023 WL 2578260, at *35 (quoting Backpage.com, 807 F.3d at 230).

[187] See id. (comparing the situation to cable operators in the Turner Broadcasting cases).

[188] Id.

[189] Id.

[190] See discussion of United States v. Alvarez, 567 U.S. 709 (2012) below.

[191] See Minnesota Voters Alliance v. Mansky, 138 S. Ct. 1876, 1885 (2018) (“In a traditional public forum — parks, streets, sidewalks, and the like — the government may impose reasonable time, place, and manner restrictions on private speech, but restrictions based on content must satisfy strict scrutiny, and those based on viewpoint are prohibited.”).

[192] Missouri v. Biden, 2023 WL 2578260, at *35.

[193] Id.

[194] 567 U.S. 709 (2012).

[195] Id. at 717 (quoting United States v. Stevens, 559 U.S. 460, 468 (2010)).

[196] Id. at 718.

[197] See Chaplinsky v. New Hampshire, 315 U.S. 568, 571-72 (1942) (“There are certain well-defined and narrowly limited classes of speech, the prevention and punishment of which has never been thought to raise any Constitutional problem.”).

[198] See Alvarez, 567 U.S. at 718-22.

[199] See id. at 719 (“Even when considering some instances of defamation and fraud, moreover, the Court has been careful to instruct that falsity alone may not suffice to bring the speech outside the First Amendment. The statement must be a knowing or reckless falsehood.”). In other words, the First Amendment has been found to limit common-law actions even against false speech that does not itself receive constitutional protection.

[200] Under the common law, the elements of fraud include (1) a misrepresentation of a material fact, or a failure to disclose a material fact the defendant was obligated to disclose; (2) made with the intent to induce the victim to rely on the misrepresentation or omission; (3) made with knowledge that the statement or omission was false or misleading; (4) reliance by the plaintiff on the representation or omission; and (5) damages or injury suffered as a result of that reliance. See, e.g., Mandarin Trading Ltd. v. Wildenstein, 919 N.Y.S.2d 465, 469 (2011); Kostryckyj v. Pentron Lab. Techs., LLC, 52 A.3d 333, 338-39 (Pa. Super. 2012); Masingill v. EMC Corp., 870 N.E.2d 81, 88 (Mass. 2007). Similarly, commercial-speech regulations of deceptive or misleading advertising or health claims have also been found to be consistent with the First Amendment. See Virginia State Bd. of Pharmacy v. Virginia Citizens Consumer Council, 425 U.S. 748, 771-72 (1976) (“Obviously, much commercial speech is not provably false, or even wholly false, but only deceptive or misleading. We foresee no obstacle to a State’s dealing effectively with this problem. The First Amendment, as we construe it today, does not prohibit the State from insuring that the stream of commercial information flow cleanly as well as freely.”).

[201] See, e.g., Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190 (1948) (the government’s power “to protect people against fraud” has “always been recognized in this country and is firmly established”).

[202] Illinois ex rel. Madigan v. Telemarketing Associates, Inc., 538 U.S. 600, 617 (2003).

[203] See, e.g., Schaumburg v. Citizens for a Better Environment, 444 U.S. 620 (1980); Secretary of State of Md. v. Joseph H. Munson Co., 467 U.S. 947 (1984); Riley v. National Federation of Blind of N. C., Inc., 487 U.S. 781 (1988).

[204] Madigan, 538 U.S. at 620.

[205] Under the old common-law rule, proving defamation required a plaintiff to present a derogatory statement and demonstrate that it could harm his or her reputation. The falsity of the statement was presumed, and the defendant bore the burden of proving the statement was true in all of its particulars. Republishing another’s statement could also expose the new publisher to liability. See generally Samantha Barbas, The Press and Libel Before New York Times v. Sullivan, 44 Colum. J.L. & Arts 511 (2021).

[206] 376 U.S. 254 (1964).

[207] Id. at 271. See also id. at 271-72 (“Erroneous statement is inevitable in free debate, and … it must be protected if the freedoms of expression are to have the ‘breathing space that they need to survive.’”) (quoting N.A.A.C.P. v. Button, 371 U.S. 415, 433 (1963)).

[208] Id. at 279-80.

[209] Id. at 727-28.

[210] Carlin Commc’ns, 827 F.2d at 1297.

[211] See Missouri, et al. v. Biden, et al., Case No. 3:22-CV-01213 (W.D. La. Jul. 4, 2023), available at https://int.nyt.com/data/documenttools/injunction-in-missouri-et-al-v/7ba314723d052bc4/full.pdf.

[212] Id. See also Missouri, et al. v. Biden, et al., 2023 WL 4335270, at *45-56 (W.D. La. Jul. 4, 2023) (memorandum ruling on request for preliminary injunction). But see Missouri, et al. v. Biden, et al., No. 23-30445 (5th Cir. Sept. 8, 2023), slip op., available at https://www.ca5.uscourts.gov/opinions/pub/23/23-30445-CV0.pdf (upholding the injunction but limiting the parties to which it applies); Murthy, et al. v. Missouri, et al., No. 3:22-cv-01213 (Sept. 14, 2023) (order by Justice Alito issuing an administrative stay of the preliminary injunction until Sept. 22, 2023 at 11:59 p.m. EDT).

[213] 42 U.S.C. § 1983.

[214] See, e.g., Adickes v. S.H. Kress & Co., 398 U.S. 144, 152 (1970) (“Although this is a lawsuit against a private party, not the State or one of its officials, our cases make clear that petitioner will have made out a violation of her Fourteenth Amendment rights and will be entitled to relief under § 1983 if she can prove that a Kress employee, in the course of employment, and a Hattiesburg policeman somehow reached an understanding to deny Miss Adickes service in the Kress store, or to cause her subsequent arrest because she was a white person in the company of Negroes. The involvement of a state official in such a conspiracy plainly provides the state action essential to show a direct violation of petitioner’s Fourteenth Amendment equal protection rights, whether or not the actions of the police were officially authorized, or lawful… Moreover, a private party involved in such a conspiracy, even though not an official of the State, can be liable under § 1983.”) (internal citations omitted).

[215] Smith v. Wade, 461 U.S. 30, 56 (1983).

[216] See Missouri, et al. v. Biden, et al., 2023 WL 4335270, at *55-56 (W.D. La. Jul. 4, 2023).

[217] Codified at Fla. Stat. § 112.23, available at https://casetext.com/statute/florida-statutes/title-x-public-officers-employees-and-records/chapter-112-public-officers-and-employees-general-provisions/part-i-conditions-of-employment-retirement-travel-expenses/section-11223-government-directed-content-moderation-of-social-media-platforms-prohibited.

[218] Id.

[219] For more on this proposal, see Manne, Stout, & Sperry, supra note 31, at 106-112.

[220] See Dominion Voting Sys. v. Fox News Network, LLC, C.A. No. N21C-03-257 EMD (Del. Super. Ct. Mar. 31, 2023), available at https://www.documentcloud.org/documents/23736885-dominion-v-fox-summary-judgment.

[221] See, e.g., Jeremy W. Peters & Katie Robertson, Fox Will Pay $787.5 Million to Settle Defamation Suit, New York Times (Apr. 18, 2023), https://www.nytimes.com/live/2023/04/18/business/fox-news-dominion-trial-settlement#fox-dominion-defamation-settle.

[222] See, e.g., Neil Vigdor, ‘Prove Mike Wrong’ for $5 Million, Lindell Pitched. Now, He’s Told to Pay Up., New York Times (Apr. 20, 2023), https://www.nytimes.com/2023/04/20/us/politics/mike-lindell-arbitration-case-5-million.html.

[223] See Stephen Fowler, Judge Finds Rudy Giuliani Liable for Defamation of Two Georgia Election Workers, NPR (Aug. 30, 2023), https://www.npr.org/2023/08/30/1196875212/judge-finds-rudy-giuliani-liable-for-defamation-of-two-georgia-election-workers.

[224] See supra notes 206-09 and associated text.

Continue reading
Innovation & the New Economy

Right to Anonymous Speech, Part 3: Anonymous Speech and Age-Verification Laws

TOTM An issue that came up during a terrific panel that I participated in last Thursday—organized by the Federalist Society’s Regulatory Transparency Project—was whether age-verification laws for social-media use . . .

An issue that came up during a terrific panel that I participated in last Thursday—organized by the Federalist Society’s Regulatory Transparency Project—was whether age-verification laws for social-media use infringed on a First Amendment right of either adults or minors to receive speech anonymously.

My co-panelist Clare Morell of the Ethics and Public Policy Center put together an excellent tweet thread summarizing some of her thoughts, including on the anonymous-speech angle. Another co-panelist—Shoshana Weissmann of the R Street Institute—also has a terrific series of blog posts on this particular issue.

Continuing this ongoing Truth on the Market series on anonymous speech, I wanted to respond to some of these ideas, and to argue that the primary First Amendment and public-policy concerns with age-verification laws really aren’t about anonymous speech. Instead, they are about whether such laws place the burden of avoiding harms on the least-cost avoider. Or, in the language of First Amendment jurisprudence, whether they are the least restrictive means to achieve a particular policy end.

Read the full piece here.

Continue reading
Innovation & the New Economy