
ICLE Amicus to US Supreme Court in Murthy v. Missouri


INTEREST OF AMICUS CURIAE[1]

The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center aimed at building the intellectual foundations for sensible, economically sound policy.  ICLE promotes the use of law-and-economics methods and economic learning to inform policy debates.

ICLE has an interest in ensuring that First Amendment law promotes the public interest, the rule of law, and a rich marketplace of ideas.  To this end, ICLE’s scholars write extensively on social media regulation and free speech.  E.g., Int’l Ctr. for Law & Econ. Am. Br., Moody v. NetChoice, LLC, NetChoice, LLC v. Paxton, Nos. 22-277, 22-555 (Dec. 7, 2023); Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social-Media Platforms, 59 Gonzaga L. Rev. ___ (2024) (forthcoming); Geoffrey Manne, Ben Sperry & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L. J. 26 (2022); Internet Law Scholars Am. Br., Gonzalez v. Google LLC, No. 21-1333 (Jan. 19, 2023); Ben Sperry, An L&E Defense of the First Amendment’s Protection of Private Ordering, Truth on the Market (Apr. 23, 2021), https://bit.ly/49tZ7XD.

ICLE is concerned about government meddling in—and the resulting impoverishment of—the marketplace of ideas.  That meddling is on display in this case—and another case before the Court this Term.  See No. 22-842, Nat’l Rifle Ass’n of Am. v. Vullo (state official coerced insurance companies not to partner with gun-rights organization to cover losses from gun use).  But this case and Vullo merely illustrate a larger problem.  See Backpage.com, LLC v. Dart, 807 F.3d 229 (7th Cir. 2015) (sheriff campaigned to shut down Backpage.com by pressuring Visa and Mastercard to stop processing Backpage transactions); Heartbeat Int’l, Inc. Am. Br. at 4–10, Vullo, supra (collecting examples); Will Duffield, Jawboning Against Speech: How Government Bullying Shapes the Rules of Social Media, Cato Inst. (Sept. 12, 2022) (collecting examples), bit.ly/41NEhjb; Victor Nava, Amazon “censored” COVID-19 vaccine books after “feeling pressure” from Biden White House: docs, New York Post (Feb. 5, 2024), https://bit.ly/3Sq5152.  With this brief, ICLE urges the Court to enforce the Constitution to protect the marketplace of ideas from all such government intrusions.

SUMMARY OF ARGUMENT

The First Amendment protects a public marketplace of ideas free from government interference.

“The First Amendment directs us to be especially skeptical of regulations that seek to keep people in the dark for what the government perceives to be their own good.” Sorrell v. IMS Health Inc., 564 U.S. 552, 577 (2011) (citation omitted).

“Our representative democracy only works if we protect the ‘marketplace of ideas.’  This free exchange facilitates an informed public opinion, which, when transmitted to lawmakers, helps produce laws that reflect the People’s will.  That protection must include the protection of unpopular ideas, for popular ideas have less need for protection.”  Mahanoy Area Sch. Dist. v. B.L., 594 U.S. ___, 141 S. Ct. 2038, 2046 (2021).

Without a free marketplace of ideas, bad ideas persist and fester.  With a free marketplace of ideas, they get challenged and exposed.  When we think of the marketplace, we think of Justice Holmes dissenting in Abrams v. United States, 250 U.S. 616, 630 (1919).  But the insight behind the concept dates back thousands of years, at least to the Hebrew Bible, and has been recognized by, among others, John Milton, the Founders, and John Stuart Mill.  The insight is that the solution for false speech is true speech.  The government may participate in the marketplace of ideas by speaking for itself.  But it ruins the marketplace by coercing speech.

This Court has long stressed the danger of restricting speech on public health, where information can save lives.  Several respondents here are professors of medicine at elite institutions who dissented from the scientific judgments of government officials.  The professors were just the kind of professionals whose views the public needed to make informed decisions.  Instead, the government pressured social media websites to suppress the professors’ views, which the government, at least at the time, saw as outside the mainstream.

Government intervention like this undermines the scientific enterprise.  The goal of science is not to follow the current consensus, but to challenge it with hard data.  For that challenge to happen, the government must not interfere with the open marketplace of ideas, where the current consensus can always yield to a new and better one.

As the “purchasers” in the marketplace of ideas, the people—including respondents here—were stripped of their First Amendment right to make informed decisions on crucial matters of public health.  The right to speak includes a corresponding right to receive speech.  Based on the record here, respondent states can likely show that petitioners trampled on their right to receive information and ideas published by websites.  Similarly, respondent individuals will likely be able to show that they have been robbed of their right to hear other suppressed speakers.

Today, the marketplace of ideas is stocked, in part, by social media companies exercising editorial discretion.  What distinguishes one site from another is what it will, and will not, publish.  As commentators have noted, in the online world, content moderation is the product.  Social media companies are what economists call multi-sided platforms, which connect advertisers with users by curating third-party speech.  The better platforms become at curating speech, the more users engage, and the more valuable advertising becomes to advertisers and users alike.

At times, keeping users engaged requires removing harmful speech or even disruptive users.  But platforms must strike a balance in their content-moderation policies—allowing enough speech to attract users, but not so much speech that users are driven away.  Operating in the marketplace, social media companies are best placed to strike this balance.

Even if the online marketplace did not operate very efficiently (it does), it could not permissibly be controlled by the government.  The First Amendment forbids any abridgement of speech, including speech on the internet.  The way a website adjusts to the market shows what it thinks deserves “expression, consideration, and adherence,” or is “worthy of presentation” (phrases this Court has used to describe protected editorial discretion).  Pressuring social media companies to take down content changes the content of the platforms’ speech, intrudes on their editorial discretion, and violates the Constitution.

Given the record respondents have compiled, it is likely that they can show coercion by federal officials.  The Fifth Circuit agreed, but its test for coercion fell short of the test applied in Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963).  The focus of Bantam Books is not on the subjective understanding of the private actor, but on what the state actors objectively did—namely, was it reasonably understood as attempting to coerce private action?

Here it was.  Indeed, the allegations here include (a) many threats to have social media companies investigated, prosecuted, and regulated if they fail to remove disfavored speech, coupled with (b) extensive use of private meetings, emails, and digital portals to pressure social media companies to remove speech.  That was attempted coercion, and it was unlawful.

The remedy for unlawful coercion is an injunction against, or in some cases damages from, government actors.  The court below focused the injunction on federal officials.  That was correct.  The marketplace of ideas—now freed from impermissible government intervention by the injunction—leaves its participants free to exercise their editorial discretion as they see fit.  The judgment should be affirmed.

ARGUMENT

I. The First Amendment protects the marketplace of ideas from government meddling.

A. A marketplace offering only government-approved ideas is no marketplace, logically and as historically understood.

The First Amendment protects an open marketplace of ideas.  “By allowing all views to flourish, the framers understood, we may test and improve our own thinking both as individuals and as a Nation.”  303 Creative LLC v. Elenis, 600 U.S. 570, 143 S. Ct. 2298, 2311 (2023).  “‘[I]f there is any fixed star in our constitutional constellation,’ it is the principle that the government may not interfere with ‘an uninhibited marketplace of ideas.’”  Id. (quoting West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943) and McCullen v. Coakley, 573 U.S. 464, 476 (2014)).

“[U]ninhibited” means uninhibited. “[T]he First Amendment protects an individual’s right to speak his mind regardless of whether the government considers his speech sensible and well intentioned or deeply ‘misguided,’ and likely to cause ‘anguish’ or ‘incalculable grief.’” 303 Creative, 143 S. Ct. at 2312 (quoting Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 574 (1995) and Snyder v. Phelps, 562 U.S. 443, 456 (2011)).  “The First Amendment directs us to be especially skeptical of regulations that seek to keep people in the dark for what the government perceives to be their own good.”  Sorrell, 564 U.S. at 577 (citation omitted).  Without zealous protection, unpopular speech may be “chill[ed],” “would-be speakers [may] remain silent,” and “society will lose their contributions to the ‘marketplace of ideas.’”  United States v. Hansen, 599 U.S. 762, 143 S. Ct. 1932, 1939–40 (2023) (quoting Virginia v. Hicks, 539 U.S. 113, 119 (2003)).  Nor do speakers “shed their First Amendment protections by employing the corporate form to disseminate their speech.”  303 Creative, 143 S. Ct. at 2316.

When the marketplace of ideas is impoverished, it is not only “society” that loses (Hansen, 143 S. Ct. at 1939–40); it is democracy itself.  “Our representative democracy only works if we protect the ‘marketplace of ideas.’  This free exchange facilitates an informed public opinion, which, when transmitted to lawmakers, helps produce laws that reflect the People’s will.  That protection must include the protection of unpopular ideas, for popular ideas have less need for protection.”  Mahanoy Area Sch. Dist., 141 S. Ct. at 2046.  “A democratic people must be able to freely generate, debate, and discuss * * * ideas, hopes, and experiences.  They must then be able to transmit their resulting views and conclusions to their elected representatives[.]  Those representatives can respond by turning the people’s ideas into policies.  The First Amendment, by protecting the marketplace and the transmission of ideas, thereby helps to protect the basic workings of democracy itself.”  City of Austin v. Reagan Nat’l Advert. of Austin, LLC, 596 U.S. 61, 142 S. Ct. 1464, 1476–77 (2022) (Breyer, J., concurring) (internal citations and quotation marks omitted).  In short, “[t]he First Amendment was fashioned to assure unfettered interchange of ideas for the bringing about of political and social changes desired by the people.”  Meyer v. Grant, 486 U.S. 414, 421 (1988) (internal citation and quotation marks omitted).

Without a free marketplace of ideas, bad ideas flourish, unchallenged by competition. “[T]ime has upset many fighting faiths”; and “the ultimate good desired is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market[.]  That at any rate is the theory of our Constitution.”  Abrams, 250 U.S. at 630 (Holmes, J., dissenting).  With a free marketplace, however, people enjoy the liberty to be wrong—even as their mistaken ideas tend to get exposed.  For this reason, after the divisive presidential election of 1800, winner Thomas Jefferson urged toleration of dissenters.  Even those in favor of changing our form of government, he urged, should be left “undisturbed as monuments of the safety with which error of opinion may be tolerated where reason is left free to combat it.”  First Inaugural Address (Mar. 4, 1801), https://bit.ly/42tAxUt.

Of course, neither Holmes nor Jefferson was the first to recognize that the best ideas emerge from the crucible of competition.  Thousands of years before the American republic, the Hebrew Bible observed that  “[t]he one who states his case first seems right, until the other comes and examines him.”  Prov. 18:17.   Much later, John Milton and John Stuart Mill would sound similar themes.  “Even a false statement may be deemed to make a valuable contribution to public debate, since it brings about ‘the clearer perception and livelier impression of truth, produced by its collision with error.’”  N.Y. Times Co. v. Sullivan, 376 U.S. 254, 279 n.19 (1964) (quoting Mill, On Liberty 15 (1947) and citing Milton, Areopagitica, Prose Works, Vol. II 561 (1959)).

In sum, “[t]he remedy for speech that is false is speech that is true.  This is the ordinary course in a free society.  The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth.”  United States v. Alvarez, 567 U.S. 709, 727–28 (2012) (plurality).  “And suppression of speech by the government can make exposure of falsity more difficult, not less so.  Society has the right * * * to engage in open, dynamic, rational discourse.  These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.”  Id. at 728.  “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”  Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring).

Of course, the government itself may participate in the marketplace of ideas. Government agencies concerned about health or election misinformation may use social media platforms to broadcast their message.  Those agencies may even amplify and target their counter-speech through advertising campaigns tailored to those most likely to share or receive misinformation—including by creating their own apps or social media websites.

All these steps would combat alleged online misinformation in a way that promotes the marketplace of ideas rather than restricting it.  What is more, presidents may always directly use the bully pulpit to advocate their views.  Pet. Br. 24–25 (listing examples of presidential statements criticizing protected speech).  What the government may not do, as petitioners necessarily concede, is “use its authority to suppress contrary views.”  Id. at 23.  As the record shows, that is exactly what happened in this case.

Finally, protecting the marketplace of ideas from government interference of course does not guarantee that the best ideas win. To the contrary, the marketplace will still see a “good deal of market failure”—if success is measured by the truth winning out. Ronald Coase, The Market for Goods and the Market for Ideas, 64 Am. Econ. Rev. 384, 385 (1974).  But “that different costs and benefits must be balanced does not in itself imply who must balance them,” much less how the balance should be struck.  Thomas Sowell, Knowledge and Decisions 240 (1996).

In the First Amendment, the Founders struck the balance in favor of liberty.  However flawed an open marketplace of ideas may be, they decided, it is better than censorship.  “The liberal defense of free speech is not based on any claim that the market for ideas somehow eliminates error or erases human folly.  It is based on a comparative institutional analysis in which most state interventions make a bad situation worse.”  Roger Koppl, Expert Failure 217 (2018).

B. As this Court instructs, it is especially crucial that the marketplace of ideas be uninhibited on matters of public health.

It is precisely this judgment of the Founders—that state interventions in the marketplace of ideas “make a bad situation worse” (Koppl, supra, at 217) —that petitioners here ignored.  White House officials pressured websites to take down “[c]laims that have been ‘debunked’ by public health authorities.”  J.A. 98.  So-called misinformation was itself dubbed an “urgent public health crisis.”  J.A. 113.  Indeed, said the Surgeon General, “misinformation poses an imminent threat to the nation’s health and takes away the freedom to make informed decisions.”  J.A. 125 (emphasis added).  These assertions are dead wrong—backwards even.  Public health is the last area in which the government should be deciding “which ideas should prevail.”  Nat’l Inst. of Family & Life Advocates v. Becerra, 138 S. Ct. 2361, 2375 (2018) (“NIFLA”).  “[T]his Court has stressed the danger of content-based regulations ‘in the fields of medicine and public health, where information can save lives.’”  Ibid. (quoting Sorrell, 564 U.S. at 566 (striking down statute restricting publication of pharmacy records)).

Several respondents here are professors of medicine at elite institutions who disagreed with the scientific judgments of government officials.  In other words, they were just the kind of professionals whose views the public needed “to make informed decisions.” J.A. 125.  Instead, the government pressured social media websites to suppress these professionals’ views, which the government at the time viewed as outside the mainstream.

“As with other kinds of speech, regulating the content of professionals’ speech ‘pose[s] the inherent risk that the Government seeks not to advance a legitimate regulatory goal, but to suppress unpopular ideas[.]’”  NIFLA, 138 S. Ct. at 2374 (quoting Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 641 (1994)).  “Take medicine, for example.  Doctors help patients make deeply personal decisions, and their candor is crucial.”  NIFLA, 138 S. Ct. at 2374.  Yet “[t]hroughout history, governments have ‘manipulat[ed] the content of doctor-patient discourse’ to increase state power and suppress minorities”:

For example, during the Cultural Revolution, Chinese physicians were dispatched to the countryside to convince peasants to use contraception. In the 1930s, the Soviet government expedited completion of a construction project on the Siberian railroad by ordering doctors to both reject requests for medical leave from work and conceal this government order from their patients.  In Nazi Germany, the Third Reich systematically violated the separation between state ideology and medical discourse. German physicians were taught that they owed a higher duty to the ‘health of the Volk’ than to the health of individual patients. Recently, Nicolae Ceausescu’s strategy to increase the Romanian birth rate included prohibitions against giving advice to patients about the use of birth control devices and disseminating information about the use of condoms as a means of preventing the transmission of AIDS.  Ibid. (quoting Paula Berg, Toward a First Amendment Theory of Doctor-Patient Discourse and the Right To Receive Unbiased Medical Advice, 74 B.U. L. Rev. 201, 201–202 (1994) (footnotes omitted)).

None of this government interference makes sense if the goal is to discover the truth.  And that is the goal of the scientific enterprise:  to discover the truth by testing hypotheses.  The goal is not to follow the current consensus.  “The notion that scientists should agree with a consensus is contrary to how science advances—scientists challenge each other, ask difficult questions and explore paths untaken.  Expectations of conformance to a consensus undercuts scientific inquiry.  It also lends itself to the weaponization of consensus to delegitimize or deplatform inconvenient views, particularly in highly politicized settings.”  Roger Pielke, Jr., The Weaponization of “Scientific Consensus,” American Enterprise Institute (Feb. 5, 2024), https://bit.ly/3OBH3Tj.

We saw just this politicization during the recent pandemic.  “Reputable scientists and physicians have questioned—and in many cases debunked—the ‘official’ narratives on lockdowns, school closures, border testing, vaccine mandates, endless boosters, bivalent COVID shots, epidemic forecasting, natural immunity, vaccine-induced myocarditis, and more.  * * *  But it’s become untenable for those in charge to defend many of their initial positions.”  Matt Strauss, Marta Shaw, J. Edward Les & Pooya Kazemi, COVID dissent wasn’t always misinformation, but it was censored anyway, National Post (Mar. 1, 2023), https://bit.ly/3SQZ6Yb.  Yet that did not stop many of those in charge, in the meantime, from using government power effectively to censor dissenters.  That is what happened in this case.  As one liberal member of Congress said of the “lab leak” theory of COVID’s origin—itself a key exhibit in the shifting of accepted thinking about COVID—“If you take partisan politics and you mix that with science * * *, it’s a toxic combination.”  Sheryl Gay Stolberg & Benjamin Mueller, Lab Leak or Not? How Politics Shaped the Battle Over Covid’s Origin, New York Times (Mar. 19, 2023) (quoting U.S. Rep. Anna Eshoo).

In sum, “[p]rofessionals might have a host of good-faith disagreements, both with each other and with the government, on many topics in their respective fields.  Doctors and nurses might disagree about the ethics of assisted suicide or the benefits of medical marijuana; lawyers and marriage counselors might disagree about the prudence of prenuptial agreements or the wisdom of divorce; bankers and accountants might disagree about the amount of money that should be devoted to savings or the benefits of tax reform.  ‘[T]he best test of truth is the power of the thought to get itself accepted in the competition of the market,’ and the people lose when the government is the one deciding which ideas should prevail.”  NIFLA, 138 S. Ct. at 2374–75 (quoting Abrams, 250 U.S. at 630 (Holmes, J., dissenting)).  The people lost here.

C. A marketplace offering only government-approved ideas violates the rights of speakers and listeners, the overlooked “purchasers” in the marketplace.

The people’s loss is constitutionally cognizable.  As the “purchasers” in the marketplace of ideas, the people—including respondents here—were robbed of their First Amendment right to make informed decisions.  After all, the right to speak includes a “reciprocal” right to receive speech.  Va. State Bd. of Pharm. v. Va. Citizens Consumer Council, 425 U.S. 748, 757 (1976); see First Amend. and Internet Law Scholars Am. Br., Moody v. NetChoice LLC, NetChoice LLC v. Paxton, Nos. 22-277, 22-555, at 4–5 (Dec. 6, 2023) (collecting authorities).  “To suppress free speech is a double wrong.  It violates the rights of the hearer as well as those of the speaker.  It is just as criminal to rob a man of his right to speak and hear as it would be to rob him of his money.”  Frederick Douglass, Address: A Plea for Free Speech in Boston (1860), in Great Speeches by Frederick Douglass 48, 50 (2013) (quoted in First Amend. and Internet Law Scholars Am. Br., supra, at 4–5).

Stated differently, “[t]he First Amendment protects ‘speech’ and not just speakers.”  Eugene Volokh, Mark Lemley & Peter Henderson, Freedom of Speech and AI Output, 3 J. Free Speech L. 653, 656 (2023).  As a result, “th[is] Court has long recognized First Amendment rights ‘to hear’ and ‘to receive information and ideas.’”  Id. at 657 & n.11 (citing, among other cases, Kleindienst v. Mandel, 408 U.S. 753, 762–763 (1972) (“In a variety of contexts this Court has referred to a First Amendment right to receive information and ideas”) (internal quotation marks omitted); Stanley v. Georgia, 394 U.S. 557, 564 (1969) (“It is now well established that the Constitution protects the right to receive information and ideas.”); Thomas v. Collins, 323 U.S. 516, 534 (1945) (“That there was restriction upon Thomas’ right to speak and the rights of the workers to hear what he had to say, there can be no doubt.”)).

Based on the record respondents have built, Missouri and Louisiana can likely show that petitioners have trampled on their right to “hear” and to “receive information and ideas” published by websites.  Volokh, supra, at 656–657; Resp. Br. 25–27.  And by the same token, respondent individuals will likely be able to show that they have been robbed of their right to hear other suppressed speakers, “whom [respondents] follow, engage with, and re-post on social media.”  Resp. Br. 22.  The judgment should be affirmed.

II. Websites stock the online marketplace of ideas by exercising editorial discretion.

By effectively forcing websites to take down certain content, the government here “alte[red] the content of [the websites’] speech.”  NIFLA, 138 S. Ct. at 2371 (internal citation omitted).  Such laws “are presumptively unconstitutional and may be justified only if the government proves that they are narrowly tailored to serve compelling state interests.”  Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015).  “This stringent standard reflects the fundamental principle that governments have no power to restrict expression because of its message, its ideas, its subject matter, or its content.”  NIFLA, 138 S. Ct. at 2371 (internal citation and quotation marks omitted).  Nor is government control necessary in the competitive marketplace of ideas stocked by social media companies.

What distinguishes one site from another is what it publishes and refuses to publish. “[C]ontent moderation is the product.” Thomas Germain, Actually, Everyone Loves Censorship. Even You., GIZMODO (Feb. 22, 2023) (emphasis added), http://bit.ly/3Rge8pI.  As private participants in the marketplace of ideas, social media firms set their own editorial policies and choose which ideas to publish.  “The Free Speech Clause does not prohibit private abridgment of speech.”  Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019) (emphasis in original).  Even as they openly publish the speech of others, social media platforms do not “lose the ability to exercise what they deem to be appropriate editorial discretion,” because then they would “face the unappetizing choice of allowing all comers or closing the platform altogether.”  Id. at 1931.  In turn, users participate in the marketplace of ideas by choosing which social media website best meets their needs, including through its respective moderation policies.

Social media firms are what economists call “matchmakers” or “multi-sided” platforms.  David Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms 10 (2016).  “[M]atchmakers’ raw materials are the different groups of customers that they help bring together.  And part of the stuff they sell to members of each group is access to members of the other groups.  All of them operate physical or virtual places where members of these different groups get together.  For this reason, they are often called multisided platforms.”  Ibid.  Social media firms bring together advertisers and users—including both speakers and listeners—by curating third-party speech.  Curating speech well keeps users engaged so advertisers can reach them.

At times, keeping users engaged requires removing harmful speech, or even removing users who break the rules.  See David Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L.J. 1201, 1215 (2012).  But a social media company cannot go too far in restricting speech that users value.  Otherwise, users will visit the platform less or even abandon it for other companies in the “attention market”—which includes not only other platforms, but newspapers, magazines, television, games, and apps.  Facing the prospect of fewer engaged users, advertisers will expect lower returns and invest less in the platform.  Eventually, if too many customers flee, the social media company will fail.

Social media companies must also consider brand-conscious advertisers who may not want to be associated with perceived misinformation or other harmful speech.  To take just one example, advertisers reportedly left X after that company loosened its moderation practices.  Ryan Mac, Brooks Barnes & Tiffany Hsu, Advertisers Flee X as Outcry Over Musk’s Endorsement of Antisemitic Post Grows, N.Y. Times (Nov. 17, 2023).  In other words, platforms must strike a balance in their content-moderation policies.  This balance includes creating rules discouraging misinformation if such speech drives away users or advertisers.  As active participants in the marketplace, social media firms are best positioned to discover the best way to serve their users.  See Int’l Ctr. for Law & Economics Am. Br. at 6–11, Moody v. NetChoice LLC, NetChoice LLC v. Paxton, Nos. 22-277, 22-555 (Dec. 7, 2023).  As competition plays out, though, consumers can deliver surprises—and platforms must adjust.  This is the marketplace of ideas in action.
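To make the tradeoff concrete, consider a minimal numerical sketch of the moderation balance just described.  Everything in it (the function names, the inverted-U shape of engagement, the brand-safety premium, and every parameter) is an illustrative assumption chosen to mirror the reasoning above, not a fact from the record or a model of any actual platform:

```python
# A minimal sketch of the moderation tradeoff described above. Every
# functional form and parameter is an illustrative assumption, not a
# fact from the record or a model of any actual platform.

def user_engagement(strictness):
    # Engagement rises as harmful speech is removed, then falls as
    # valued speech is over-removed: an inverted-U in strictness.
    return strictness * (1.0 - strictness)

def advertiser_value(strictness, brand_safety_weight=0.3):
    # Advertisers pay for engaged users, plus a premium for a
    # brand-safe (more strictly moderated) environment.
    return user_engagement(strictness) * (1.0 + brand_safety_weight * strictness)

levels = [i / 100 for i in range(101)]
best = max(levels, key=advertiser_value)
print(f"Revenue-maximizing moderation strictness: {best:.2f}")  # ~0.53
# Neither zero moderation nor maximal removal maximizes revenue; the
# platform's own incentives push it toward an interior balance.
```

The point of the sketch is only that a revenue-maximizing platform has an incentive to choose an interior level of moderation, neither zero nor total, which is precisely the balance the brief describes.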

All these product changes happen without government intervention, which, again, would be forbidden in any event. After all, the First Amendment forbids any “abridg[ement]” of speech, no matter where that speech is “publish[ed]” or “disseminat[ed]”—including the online marketplace of ideas. Reno v. ACLU, 521 U.S. 844, 853 (1997); 303 Creative, 600 U.S. at 594.  The way a social media company adjusts to the market shows what it deems “deserving of expression, consideration, and adherence,” or “worthy of presentation.”  Turner, 512 U.S. at 641; Hurley, 515 U.S. at 575.  By forcing platforms to take down content, government coercion “alte[red] the content of [the platforms’] speech.”  NIFLA, 138 S. Ct. at 2371 (internal citation omitted).

When a company “exercises editorial discretion in the selection and presentation of its programming, it engages in speech activity.”  Arkansas Ed. Television Comm’n v. Forbes, 523 U.S. 666, 674 (1998).  “[E]ditorial control” encompasses the “choice of material,” “decisions made as to limitations on the size and content,” and “treatment of public issues[.]”  Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974).  Any governmental “compulsion to publish that which reason tells them should not be published”—or vice versa—“is unconstitutional.”  Id. at 256 (internal citation and quotation marks omitted).

III. The online marketplace of ideas was impoverished by federal coercion here, and the Court should affirm the injunction insofar as it binds federal officials.

Although social media companies are private actors with a right to editorial discretion, the facts adduced so far in this case, if ultimately established, show coercion by federal officials, and not the exercise of discretion by websites. Relying on an extensive record, “the district court concluded that the officials, via both private and public channels, asked the platforms to remove content, pressed them to change their moderation policies, and threatened them—directly and indirectly—with legal consequences if they did not comply. And it worked—that ‘unrelenting pressure’ forced the platforms to act and take down users’ content.”  J.A. 16–17.

The Fifth Circuit agreed, holding that federal officials likely “ran afoul of the First Amendment by coercing and significantly encouraging social-media platforms to censor disfavored [speech], including by threats of adverse government action like antitrust enforcement and legal reforms.”  J.A. 32 (internal citations and quotation marks omitted).  In reaching this conclusion, the Fifth Circuit adopted a four-part test, ostensibly derived from Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963), to tell when government actions aimed at private parties become coercive: “(1) the speaker’s word choice and tone; (2) whether the speech was perceived as a threat; (3) the existence of regulatory authority; and, perhaps most importantly, (4) whether the speech refers to adverse consequences.”  J.A. 42 (internal citations and quotation marks omitted).

But the Fifth Circuit’s test falls short of the test applied in Bantam Books.  The focus of Bantam Books is not on the subjective understanding of the private actor, but on what the state actors objectively did—namely, was it reasonably understood as attempting to coerce private action?  The Bantam Books test is about the efforts of the state actor to suppress speech, not whether the private actor is in some hyper-literal sense “free” to ignore the state actor.  Surreptitious pressure in the form alleged by respondents is just as much an intervention into the marketplace of ideas as overt censorship.

Consider what happened in Bantam Books.  A legislatively created commission notified book publishers that certain books and magazines were objectionable for sale or distribution.  The commission had no power to sanction publishers or distributors, and there were no bans or seizures of books.  372 U.S. at 66–67.  In fact, the book distributors were technically “free” to ignore the commission’s notices.  Id. at 68 (“It is true * * * that [the distributor] was ‘free’ to ignore the Commission’s notices, in the sense that his refusal to ‘cooperate’ would have violated no law.”).  Nonetheless, this Court held, “the Commission deliberately set about to achieve the suppression of publications deemed ‘objectionable’ and succeeded in its aim.”  Id. at 67.  Particularly important was that the notices could be seen as a threat of prosecution.  See id. at 68–69 (“People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around[.]  The Commission’s notices, phrased virtually as orders, reasonably understood to be such by the distributor, invariably followed up by police visitations, in fact stopped the circulation of the listed publications[.]  It would be naive to credit the State’s assertion that these blacklists are in the nature of mere legal advice, when they plainly serve as instruments of regulation.”).

Ignoring this lesson of Bantam Books, petitioners focus on the subjective response of social media companies rather than the objective actions of the government.  Petitioners emphasize that media companies did not always censor speech to the degree that federal officials asked.  Pet. Br. 39.  But under Bantam Books, that is not the question.  The question is whether the government’s communications could reasonably be seen as a threat.  372 U.S. at 68–69.

They could.  Indeed, the allegations here include (a) many threats to have social media firms investigated, prosecuted, and regulated if they failed to remove disfavored speech, coupled with (b) extensive use of private meetings, emails, and digital portals to pressure firms to remove speech.  Resp. Br. 2–16.  As a result of this pressure, social media firms removed speech against their policies and changed their policies.  Ibid.  Much as in Bantam Books, government pressure suppressed lawful speech.

All this government coercion is a first-order infringement of speech and an impermissible intervention into the marketplace of ideas.  It also destroys the business model of social media websites.  As multisided platforms, these companies must carefully balance users, advertisers, and speech.  Government intervention disrupts this careful balance.  Again, the value proposition of social media websites is that they—as actors in the market—are best situated to curate forums attractive to their users.  Destroying these privately curated forums will chill speech for all Americans.  The Court should find that respondents are likely to succeed on the merits of their First Amendment claim.

As noted, the government is free to use the bully pulpit to persuade—and even to argue publicly that certain content on social media platforms is misinformation that should be demoted or removed.  Pet. Br. 23–25 (listing examples of presidential statements criticizing protected speech).  But this does not mean the First Amendment allows coercing private actors into shutting down speech, which is what the facts adduced here show.

The remedy for unlawful government coercion is an injunction against, or in some cases damages from, government actors.  Here, the District Court and the Fifth Circuit correctly focused the injunction on federal officials.  The marketplace of ideas, now freed from impermissible government intervention, leaves its participants free to exercise their editorial discretion as they see fit.  There is no need to enjoin private actors; indeed, doing so would undermine the same freedom of expression that enjoining coercive government actors protects.  On remand, the injunction should continue to make clear that social media companies may continue to engage in the marketplace of ideas by exercising editorial discretion.  But the government may not press its thumb on the scale by compelling them to censor.

CONCLUSION

The judgment should be affirmed.

[1] No party or counsel for a party authored this brief in whole or in part.  No one other than amicus or its counsel made a monetary contribution to fund preparation or submission of this brief.


ICLE Amicus in Ohio v. Google


Interest of Amicus[1]

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating law and policy.

ICLE has an interest in ensuring that First Amendment law promotes the public interest by remaining grounded in sensible rules informed by sound economic analysis. ICLE scholars have written extensively in the areas of free speech, telecommunications, antitrust, and competition policy. This includes white papers, law journal articles, and amicus briefs touching on issues related to the First Amendment and common carriage regulation, and competition policy issues related to alleged self-preferencing by Google in its search results.

Introduction

Google’s mission is to “organize the world’s information and make it universally accessible and useful.” See Our Approach to Search, Google (last accessed Jan. 18, 2024), https://www.google.com/search/howsearchworks/our-approach/. Google provides this service to its users at zero price, that is, for free. This generates billions of dollars of consumer surplus per year for U.S. consumers. See Avinash Collis, Consumer Welfare in the Digital Economy, in The Global Antitrust Instit. Report on the Digital Economy (2020), available at https://gaidigitalreport.com/2020/08/25/digital-platforms-and-consumer-surplus/.

This incredible deal for users is possible because Google is what economists call a multisided platform. See David S. Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms 10 (2016) (“Many of the biggest companies in the world, including… Google… are matchmakers… [M]atchmakers’ raw materials are the different groups of customers that they help bring together. And part of the stuff they sell to members of each group is access to members of the other groups. All of them operate physical or virtual places where members of these different groups get together. For this reason, they are often called multisided platforms.”). On one side of the platform, Google provides answers to queries of users. On the other side of the platform, advertisers pay for access to Google’s users and, by extension, subsidize the user-side consumption of Google’s free services.

In order to maximize the value of its platform, Google must curate the answers it provides in its search results to the benefit of its users, or it risks losing those users to other search engines. This includes both other general search engines and specialized search engines that focus on one segment of online content (like Yelp or Etsy or Amazon). Losing users would mean the platform becomes less valuable to advertisers.

If users don’t find Google’s answers useful, including answers that may preference other Google products, then they can easily leave and use alternative methods of search. Thus, there are real limitations on how much Google can self-preference before the incentives that allowed it to build a successful platform unravel as users and therefore advertisers leave. In fact, it is highly likely that users of Google search want the integration of direct answers and Google products, and Google provides these results to the benefit of its users. See Geoffrey A. Manne, The Real Reason Foundem Foundered, at 16 (ICLE White Paper 2018), https://laweconcenter.org/wp-content/uploads/2018/05/manne-the_real_reaon_foundem_foundered_2018-05-02-1.pdf (“[N]o one is better positioned than Google itself to ensure that its products are designed to benefit its users”).
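The same competitive limit can be sketched numerically. In the toy model below, every function and constant (the rate at which self-preferencing degrades result quality, the quality of rival search engines, the gain from steering users to one’s own products) is a hypothetical assumption intended only to illustrate the argument above, not a measurement of Google’s actual business:

```python
# An illustrative-only sketch of why competition caps self-preferencing.
# Every function and constant is an assumption chosen to mirror the
# argument above; none is a measurement of any real search engine.

def users_retained(self_preference, rival_quality=0.7):
    # Share of users who stay; falls as heavy self-preferencing degrades
    # result quality relative to rival search engines.
    own_quality = 1.0 - 0.5 * self_preference
    return max(0.0, min(1.0, 0.5 + (own_quality - rival_quality)))

def platform_payoff(self_preference, steering_gain=1.0):
    # Gains from steering users to own products, scaled by how many
    # users (and hence advertisers) remain on the platform.
    return users_retained(self_preference) * (1.0 + steering_gain * self_preference)

levels = [i / 100 for i in range(101)]
best = max(levels, key=platform_payoff)
print(f"Payoff-maximizing self-preferencing: {best:.2f}")  # ~0.30, far below 1.0
# Past that point, lost users (and the advertisers who follow them)
# outweigh any gain from steering queries to the platform's own products.
```

The sketch illustrates only the directional claim in the text: because users can switch, the profitable degree of self-preferencing is bounded well below the maximum.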

Here, as has been alleged without much success in antitrust cases, see United States v. Google, LLC, 2023 WL 4999901, at *20-24 (D.D.C. Aug. 4, 2023) (granting summary judgment in favor of Google on antitrust claims of self-preferencing in search results), the concern is that Google preferences itself at the expense of competitors, and to the detriment of its users. See Complaint (“Google intentionally structures its Results Pages to prioritize Google products over organic search results.”). Ohio asks the court to declare Google a common carrier and subject it to a nondiscrimination requirement that would prevent Google from prioritizing its own products in search results.

The problem, of course, is the First Amendment. Federal district courts have consistently found that the First Amendment protects how providers structure search results. See, e.g., e-ventures Worldwide, LLC v. Google, Inc., 2017 WL 2210029 (M.D. Fla., Feb. 8, 2017); Jian Zhang v. Baidu.com Inc., 10 F. Supp. 3d 433 (S.D. N.Y., Mar. 28, 2014); Langdon v. Google, Inc., 474 F. Supp. 2d 622 (D. Del. 2007); Search King, Inc. v. Google Tech., Inc., 2003 WL 21464568 (W.D. Okla., May 27, 2003).

While Ohio and its amici argue that Google should be considered a common carrier, and thus be subject to a lower standard of review for First Amendment purposes, there is no legal basis for such a conclusion.

First, common carriage is a poor fit for Google’s search product. Courts have rejected monopoly power or being “affected with a public interest” as the proper prerequisites for common carrier status. Ohio, like other jurisdictions, has found that the “fundamental test of common carriage is whether there is a public profession or holding out to serve the public.” Girard v. Youngstown Belt Ry. Co., 134 Ohio St. 3d 79, 89 (2012) (emphasis added). See also Loveless v. Ry. Switching Serv., Inc., 106 Ohio App. 3d 46, 51 (1995) (“The distinctive characteristic of a common carrier is that he undertakes to carry for all people indifferently and hence is regarded in some respects as a public servant.”) (internal quotations omitted). Google simply does not carry information in an undifferentiated way comparable to a railroad carrying passengers or freight. It is rather a service that explicitly differentiates and prioritizes answers to queries by providing individualized responses based upon location, search history, and other factors.

Second, as mentioned above, Google’s search results are protected by the First Amendment, and simply “[l]abeling” Google “a common carrier… has no real First Amendment consequences.” Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part). As this court stated, it is the nondiscrimination requirement sought by Ohio that is subject to First Amendment scrutiny, not the common carriage label itself. See Motion to Dismiss Opinion at 16. And any purported nondiscrimination requirement should be subject to strict scrutiny, as such a requirement would constrain Google’s own speech in the form of its carefully tailored search results, and not simply the speech of others.

Argument

I. Common Carriage Is a Poor Fit as Applied to Google’s Search Product

There is a long history of common carriage regulation in this country. But there has not always been universal agreement on what constitutes the defining feature of a common carrier, with proposed justifications ranging from monopoly power (or natural monopoly) to being “affected with a public interest.” Over time, though, courts and commentators, including Ohio courts, have agreed that common carriage is primarily about holding oneself out to serve the public indiscriminately.

Simply put, Google Search does not hold itself out to, nor does it actually serve, the public indiscriminately by carrying information, either from users or from other digital service providers. It provides individualized and tailored answers to users’ queries, which may include Google products, direct answers, or general information its search crawlers have learned about other service providers on the Internet.

A. Common Carriage Is Not About Monopoly Power or the Public Interest, It’s About Holding Oneself Out to Serve the Public Indiscriminately

In its complaint, Ohio makes much of Google’s market share in search. See Complaint at paras. 19-32. Amici also argue that the “immense market dominance” of Google makes it a common carrier analogous to telegraphs or telephones. See Claremont Amicus at 6. Similarly, both Ohio and amici argue that Google’s search results are affected with a public interest. See Complaint at 40; Claremont Amicus at 3-4.

Whatever the market share of Google search, common law courts, including those of Ohio, do not find monopoly power to be a part of the definition of common carriage. For instance, the presence of competition for innkeepers did not mean they were not subject to requirements to serve. See Joseph William Singer, No Right to Exclude: Public Accommodations and Private Property, 90 Nw. U. L. Rev. 1283, 1319-20 (1996) (“On the monopoly rationale, it is important to note that none of the antebellum cases bases the duty to serve on the fact of monopoly. Indeed, the presence of competition was never a reason for denying the duty to serve in the antebellum era. In many towns, there were several innkeepers and cities like Boston had dozens of innkeepers. Yet, no lawyer, judge, or treatise writer ever suggested that innkeepers in cities like Boston should be exempt from the duty to serve the public.”). Nor does the presence of monopoly necessarily lead to common carriage treatment under the law. See Blake Reid, Uncommon Carriage, 76 Stan. L. Rev. (forthcoming 2024) (manuscript at 25) (“[F]irms holding effective monopolies or oligopolies in a wide range of sectors, including pharmacies and drug stores, managed healthcare providers, office supply stores, eyeglass sellers, airlines, alcohol distribution, and even candy are not widely regarded or legally treated as common carriers.”). Accordingly, Ohio does not define common carriage in relation to monopoly power. Cf. Kinder Morgan Cochin LLC v. Simonson, 66 N.E.3d 1176, 1182 (Ohio Ct. App. 5th Dist. Ashland County 2016) (failing to mention monopoly as part of the definition of common carrier).

Moreover, while older cases and commentators cite the “affected with a public interest” standard, courts have moved away from it because of its indeterminacy. See Biden v. Knight First Amendment Inst., 141 S. Ct. 1220, 1223 (2021) (Thomas, J., concurring) (this definition is “hardly helpful, for most things can be described as ‘of public interest.’”). See also Christopher S. Yoo, The First Amendment, Common Carriers, and Public Accommodations: Net Neutrality, Digital Platforms, and Privacy, 1 J. of Free Speech L. 463, 468-69 (2021).

Instead, common carriage under Ohio law is defined as holding oneself “out to the public as ready and willing to serve the public indifferently.” See Kinder Morgan Cochin, 66 N.E.3d at 1182; Girard v. Youngstown Belt Ry. Co., 134 Ohio St. 3d 79, 89 (2012); Loveless v. Ry. Switching Serv., Inc., 106 Ohio App. 3d 46, 51 (1995).

B. Google Does Not Offer an Undifferentiated Search Product to Its Users

With this definition in mind, Google is not a common carrier. Google does not offer an undifferentiated service to its users like a pipeline (like in Kinder Morgan Cochin) or railroad (like in Girard or Loveless), or even like a mall offering an escalator to customers (like in May Department Stores Co. v. McBride, 124 Ohio St. 264 (1931)). Nor does it offer to “communicate or transmit” information of “their own design and choosing” to users. See FCC v. Midwest Video Corp., 440 U.S. 689, 701 (1979) (defining common carrier services in the communications context). Instead, it offers tailored search results to its users. See Complaint at paras. 17-18 (noting that search results depend on location); How Search works with your activity, Google (last accessed Jan. 18, 2024), https://support.google.com/websearch/answer/10909618 (“When you search on Google, your past searches and other info are sometimes incorporated to help us give you a more useful experience.”). This is not a common carrier in the communications context. See Midwest Video, 440 U.S. at 701 (“A common carrier does not make ‘individualized decisions, in particular cases, whether on what terms to deal.’”) (quoting Nat’l Ass’n of Reg. Util. Comm’rs v. FCC, 525 F.2d 630, 641 (D.C. Cir. 1976)).

For instance, if a user searches for restaurants, Google’s algorithm may not only take into consideration the location of the user, but also whether the user previously clicked on particular options when running a similar query, or even if the user visited a particular restaurant’s website. While the results are developed algorithmically, this is much more like answering a question than it is transporting a private communication between two individuals like a telephone or telegraph.
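A toy example may help illustrate what “individualized” means here. In the sketch below, the restaurant names, coordinates, and scoring weights are all hypothetical; the point is only that the same query, scored per user from location and past clicks, yields different orderings for different users:

```python
# A toy illustration of the individualized ranking described above: the
# same restaurant query, scored per user from location and past clicks,
# yields different orderings. All data and weights are hypothetical.
from math import dist

RESTAURANTS = {
    "Pasta Place": (0.0, 1.0),
    "Taco Spot": (2.0, 2.0),
    "Noodle Bar": (1.0, 0.0),
}

def score(name, user):
    # Closer restaurants and previously clicked restaurants score higher.
    proximity = 1.0 / (1.0 + dist(user["loc"], RESTAURANTS[name]))
    affinity = 0.2 * user["past_clicks"].get(name, 0)
    return proximity + affinity

def rank(user):
    return sorted(RESTAURANTS, key=lambda name: -score(name, user))

alice = {"loc": (0.0, 0.0), "past_clicks": {"Noodle Bar": 3}}
bob = {"loc": (2.0, 2.0), "past_clicks": {"Pasta Place": 2}}
print(rank(alice))  # ['Noodle Bar', 'Pasta Place', 'Taco Spot']
print(rank(bob))    # ['Taco Spot', 'Pasta Place', 'Noodle Bar']
# Identical query, different answers: individualized decisions in each
# particular case, unlike a carrier transporting traffic indifferently.
```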

Importantly, users often receive a different result even for the same search. See Why your Google Search results might differ from other people, Google (last accessed Jan. 18, 2024), https://support.google.com/websearch/answer/12412910 (“You may get the same or similar results to someone else who searches on Google Search. But sometimes, Google may give you different results based on things like time, context, or personalized results.”). Google is clearly making “‘individualized’ content- and viewpoint-based decisions” when it comes to search results. Cf. Moody v. NetChoice, 34 F.4th 1196, 1220 (11th Cir. 2022) (quoting Midwest Video, 440 U.S. at 701).

While the court emphasized at the motion-to-dismiss stage that a reasonable factfinder could find Google holds itself out to the public through its mission “to organize the world’s information and make it universally accessible and useful,” see MTD Opinion at 7, this does not “change [its] status to common carrier[]… unless [it] undertake[s] to carry for all people indifferently.” Loveless, 106 Ohio App. 3d at 52. As the above facts demonstrate, there is no basis for finding that Google search offers an undifferentiated product to its users. The court should find Google is not a common carrier under Ohio law.

II. Google’s Search Results Are Protected by the First Amendment from Common Carriage Nondiscrimination Requirements

Ohio ultimately seeks to restrict the ability of Google to favor its own products in its search results. But this runs into a real constitutional problem: search results are protected by the First Amendment.

Moreover, as this court has previously found, First Amendment scrutiny attaches not to the label of common carriage, but to the burdens that come with it. Here, the nondiscrimination requirement Ohio seeks is what is at issue.

This nondiscrimination requirement is inconsistent with the First Amendment. While this court thought it should be subject to intermediate scrutiny, the First Amendment requires strict scrutiny when speech is compelled. The cases cited by the court are inapposite when a speaker is delivering its own message, i.e., search results, rather than simply hosting the speech of others.

A. Federal District Court Cases Establish Google Search Results Are Protected by the First Amendment

While no appellate court has considered the issue, several federal district courts have recognized search engines have a First Amendment interest in their search results. Some decisions have framed the results themselves as speech. Others have considered the issue as one of editorial judgment. But under either approach, Google Search results are protected by the First Amendment.

For instance, in Jian Zhang v. Baidu.com, 10 F. Supp. 3d 433 (S.D. N.Y. Mar. 28, 2014), the court found that applying a New York public accommodations law to a Chinese search engine that “censored” pro-democracy speech was inconsistent with the right to editorial discretion. The court reasoned that “there is a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation.” Id. at 438.  The court noted that “the central purpose of a search engine is to retrieve relevant information from the vast universe of data on the Internet and to organize it in a way that would be most helpful to the searcher. In doing so, search engines inevitably make editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information (for example, on the first page of the search results or later).” Id.  Other courts have similarly found search engines have a right to editorial discretion over their results. See also e-ventures Worldwide, LLC v. Google, Inc., 2017 WL 2210029, at *4 (M.D. Fla. Feb. 8, 2017); Langdon v. Google, Inc., 474 F. Supp. 2d 622, 629-30 (D. Del. 2007).

In this sense, Google’s search results are analogous to the decisions of what to print made by the newspaper in Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), or the parade organizer in Hurley v. Irish-American Gay, Lesbian, & Bisexual Group of Boston, 515 U.S. 557 (1995).

At least one court has found that search results themselves are protected opinions. In Search King, Inc. v. Google Technology, Inc., 2003 WL 21464568, at *4 (W.D. Okla. May 27, 2003), the court found that search results “are opinions—opinions of the significance of particular web sites as they correspond to a search query. Other search engines express different opinions, as each search engine’s method of determining relative significance is unique.”

Under this line of reasoning, Google’s responses to queries are opinions directing users to what it thinks is the best answer given all the information it has on the user, her behavior, and her preferences. This is in itself protected speech. Cf. Eugene Volokh & Donald M. Falk, Google: First Amendment Protection for Search Results, 8 J. L. Econ. & Pol’y 883, 884 (2012) (“[S]earch engines are speakers… they convey information that the search engine has itself prepared or compiled [and] they direct users to material created by others… Such reporting about others’ speech is itself constitutionally protected speech.”).

In sum, the First Amendment protects Google’s search results.

B. A Common Carriage Label Does Not Change First Amendment Analysis

Amici argue that because Google is a common carrier, the nondiscrimination requirement is merely an economic regulation that is not subject to heightened First Amendment scrutiny. See Claremont Amicus at 17. But the issue here is not simply the label of common carriage, it is the regulatory scheme sought by Ohio. Cf. Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part) (“Labeling leased access a common carrier scheme has no real First Amendment consequences.”); MTD Opinion at 16 (“As for the State’s request for declaratory relief, merely declaring or designating Google Search to be a common carrier does not, of itself, violate the First Amendment or infringe on Google’s constitutional speech rights…. It is the burdens and obligations accompanying that designation that implicate the First Amendment.”).

In other words, when reviewing the nondiscrimination requirement sought by Ohio, labeling it a common-carriage obligation does not change the First Amendment analysis.

C. The Nondiscrimination Requirement Should be Subject to Strict Scrutiny

Ohio and amici have characterized the nondiscrimination requirement that comes with common carriage as a content-neutral requirement to host the speech of others. See MTD Opinion at 16; Claremont Amicus at 15, 17. At the motion-to-dismiss stage, this court agreed that this characterization was possible. But the remedy sought is not content-neutral, nor does it deal purely with the speech of others. As a result, it should be subject to strict scrutiny.

This court found that a “restriction of this type must satisfy intermediate scrutiny” as a “content-neutral restriction on speech.” MTD Opinion at 16. The court compared the situation to Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994). But the nondiscrimination requirement is clearly content-based.

Ohio is asking this court to enjoin Google from prioritizing its own products in its search results. See Complaint at para. 77. The only way to know whether Google is doing that is to consider the content of its search results. See, e.g., Reed v. Town of Gilbert, Ariz., 576 U.S. 155, 163 (2015) (“Government regulation of speech is content based if a law applies to particular speech because of the topic discussed or the idea or message expressed.”). The idea or message expressed here is that Google’s products would be a better answer to a given inquiry than the alternatives. By definition, the nondiscrimination requirement is a content-based regulation of speech, and it must therefore be subject to strict scrutiny.

Nor is this just an issue of the speech of others. This court stated that “infringing on a private actor’s speech by requiring that actor to host another person’s speech does not always violate the First Amendment.” MTD Opinion at 17. The court cited PruneYard Shopping Ctr. v. Robins, 447 U.S. 74 (1980), Rumsfeld v. Forum for Academic and Institutional Rights, Inc., 547 U.S. 47 (2006), and Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969). But none of these cases deals with a situation analogous to applying nondiscrimination requirements to Google’s search results.

Here, as explained above, Google’s search results are themselves protected speech. Taken collectively, Google’s search results are its opinion of the best set of answers, in the optimal order, to the questions users pose. Requiring Google to present different results, or results in a different order, or with different degrees of prioritization would impermissibly compel Google to speak, similar to requiring car owners to display license plates saying “Live Free or Die,” see Wooley v. Maynard, 430 U.S. 705 (1977), or forcing a student to stand for the Pledge of Allegiance, see West Virginia State Bd. of Educ. v. Barnette, 319 U.S. 624 (1943). It is, in short, impossible to require “Google [to] carr[y] all responsive search results on an equal basis,” Complaint at 5, without compelling it to speak in ways it does not choose to speak.

Even if Google’s interest in its search results is characterized as editorial discretion over others’ speech rather than its own speech (a dubious distinction), this would still be distinguishable from the above cases. Google is clearly identified with its results by users, unlike the shopping center with its customers in PruneYard or the law schools with military recruiters in FAIR. See Complaint at paras. 48-50 (alleging that Google was built on expectations from users that the search algorithm was in some way neutral). This is especially the case when Google is, as alleged, prioritizing its own products in search results. See id. at paras. 64-70. Google clearly believes, and its users appear to agree, that these products are what its users want to see. See Complaint at 2 (“Google Search is perceived to deliver the best search results…”). Otherwise, those users could just use another service. Cf. Zhang, 10 F. Supp. 3d at 441 (a user dissatisfied with search results can just use another search engine).

Notably, this stands in contrast to the court’s characterization of the speech at issue. See MTD Opinion at 19-20 (“When a user searches a speech by former President Donald Trump on Google Search and that speech is retrieved by Google with a link to the speech on YouTube, no rational person would conclude that Google is associating with President Trump or endorsing what is seen in the video.”). It is not the content of the links that users associate with Google, but the search results themselves, which include the order in which each link is presented, the presentation of certain prioritized results in a different format, and the exclusion or deprioritization of results Google thinks the user will not find relevant. A search engine is more than a “passive receptacle or conduit” for the speech of others; the “choice of material” and how it is presented in search results “constitute the exercise of editorial control and judgment.” Tornillo, 418 U.S. at 258.

In sum, the reasons for subjecting must-carry provisions in Turner to intermediate scrutiny do not apply here. First, the nondiscrimination requirement sought by Ohio is not content-neutral; indeed, it is precisely Ohio’s dissatisfaction with the specific content Google provides that impels its proposed remedy. Cf. Turner, 512 U.S. at 653-55 (emphasizing the content-neutrality of the must-carry requirements). Second, the requirement would force Google to alter the message of its search results, which express a clear opinion that its own products are the best answer—an answer with which Google is identified and which distinguishes it from its search-engine competitors. Cf. id. at 655-56 (finding the must-carry requirements would not force cable operators to alter their own messages or identify them with the speech they carry). Third, Google does not have the ability to prevent its users from accessing information, whether from other general search engines, specialized search engines, or just typing a website into the browser. Cf. Turner, 512 U.S. at 656 (“When an individual subscribes to cable, the physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber’s home… A cable operator, unlike other speakers in other media, can thus silence the voice of competing speakers with a mere flick of the switch.”). Absent these countervailing justifications for intermediate scrutiny in Turner, Ohio’s nondiscrimination requirement must be subject to strict scrutiny.

Finally, while it is true that economic regulation like antitrust law can be consistent with the First Amendment, see Claremont Amicus at 17 (citing Associated Press v. United States, 326 U.S. 1, 20 (1945)), that does not mean every legal restriction on speech so characterized is constitutional. In Associated Press itself, the Supreme Court found the organization in violation of antitrust law, but in footnote 18 disclaimed the power to “compel AP or its members to permit publication of anything which their ‘reason’ tells them should not be published.” Associated Press, 326 U.S. at 20 n.18. The Court echoed this point in Tornillo, holding that the remedy imposed by Florida’s right-of-reply law was unconstitutional government compulsion of speech that violated the newspaper’s right to editorial discretion. See Tornillo, 418 U.S. at 254-58. Restricting Google’s right to editorial discretion over its search results is similarly unconstitutional.

Conclusion

Ohio’s attempted end-run around competition law and the First Amendment by declaring Google a common carrier must be rejected by this court. Google is not a common carrier. And the nondiscrimination requirement requested by Ohio is inconsistent with the First Amendment.

[1] Amicus states that no counsel for any party authored this brief in whole or in part, and that no entity or person other than amicus and its counsel made any monetary contribution toward the preparation and submission of this brief.

Innovation & the New Economy

ICLE Files Amicus in NetChoice Social-Media Regulation Cases

TOTM Through our excellent counsel at Yetter Coleman LLP, the International Center for Law & Economics (ICLE) filed an amicus brief with the U.S. Supreme Court in . . .

Through our excellent counsel at Yetter Coleman LLP, the International Center for Law & Economics (ICLE) filed an amicus brief with the U.S. Supreme Court in the Moody v. NetChoice and NetChoice v. Paxton cases. In it, we argue that the First Amendment’s protection of the “marketplace of ideas” requires allowing private actors—like social-media companies—to set speech policies for their own private property. Social-media companies are best placed to balance the speech interests of their users, a process that requires considering both the benefits and harms of various kinds of speech. Moreover, the First Amendment protects their ability to do so, free from government intrusion, even if the intrusion is justified by an attempt to classify social-media platforms as common carriers.

Read the full piece here.

Innovation & the New Economy

ICLE Amicus Letter Supporting Review in Liapes v Facebook

Amicus Brief RE: Amicus Letter Supporting Review in Liapes v. Facebook, Inc. (No. S282529), From a Decision by the Court of Appeal, First Appellate District, Division 3 . . .

RE: Amicus Letter Supporting Review in Liapes v. Facebook, Inc. (No. S282529), From a Decision by the Court of Appeal, First Appellate District, Division 3 (No. A164880)

The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates, and it has longstanding expertise evaluating antitrust law and policy. We thank the Court for considering this amicus letter in support of Petitioner Facebook’s petition for review, in which we briefly highlight some of the crucial considerations that we believe should inform the intermediary-liability principles that underlie interpretation of the Unruh Act.

The Court of Appeal’s decision in Liapes v. Facebook has profound implications for online advertising and raises significant legal and practical concerns that could echo beyond the advertising industry. Targeted advertising is a crucial aspect of marketing, enabling advertisers to direct benign, pro-consumer messages to potential customers based on various considerations, including age and gender. The plaintiff’s argument, and the Court of Appeal’s acceptance of it, presents a boundless theory of liability, suggesting that any targeted advertising based on protected characteristics is unlawful. This theory of liability, unfortunately, fails to account for the nature of Facebook as an online intermediary, and for the limitations on liability that this requires when weighing the bad acts of third parties against Facebook’s provision of neutral advertising tools that benefit millions of users.

The Unruh Act Is Not a Strict Liability Statute

While the Unruh Act, California Civil Code §§ 51 and 51.5, prohibits intentional discrimination, California courts have consistently emphasized that the statute does not impose strict liability for all differential treatment. Rather, the Unruh Act allows for distinctions that serve legitimate nondiscriminatory purposes.

Courts have held that the Unruh Act does not bar practices “justified by ‘legitimate business interests.’” Koebke v. Bernardo Heights Country Club, 36 Cal. 4th 824, 851 (2005). The statute prohibits only discrimination that is “arbitrary, invidious or unreasonable.” Javorsky v. Western Athletic Clubs, Inc., 242 Cal. App. 4th 1386, 1395 (2015). Reasonable, nonarbitrary distinctions are therefore permissible. Differential treatment may qualify as reasonable and nonarbitrary if there is a public policy justification for the distinction. For example, discounts for senior citizens have been deemed nonarbitrary because they advance policies like assisting those with limited incomes. Sargoy v. Resolution Trust Corp., 8 Cal. App. 4th 1039, 1044 (1992). And it is “reasonable” discrimination on the basis of age to prevent minors from entering bars and adult bookstores. Koire v. Metro Car Wash, 40 Cal. 3d 24, 31 (1985).

Thus, the Unruh Act does not impose strict liability merely for practices that have a disparate impact. Harris v. Capital Growth Investors XIV, 52 Cal. 3d 1142, 1149 (1991). While the Unruh Act provides robust protections, it was not intended to forbid all differential treatment. Distinctions based on legitimate justifications remain permissible under the statute’s exceptions.

Firms like Meta operate services facilitating billions of interactions between users and advertisers. In this vast, complex environment, interpreting any ad targeting based on protected class membership as a per se Unruh Act violation would amount to imposing de facto strict liability on the online advertising industry. Setting aside the fact that the Unruh Act is not a strict liability statute, drawing the liability line at this point would have drastic practical consequences.

First, a de facto strict liability standard fails to account for the immense scale and complexity of services like Facebook. Given the number of third-party advertisers and users, as well as the speed and quantity of ad auctions, some incidental correlations between ad delivery and protected characteristics are likely inevitable even absent purposeful exclusions. The Court of Appeal’s opinion exposes both advertisers and platforms like Facebook to litigation based on such correlations, on the theory that the correlations may be “probative” of the intentional discrimination the Unruh Act forbids.

Second, advertisers may have many reasonable, nonarbitrary motivations for targeting their ads to certain demographic groups: for example, targeting older people with ads for certain kinds of medicines, or members of religious groups with information about services in their religion. The Court of Appeal’s opinion will lead to extensive, costly litigation over the potential justifications for such ad targeting, and in the meantime consumers will be deprived of useful ads.

Finally, if any segmentation of ad targets based on protected characteristics triggers Unruh Act violations, online advertising loses an essential tool for connecting people with relevant messages. This impedes commerce without any showing of invidious discrimination.

Although the Unruh Act provides important protections, overbroad interpretations amount to strict liability incompatible with the realities of a massive, complex ad system. Nuance is required to balance anti-discrimination aims with the actual welfare of users of services. In order to properly parse the line between reasonable and unreasonable discrimination when dealing with a neutral advertising service like Facebook and the alleged bad acts of third parties, it is necessary to incorporate the legal principles of intermediary liability into an analysis under the Unruh Act.

Principles of Intermediary Liability

In public policy and legal analysis, a central objective is to align individual incentives with social welfare, thereby deterring harmful behavior and encouraging optimal levels of precaution. See Guido Calabresi, The Costs of Accidents: A Legal and Economic Analysis 26 (1970). In the online context, this principle necessitates a careful examination of intermediary liability, especially for actors indirectly involved in online interactions.

Intermediary liability applies to third parties who do not directly cause harm but who can influence primary actors’ behavior to reduce harm cost-effectively. This is particularly relevant when direct deterrence is insufficient and the intermediary can prevent harm more effectively, or at a lower cost, than direct enforcement. See Reinier Kraakman, Gatekeepers: The Anatomy of a Third-Party Enforcement Strategy, 2 J.L. Econ. & Org. 53, 56-57 (1986). However, not every intermediary in a potentially harmful transaction should be a target for such liability.

The focus is on locating the “least-cost avoider”—the party that can reduce the likelihood of harm at the lowest overall cost. See Harold Demsetz, When Does the Rule of Liability Matter?, 1 J. Legal Stud. 13, 28 (1972); see also Kraakman, supra, at 61 (“[t]he general problem remains one of selecting the mix of direct and collateral enforcement measures that minimizes the total costs of misconduct and enforcement”). This approach aims to balance the costs of enforcement against the social gains achieved, as well as the losses that flow from the chilling effects of liability.

Imposing liability involves weighing the administrative costs and the potential lost benefits society might enjoy in the absence of liability. See Ronald Coase, The Problem of Social Cost, 3 J.L. & Econ. 1, 27 (1960) (“[W]hat has to be decided is whether the gain from preventing the harm is greater than the loss which would be suffered elsewhere as a result of stopping the action which produces the harm.”). The least-cost avoider is determined by considering whether the reduction in costs from locating liability on that party is outweighed by the losses caused by restricting other activities that flow from that liability. Calabresi, supra, at 141.
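To make the weighing concrete, the sketch below tabulates, for each candidate placement of liability, the precaution costs, the residual harm, and the collateral losses from lawful activity the rule would chill, and then selects the regime with the lowest total social cost. It is a minimal illustration of the Calabresi/Coase framework only; every regime label and figure in it is invented for exposition, not drawn from the record in this case.

```python
# A purely illustrative least-cost-avoider comparison; every figure is invented.
# Each regime lists (precaution costs, residual harm, collateral losses from
# lawful activity chilled by the liability rule).

REGIMES = {
    "liability on primary actors": (10, 5, 2),
    "liability on the intermediary": (40, 5, 30),
    "no liability": (0, 50, 0),
}

def total_social_cost(precaution: int, residual_harm: int, collateral: int) -> int:
    """Coase/Calabresi framing: enforcement and precaution costs, plus the
    harm that still occurs, plus the value of beneficial activity foregone."""
    return precaution + residual_harm + collateral

costs = {regime: total_social_cost(*figures) for regime, figures in REGIMES.items()}
for regime, cost in sorted(costs.items(), key=lambda item: item[1]):
    print(f"{regime}: total social cost = {cost}")
print("Efficient placement:", min(costs, key=costs.get))
```

On these stipulated numbers, placing liability on the primary actors minimizes total social cost; the point of the exercise is only that the comparison turns on collateral losses as much as on harm reduction.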

The internet comprises various intermediaries like interactive computer services, internet service providers, content delivery networks, and advertising networks, which facilitate interactions between users, content platforms, and various service providers. See generally David S. Evans, Platform Economics: Essays on Multi-Sided Businesses (2011). Sometimes, intermediaries are the least-cost avoiders, especially when information costs are low enough for them to monitor and control end users effectively, or when it is difficult or impossible to identify bad actors using those platforms. But this is not always the case.

While liability can induce actors to take efficient precautions, intermediaries often cannot implement narrow precautions, due to limited information or control. Facebook’s platform illustrates this challenge: Facebook has little to no access to information about the motivations or design of every one of the millions of ad campaigns run by millions of individual advertisers on its platform at any given time. Thus, avoiding liability risk might entail broad measures, such as curtailing services across the board, including those supporting beneficial activities. If the collateral costs in lost activity are significant, the benefits of imposing intermediary liability may not justify its implementation.

Here, overbroad liability could end up severely reducing the effectiveness of advertising in general. This could result in: (1) less relevant advertisements for users of online services; (2) reduced value of advertising for businesses, harming in particular small businesses with limited advertising budgets; and (3) less revenue for online services that rely on advertising, pressuring them to increase revenue through other means, like higher ad prices and subscriptions.

The individuals and businesses placing advertisements, not the intermediary ad platform, are the primary actors choosing whether and how to use tools for targeting. As we noted above, under the Unruh Act there are permissible uses of targeted advertising, even when focusing on protected classes. The focus in discouraging discrimination should be on primary actors.

It is not hard to locate parties misusing Facebook’s advertising tools in a way that potentially violates the Unruh Act when evidence of discrimination is presented. On the other hand, intermediaries like Facebook will often lack particularized ex ante knowledge of specific discriminatory transactions or direct control over advertisers’ targeting choices. The only avenue for Facebook to comply with broad liability under the Unruh Act is to altogether remove the ability of businesses to use any characteristic that might theoretically trigger Unruh Act liability, which would result in the harms described above. In situations like this, where the intermediary has little ability to effectively police certain misuses of otherwise benign, neutral tools that enhance social welfare, the case for imposing collateral liability is weakened.

Moreover, some statistically disproportionate ad delivery outcomes may be inevitable given the vast scale of platforms like Facebook. Disparate effects should not automatically equate to impermissible discrimination absent purposeful exclusion. The creation of neutral tools for use by advertisers who then use them to break the law does not imply intentionality by Facebook (or any other advertising platform) to break the law. No one would suggest that a hammer company intends for its product to be misused by customers who use it to bludgeon another human being. Nuance is required.

Broad Unruh Act liability risks unintended harms. Imposing a de facto strict liability regime that treats all ad targeting of protected classes as impermissible under the Unruh Act would drive services like Facebook to restrict lawful advertising tools for all users in order to mitigate liability risks. This impairs a large amount of indisputably legal commerce to deter allegedly illegal advertising by a subset of third parties. Moreover, the effects of such a decision would echo not only throughout the advertising ecosystem, but throughout the internet ecosystem in general where intermediaries might provide similar neutral tools that could run afoul of such a broad theory of liability.

Conclusion

The intermediary liability principles outlined above strongly counsel against the overbroad Unruh Act interpretation embraced by the Court of Appeal in the present matter.

The primary actors are the advertisers choosing whether and how to target ads, not Facebook. The Court of Appeal’s broad view wrongly focused on Facebook’s provision of neutral tools rather than advertisers’ specific uses of those tools.

Given the context-dependent nature of an Unruh Act analysis, the Court of Appeal failed to balance appropriately the deterrence of allegedly illegal acts by advertisers against the potential loss of value from targeted advertising altogether. The proper duties of intermediaries like Facebook should be limited to feasible actions, like removing impermissibly exclusionary ads when notified. They should not include disabling essential advertising tools for all users. The Court of Appeal’s overbroad approach would ultimately harm consumer access to targeted advertising.

With the foregoing in mind, we respectfully urge this Court to grant the pending petition for review. Careful examination of the Court of Appeal’s ruling will reveal it strays beyond the Act’s purpose and ignores collateral harms from overdeterrence. Guidance is needed on balancing antidiscrimination aims with the liberty interests of platforms and their users. This case presents an ideal vehicle for this Court to provide that guidance.

Innovation & the New Economy

A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes

ICLE Issue Brief I.       Introduction Proposals to protect children and teens online are among the few issues in recent years to receive at least rhetorical bipartisan support at . . .

I.       Introduction

Proposals to protect children and teens online are among the few issues in recent years to receive at least rhetorical bipartisan support at both the national and state level. Citing findings of alleged psychological harm to teen users,[1] legislators from around the country have moved to pass bills that would require age verification and verifiable parental consent for teens to use social-media platforms.[2] But the primary question these proposals raise is whether such laws will lead to greater parental supervision and protection for teen users, or whether they will backfire and lead teens to become less likely to use the covered platforms altogether.

The answer, this issue brief proposes, is to focus on transaction costs.[3] Or more precisely, the answer can be found by examining how transaction costs operate under the Coase theorem.

The major U.S. Supreme Court cases that have considered laws to protect children by way of parental consent and age verification all cast significant doubt on the constitutionality of such regimes under the First Amendment. The reasoning such cases have employed appears to apply a Coasean transaction-cost/least-cost-avoider analysis, especially with respect to strict scrutiny’s least-restrictive-means test.

This has important implications for recent attempts to protect teens online by way of an imposed duty of care, mandatory age verification, and/or verifiable parental consent. First, it means these solutions are likely unconstitutional. Second, a least-cost-avoider analysis suggests that parents are best positioned to help teens assess the marginal costs and benefits of social media, by way of the power of the purse and through available technological means. Placing the full burden of externalities on social-media companies would reduce the options available to parents and teens, who could be excluded altogether if transaction costs are so large as to foreclose negotiation among the parties. This would mean denying teens the overwhelming benefits of social-media usage.

Part II of this brief will define transaction costs and summarize the Coase theorem, with an eye toward how these concepts can help to clarify potential spillover harms and benefits arising from teens’ social-media usage. Part III will examine three major Supreme Court cases that considered earlier parental-consent and age-verification regimes enacted to restrict minors’ access to allegedly harmful content, while arguing that one throughline in the jurisprudence has been the implicit application of least-cost-avoider analysis. Part IV will argue that, even in light of how the internet ecosystem has developed, the Coase theorem’s underlying logic continues to suggest that parents and teens working together are the least-cost avoiders of harmful internet content.

Part V will analyze proposed legislation and recently enacted bills, some of which already face challenges in the federal courts, and argue that the least-cost-avoider analysis embedded in Supreme Court precedent should continue to foreclose age-verification and parental-consent laws. Part VI concludes.

II.     The Coase Theorem and Teenage Use of Social-Media Platforms

A.    The Coase Theorem Briefly Stated and Defined

The Coase theorem has been described as “the bedrock principle of modern law and economics,”[4] and the essay that initially proposed it may be the most-cited law-review article ever published.[5] Drawn from Ronald Coase’s seminal work “The Problem of Social Cost”[6] and subsequent elaborations in the literature,[7] the theorem suggests that:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the lowest-cost avoider, while taking into consideration the total social costs of the institutional framework.

A few definitions are in order. An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action. When we say that such an externality is bilateral, it is to say that it takes two to tango: only when there is a conflict in the use or enjoyment of property is there an externality problem.

Transaction costs are the additional costs borne in the process of buying or selling, separate and apart from the price of the good or service itself—i.e., the costs of all actions involved in an economic transaction. Where transaction costs are present and sufficiently large, they may prevent otherwise beneficial agreements from being concluded. Institutional frameworks determine the rules of the game, including who should bear transaction costs. In order to maximize efficiency, the Coase theorem holds that the burden of avoiding negative externalities should be placed on the party or parties that can avoid them at the lowest cost.
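A toy numerical model can make the theorem’s second and third propositions concrete. The sketch below is purely illustrative: the payoffs, the transaction-cost figures, and the labels (“actor” and “neighbor”) are invented for exposition and correspond to nothing in the cases or legislation discussed in this brief.

```python
# A minimal, purely illustrative sketch of the Coase theorem.
# All payoffs and transaction-cost figures are hypothetical.

def activity_proceeds(gain: int, harm: int, transaction_cost: int,
                      right_holder: str) -> bool:
    """Return True if the externality-producing activity goes forward.

    The efficient outcome is to proceed iff gain > harm. The parties bargain
    away from the initial entitlement only when the surplus the bargain
    creates exceeds the cost of striking it.
    """
    surplus = abs(gain - harm)
    bargain_feasible = surplus > transaction_cost
    if right_holder == "actor":
        # Default: the activity proceeds; the harmed party may pay to stop it.
        return not (harm > gain and bargain_feasible)
    # Default: the activity stops; the actor may pay for permission.
    return gain > harm and bargain_feasible

# gain = 100 and harm = 60, so proceeding is the efficient outcome.
for tc in (0, 50):
    for holder in ("actor", "neighbor"):
        result = activity_proceeds(gain=100, harm=60,
                                   transaction_cost=tc, right_holder=holder)
        print(f"transaction cost {tc:>2}, entitlement to {holder:<8}: "
              f"proceeds = {result}")
```

With zero transaction costs, the activity proceeds no matter who holds the entitlement (proposition 2); once the transaction cost exceeds the available surplus, the initial allocation controls the outcome (proposition 3), which is why proposition 4 directs the law to place the burden on the lowest-cost avoider.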

A related and interesting literature focuses on whether the common law is efficient, and the mechanisms by which that may come to be the case.[8] Todd J. Zywicki and Edward P. Stringham argue—contra the arguments of Judge Richard Posner—that the common law’s relative efficiency is a function of the legal process itself, rather than whether judges implicitly or explicitly adopt efficiency or wealth maximization as goals.[9] Zywicki & Stringham find both demand-side and supply-side factors that tend to promote efficiency in the common law, but note that the supply-side factors (e.g., competitive courts for litigants) have changed over time in ways that may result in diminished incentives for efficiency.[10] Their central argument is that the re-litigation of inefficient rules eventually leads to the adoption of more efficient ones.[11] Efficiency itself, they argue, is also best understood as the ability to coordinate plans, rather than as wealth maximization.[12]

In contrast to common law, there is a relative paucity of literature on whether constitutional law follows a pattern of efficiency. For example, one scholar notes that citations to Coase’s work in the corpus of constitutional-law scholarship are actually exceedingly rare.[13] This brief seeks to contribute to the law & economics literature by examining how the Supreme Court appears implicitly to have adopted one version of efficiency—the least-cost-avoider principle—in its First Amendment reviews of parental-consent and age-verification laws under the compelling-government-interest and least-restrictive-means tests.

B.     Applying the Coase Theorem to Teenage Social-Media Usage

The Coase theorem’s basic insights are useful in evaluating not only legal decisions, but also legislation. Here, this means considering issues related to children and teenagers’ online social-media usage. Social-media platforms, teenage users, and their parents are the parties at issue in this example. While social-media platforms create incredible value for their users,[14] they also arguably impose negative externalities on both teens and their parents.[15] The question here, as it was for Coase, is how to deal with those externalities.

The common-law framework of rights in this scenario is to allow minors to enter into enforceable agreements, except where they are void for public-policy reasons. As Adam Candeub points out:

Contract law is a creature of state law, and states require parental consent for minors entering all sorts of contracts for services or receiving privileges, including getting a tattoo, obtaining a driver’s license, using a tanning facility, purchasing insurance, and signing liability waivers. As a general rule, all contracts with minors are valid, but with certain exceptions they are voidable. And even though a minor can void most contracts he enters into, most jurisdictions have laws that hold a minor accountable for the benefits he received under the contract. Because children can make enforceable contracts for which parents could end up bearing responsibility, it is a reasonable regulation to require parental consent for such contracts. The few courts that have addressed the question of the enforceability of online contracts with minors have held the contracts enforceable on the receipt of the mildest benefit.[16]

Of course, many jurisdictions have passed laws requiring age-verification for various transactions prohibited to minors, such as laws for buying alcohol or tobacco,[17] obtaining driver’s licenses,[18] and buying lottery tickets or pornography.[19] Through the Children’s Online Privacy Protection Act and its regulations, the federal government also requires that online platforms obtain verifiable parental consent before they are permitted to collect certain personal information regarding children under age 13.[20]

The First Amendment, however, has been found to protect minors’ ability to receive speech, including through commercial transactions.[21] The question therefore arises: how should the law regard minors’ ability to access information on social-media platforms? In recent years, multiple jurisdictions have responded to this question by proposing or passing age-verification and parental-consent laws for teens’ social-media usage.[22]

As will be detailed below,[23] while the internet has contributed to significant reductions in transaction costs, they are still present. Thus, in order to maximize social-media platforms’ benefits while minimizing the negative externalities they impose, policymakers should endeavor to place the burden of avoiding the harms associated with teen use on the least-cost avoider. I argue that the least-cost avoider is parents and teens working together to make marginal decisions about social-media use, including by exploiting relatively low-cost practical and technological tools to avoid harmful content. The thesis of this issue brief is that this finding is consistent with the implicit Coasean reasoning in the Supreme Court’s major First Amendment cases on parental consent and age verification.

III.   Major Supreme Court Cases on Parental Consent and Age Verification

Parental-consent and age-verification laws that seek to protect minors from harmful content are not new. The Supreme Court has had occasion to review several of them, while applying First Amendment scrutiny. An interesting aspect of this line of cases is that the Court appears implicitly to have used Coasean analysis in understanding who should bear the burden of avoiding harms associated with speech platforms.

Specifically, in each case, after an initial finding that the restrictions were content-based, the Court applied strict scrutiny. Thus, the burden was placed on the government to prove the relevant laws were narrowly tailored to a compelling government interest using the least-restrictive means. The Court’s transaction-cost analysis is implicit throughout the descriptions of the problem in each case. But the main area of analysis below will be from each case’s least-restrictive-means test section, with a focus on the compelling-state-interest test in Part III.C. Parts III.A, III.B, and III.C will deal with each of these cases in turn.

A.    United States v Playboy Entertainment Group

In United States v. Playboy Entertainment Group,[24] the Supreme Court reviewed § 505 of the Telecommunications Act of 1996, which required “cable television operators who provide channels ‘primarily dedicated to sexually-oriented programming’ either to ‘fully scramble or otherwise fully block’ those channels or to limit their transmission to hours when children are unlikely to be viewing, set by administrative regulation as the time between 10 p.m. and 6 a.m.”[25] Even prior to the regulations promulgated pursuant to the law, cable operators used technological means called “scrambling” to blur sexually explicit content for those viewers who did not explicitly subscribe to such content, but there were reported problems with “signal bleed” that allowed some audio and visual content to reach nonsubscribers.[26] Following the regulation, cable operators responded by shifting the hours when such content would be aired—i.e., by making it unavailable for 16 hours a day. This prevented cable subscribers from viewing purchased content of their choosing at times they would prefer.[27]

The basic Coasean framework is present right from the description of the problems that the statute and regulations were trying to solve. As the Court put it:

Two essential points should be understood concerning the speech at issue here. First, we shall assume that many adults themselves would find the material highly offensive; and when we consider the further circumstance that the material comes unwanted into homes where children might see or hear it against parental wishes or consent, there are legitimate reasons for regulating it. Second, all parties bring the case to us on the premise that Playboy’s programming has First Amendment protection. As this case has been litigated, it is not alleged to be obscene; adults have a constitutional right to view it; the Government disclaims any interest in preventing children from seeing or hearing it with the consent of their parents; and Playboy has concomitant rights under the First Amendment to transmit it. These points are undisputed.[28]

In Coasean language, the parties at issue were the cable operators, the providers of sexually explicit programming, adult cable subscribers, and their children. Cable television provides tremendous value to its customers, including sexually explicit subscription content that is valued by those subscribers. There is, however, a negative externality to the extent that such programming may become available to children whose parents find it inappropriate. The Court noted that some parents may allow their children to receive such content, and the government disclaimed an interest in preventing such reception with parental consent. Given imperfect scrambling technology, this possible negative externality was clearly present. The question that arose was whether the transaction costs imposed by Section 505’s time-shifting requirements had the effect of restricting adults’ ability to make such viewing decisions for themselves and on behalf of their children.

After concluding that Section 505 was a content-based restriction, due to the targeting of specific adult content and specific programmers, the Court stated that when a content-based restriction is designed “to shield the sensibilities of listeners, the general rule is that the right of expression prevails, even where no less restrictive alternative exists. We are expected to protect our own sensibilities ‘simply by averting [our] eyes.’” [29]

This application of strict scrutiny does not change, the Court noted, because we are dealing in this instance with children or the issue of parental consent:

No one suggests the Government must be indifferent to unwanted, indecent speech that comes into the home without parental consent. The speech here, all agree, is protected speech; and the question is what standard the Government must meet in order to restrict it. As we consider a content-based regulation, the answer should be clear: The standard is strict scrutiny. This case involves speech alone; and even where speech is indecent and enters the home, the objective of shielding children does not suffice to support a blanket ban if the protection can be accomplished by a less restrictive alternative.[30]

Again, using our Coasean translator, we can read the opinion as saying the least-cost way to avoid the negative externality of unwanted adult content is by just not looking at it, or for parents to use the means available to them to prevent their children from viewing it.

In fact, that is exactly where the Court goes, by comparing, under the least-restrictive-means test, the targeted blocking mechanism made available in Section 504 of the statute to the requirements imposed by Section 505:

[T]argeted blocking enables the Government to support parental authority without affecting the First Amendment interests of speakers and willing listeners—listeners for whom, if the speech is unpopular or indecent, the privacy of their own homes may be the optimal place of receipt. Simply put, targeted blocking is less restrictive than banning, and the Government cannot ban speech if targeted blocking is a feasible and effective means of furthering its compelling interests. This is not to say that the absence of an effective blocking mechanism will in all cases suffice to support a law restricting the speech in question; but if a less restrictive means is available for the Government to achieve its goals, the Government must use it.[31]

Moreover, the Court found that the fact that parents largely eschewed the available low-cost means to avoid the harm was not necessarily sufficient for the government to prove that it is the least-restrictive alternative:

When a plausible, less restrictive alternative is offered to a content-based speech restriction, it is the Government’s obligation to prove that the alternative will be ineffective to achieve its goals. The Government has not met that burden here. In support of its position, the Government cites empirical evidence showing that § 504, as promulgated and implemented before trial, generated few requests for household-by-household blocking. Between March 1996 and May 1997, while the Government was enjoined from enforcing § 505, § 504 remained in operation. A survey of cable operators determined that fewer than 0.5% of cable subscribers requested full blocking during that time. Id., at 712. The uncomfortable fact is that § 504 was the sole blocking regulation in effect for over a year; and the public greeted it with a collective yawn.[32]

This is because there were, in fact, other market-based means available for parents to use to avoid the harm of unwanted adult programming,[33] and the government had not proved that Section 504, with more adequate notice, would be ineffective.[34] The Court concluded its least-restrictive-means analysis by saying:

Even upon the assumption that the Government has an interest in substituting itself for informed and empowered parents, its interest is not sufficiently compelling to justify this widespread restriction on speech. The Government’s argument stems from the idea that parents do not know their children are viewing the material on a scale or frequency to cause concern, or if so, that parents do not want to take affirmative steps to block it and their decisions are to be superseded. The assumptions have not been established; and in any event the assumptions apply only in a regime where the option of blocking has not been explained. The whole point of a publicized § 504 would be to advise parents that indecent material may be shown and to afford them an opportunity to block it at all times, even when they are not at home and even after 10 p.m. Time channeling does not offer this assistance. The regulatory alternative of a publicized § 504, which has the real possibility of promoting more open disclosure and the choice of an effective blocking system, would provide parents the information needed to engage in active supervision. The Government has not shown that this alternative, a regime of added communication and support, would be insufficient to secure its objective, or that any overriding harm justifies its intervention.[35]

In Coasean language, the government’s imposition of transaction costs through time-shifting channels is not the least-cost way to avoid the harm. By publicizing the blocking mechanism of Section 504, as well as promoting market-based alternatives like VCRs to record programming for playback later or blue-screen technology that blocks scrambled video, adults would be able to effectively act as least-cost avoiders of harmful content, including on behalf of their children.

B.     Ashcroft v ACLU

In Ashcroft v. ACLU,[36] the Supreme Court reviewed a U.S. District Court’s preliminary injunction against the age-verification requirements imposed by the Child Online Protection Act (COPA), which was designed to “protect minors from exposure to sexually explicit materials on the Internet.”[37] The law created criminal penalties “of a $50,000 fine and six months in prison for the knowing posting, for ‘commercial purposes,’ of World Wide Web content that is ‘harmful to minors.’”[38] The law did, however, provide an escape hatch, through:

…an affirmative defense to those who employ specified means to prevent minors from gaining access to the prohibited materials on their Web site. A person may escape conviction under the statute by demonstrating that he

“has restricted access by minors to material that is harmful to minors—

“(A) by requiring use of a credit card, debit account, adult access code, or adult personal identification number;

“(B) by accepting a digital certificate that verifies age; or

“(C) by any other reasonable measures that are feasible under available technology.” § 231(c)(1).[39]

Here, the Coasean analysis of the problem is not stated as explicitly as in Playboy, but it is still apparent. The internet clearly provides substantial value to users, including those who want to view pornography. But there is a negative externality in internet pornography’s broad availability to minors for whom it would be inappropriate. Thus, to prevent these harms, COPA established a criminal regulatory scheme with an age-verification defense. The threat of criminal penalties, combined with the age-verification regime, imposed high transaction costs on online publishers who post content defined as harmful to minors. This leaves adults (including parents of children) and children themselves as the other relevant parties. Again, the question is: who is the least-cost avoider of the possible negative externality of minor access to pornography? The adult-content publisher or the parents, using technological and practical means?

The Court immediately went to an analysis of the least-restrictive-means test, defining the inquiry as follows:

In considering this question, a court assumes that certain protected speech may be regulated, and then asks what is the least restrictive alternative that can be used to achieve that goal. The purpose of the test is not to consider whether the challenged restriction has some effect in achieving Congress’ goal, regardless of the restriction it imposes. The purpose of the test is to ensure that speech is restricted no further than necessary to achieve the goal, for it is important to ensure that legitimate speech is not chilled or punished. For that reason, the test does not begin with the status quo of existing regulations, then ask whether the challenged restriction has some additional ability to achieve Congress’ legitimate interest. Any restriction on speech could be justified under that analysis. Instead, the court should ask whether the challenged regulation is the least restrictive means among available, effective alternatives.[40]

The Court then considered the available alternative to COPA’s age-verification regime: blocking and filtering software. It found that such tools are clearly less-restrictive means, focusing not only on the software’s ability to let parents prevent their children from accessing inappropriate material, but also on the fact that adults would retain access to any content blocked by a filter simply by turning it off.[41] In fact, the Court noted that the evidence presented to the District Court suggested that filters, while imperfect, were probably even more effective than the age-verification regime.[42] Finally, the Court noted that, even if Congress could not require filtering software, it could encourage its use through parental education, by providing incentives to libraries and schools to use it, and by subsidizing development of the industry itself. Each of these, the Court argued, would be a clearly less-restrictive means of promoting COPA’s goals.[43]

In Coasean language, the Court found that parents using technological and practical means are the least-cost avoider of the harm of exposing children to unwanted adult content. Government promotion and support of those means were held up as clearly less-restrictive alternatives than imposing transaction costs on publishers of adult content.

C.    Brown v Entertainment Merchants Association

In Brown v. Entertainment Merchants Association,[44] the Court considered California Assembly Bill 1179, which prohibited the sale or rental of “violent video games” to minors.[45] The Court first disposed of the argument that the government could create a new category of speech that it considered unprotected, just because it is directed at children, stating:

The California Act is something else entirely. It does not adjust the boundaries of an existing category of unprotected speech to ensure that a definition designed for adults is not uncritically applied to children. California does not argue that it is empowered to prohibit selling offensively violent works to adults—and it is wise not to, since that is but a hair’s breadth from the argument rejected in Stevens. Instead, it wishes to create a wholly new category of content-based regulation that is permissible only for speech directed at children.

That is unprecedented and mistaken. “[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” Erznoznik v. Jacksonville, 422 U.S. 205, 212-213, 95 S.Ct. 2268, 45 L.Ed.2d 125 (1975) (citation omitted). No doubt a State possesses legitimate power to protect children from harm, Ginsberg, supra, at 640-641, 88 S.Ct. 1274; Prince v. Massachusetts, 321 U.S. 158, 165, 64 S.Ct. 438, 88 L.Ed. 645 (1944), but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Erznoznik, supra, at 213-214, 95 S.Ct. 2268.[46]

The Court rejected the argument that there was any “longstanding tradition” of restricting children’s access to depictions of violence, as demonstrated by copious examples of violent content in children’s books, high-school reading lists, motion pictures, radio dramas, comic books, television, music lyrics, etc. Moreover, to the extent there was a time when government enforced such regulations, the courts eventually overturned them.[47] The fact that video games are interactive did not matter either, the Court found, as all literature is potentially interactive, especially genres like choose-your-own-adventure stories.[48]

Thus, because the law was clearly content-based, the Court applied strict scrutiny. The Court was skeptical even that the government had a compelling state interest, finding the law to be both seriously over- and under-inclusive. The same effects attributed to exposure to violent video games, the Court noted, could equally be produced by cartoons not subject to the law’s provisions. Moreover, the law allowed a parent or guardian (or any adult) to buy violent video games for their children.[49]

The Court then turned to the law’s real justification, which it summarily rejected as inconsistent with the First Amendment:

California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.[50]

In Coasean language, the Court is saying that video games—even violent ones—are subjectively valued by those who play them, including minors. There may be negative externalities from playing such games, in that exposure to violence could be linked to psychological harm, and that they are interactive, but these content and design features are still protected speech. Placing the transaction costs on parents/adults to buy such games on behalf of minors, just in case some parents disapprove of their children playing them, is not a compelling state interest.

While the Court focused only on whether California’s statutory scheme regulating violent video games served a compelling state interest, some of its language would apply equally to a least-restrictive-means analysis:

But leaving that aside, California cannot show that the Act’s restrictions meet a substantial need of parents who wish to restrict their children’s access to violent video games but cannot do so. The video-game industry has in place a voluntary rating system designed to inform consumers about the content of games. The system, implemented by the Entertainment Software Rating Board (ESRB), assigns age-specific ratings to each video game submitted: EC (Early Childhood); E (Everyone); E10+ (Everyone 10 and older); T (Teens); M (17 and older); and AO (Adults Only—18 and older). App. 86. The Video Software Dealers Association encourages retailers to prominently display information about the ESRB system in their stores; to refrain from renting or selling adults-only games to minors; and to rent or sell “M” rated games to minors only with parental consent. Id., at 47. In 2009, the Federal Trade Commission (FTC) found that, as a result of this system, “the video game industry outpaces the movie and music industries” in “(1) restricting target-marketing of mature-rated products to children; (2) clearly and prominently disclosing rating information; and (3) restricting children’s access to mature-rated products at retail.” FTC, Report to Congress, Marketing Violent Entertainment to Children 30 (Dec. 2009), online at http://www.ftc.gov/os/2009/12/P994511violententertainment.pdf (as visited June 24, 2011, and available in Clerk of Court’s case file) (FTC Report). This system does much to ensure that minors cannot purchase seriously violent games on their own, and that parents who care about the matter can readily evaluate the games their children bring home. Filling the remaining modest gap in concerned parents’ control can hardly be a compelling state interest.

And finally, the Act’s purported aid to parental authority is vastly overinclusive. Not all of the children who are forbidden to purchase violent video games on their own have parents who care whether they purchase violent video games. While some of the legislation’s effect may indeed be in support of what some parents of the restricted children actually want, its entire effect is only in support of what the State thinks parents ought to want. This is not the narrow tailoring to “assisting parents” that restriction of First Amendment rights requires.[51]

In sum, the Court suggests that the law would not be narrowly tailored, because there are already market-based systems in place to help parents and minors make informed decisions about which video games to buy—most importantly from the rating system that judges appropriateness by age and offers warnings about violence. Government paternalism is simply insufficient to justify imposing new transaction costs on parents and minors who wish to buy even violent video games.

Interestingly, the concurrence of Justice Samuel Alito, joined by Chief Justice John Roberts, also contains some language that could be interpreted through a Coasean lens. The concurrence allows, in particular, the possibility that harms from interactive violent video games may differ from other depictions of violence that society has allowed children to view, although it concludes that reasonable minds may differ.[52] In other words, the concurrence basically notes that the negative externalities may be greater than the majority opinion would allow, but nonetheless, that Justices Alito and Roberts agreed the law was not drafted in a constitutional manner that comports with the obscenity exception to the First Amendment.

Nonetheless, it appears the Court applies an implicit Coasean framework when it rejects the imposition of transaction costs on parents and minors to gain access to protected speech—in this case, violent video games. Parents and minors remain the least-cost avoiders of the potential harms of violent video games.

IV.   Coase Theorem Applied to Age-Verification and Verifiable-Consent Laws

As outlined above, the issue is whether age-verification and parental-consent laws are needed to address the negative externalities social media imposes on minor users. This section will analyze that question under the Coasean framework introduced in Part II.

The basic argument proceeds as follows:

  1. Transaction costs for age verification and verifiable consent from parents and/or teens are sufficiently large to prevent a bargain from being struck;
  2. The lowest-cost avoiders are parents and teens working together, using practical and technological means, including low-cost monitoring and filtering services, to make marginal decisions about minors’ social-media use; and
  3. Placing the transaction costs on social-media companies to obtain age verification and verifiable consent from parents and/or teens would actually reduce parents’ and teens’ ability to make marginal decisions about minors’ social-media use, as social-media companies would respond by investing more in excluding minors from access than in creating safe and vibrant spaces for interaction.

Part IV.A will detail the substantial transaction costs associated with obtaining age verification and verifiable parental consent. Part IV.B argues that parents and teens working together using practical and technological means are the lowest-cost avoiders of the harms of social-media use. Part IV.C will consider the counterfactual scenario of placing the transaction costs on social-media companies and argue that the result would be teens’ exclusion from social media, to their detriment, as well as the detriment of parents who would have made different choices.

A.    Transaction Costs, Age Verification, and Verifiable Parental Consent[53]

As Coase taught, in a world without transaction costs (or where such costs are sufficiently low), age-verification laws or mandates to obtain verifiable parental consent would not matter, because the parties would bargain to arrive at an efficient solution. Because there are high transaction costs that prevent such bargains from being easily struck, making the default that teens cannot join social media without verifiable parental consent could have the effect of excluding them from the great benefits of social media usage altogether.[54]
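The logic can be rendered in stylized terms. In the sketch below, the symbols are illustrative constructs, not figures drawn from any record: let $B$ denote the joint benefits a parent-teen pair expects from the teen’s social-media use, $H$ the expected harms, and $T$ the transaction cost of completing verifiable parental consent.

\[
T \approx 0: \quad \text{use occurs} \iff B > H \quad \text{(the efficient result, whatever the legal default)}
\]

\[
T \gg 0: \quad \text{under a consent-first default, use occurs} \iff B - H > T
\]

Any use for which $0 < B - H < T$ is therefore foreclosed, even though it would have been worthwhile. When bargaining is costly, the assignment of the default, not the balance of benefits and harms, determines the outcome.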

There is considerable evidence that, even though the internet and digital technology have reduced transaction costs considerably across a wide range of fronts,[55] transaction costs remain high when it comes to age verification and verifiable parental consent. One data point supporting this conclusion is the experience of social-media platforms under the Children’s Online Privacy Protection Act (COPPA).[56] In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[57] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[58]

The rule change and settlement increased the transaction costs imposed on social-media platforms by requiring verifiable parental consent. YouTube’s economically rational response was to restrict the content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The end result was less content created for children, with competitive effects to boot:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[59]
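The liability shift described in the quoted passage can be made concrete with a simple expected-cost comparison. The following sketch is illustrative only: the $42,530 maximum per-video fine comes from the study quoted above, while the personalized-advertising revenue premium is a purely hypothetical assumption.

# Illustrative sketch (Python): a creator's calculus for failing to
# self-designate a child-directed video after the YouTube settlement.
# The $42,530 fine figure is from the study quoted above; the ad-revenue
# premium is a hypothetical assumption.

MAX_FINE_PER_VIDEO = 42_530      # maximum COPPA fine per mislabeled video
ad_premium_per_video = 200.0     # assumed extra personalized-ad revenue (hypothetical)

# Mislabeling pays in expectation only while the detection probability
# stays below the ratio of the revenue premium to the fine.
breakeven_detection_prob = ad_premium_per_video / MAX_FINE_PER_VIDEO
print(f"breakeven detection probability: {breakeven_detection_prob:.2%}")  # ~0.47%

On these assumptions, even a modest chance of detection, whether by YouTube’s automated classifier or by the FTC, makes self-designation (and the loss of personalized-ad revenue) the rational choice, consistent with the observed decline in child-directed content.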

This is not the only finding regarding COPPA’s role in reducing the production of content for children. Morgan Reed, president of the App Association, a global trade association for small and medium-sized technology companies, presented extensively at the FTC’s 2019 COPPA Workshop.[60] Reed’s testimony detailed how the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children. Particularly noteworthy is Reed’s repeated use of the words “friction,” “restriction,” and “cost” to describe how COPPA’s institutional environment affects the behavior of social-media platforms, parents, and children. While general-audience content is, as Reed noted, “unfettered, meaning that you don’t feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” COPPA-regulated apps and content are, Reed said, all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[61]

Reed’s use of the word “friction” is particularly enlightening. Mike Munger has often described transaction costs as frictions, explaining that, to consumers, all costs are transaction costs.[62] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents get fewer, and lower-quality, children’s apps and content.

A similar example can be seen in the various battles between traditional media and social-media companies in Australia, Canada, and the EU, where laws have been passed that would require platforms to pay for linking to certain news content.[63] Because these laws raise transaction costs, social-media platforms have responded by restricting access to news links,[64] to the detriment of users and the news-media organizations themselves. In other words, much like with verifiable parental consent, the intent of these laws is thwarted by the underlying economics.

More evidence that imposing transaction costs on social-media companies can diminish the user experience can be found in the preliminary injunction issued by the U.S. District Court for the Western District of Texas in Free Speech Coalition Inc. v. Colmenero.[65] The court cited evidence from the plaintiff’s complaint that included bills for “several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications.”[66] The court also noted that “[Texas law] H.B. 1181 imposes substantial liability for violations, including $10,000.00 per day for each violation, and up to $250,000.00 if a minor is shown to have viewed the adult content.”[67]
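A rough sense of scale follows from those figures. In the sketch below, only the $0.40 minimum per-verification price (i.e., $40,000 per 100,000 verifications) and the penalty amounts come from the record the court cited; the traffic figure is a hypothetical assumption.

# Back-of-the-envelope sketch (Python) of the compliance floor implied by
# the Colmenero record. Only the per-verification price and the penalty
# figures come from the record; the traffic figure is hypothetical.

cost_per_verification = 40_000 / 100_000   # $0.40 minimum, per the cited bills
monthly_verifications = 5_000_000          # hypothetical traffic assumption

annual_floor = cost_per_verification * monthly_verifications * 12
print(f"annual verification outlay (floor): ${annual_floor:,.0f}")
# -> $24,000,000 per year at the minimum quoted price, before adding
#    H.B. 1181's penalty exposure of $10,000 per day per violation and up
#    to $250,000 if a minor is shown to have viewed the content.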

Moreover, the transaction costs in this example also include the subjective costs borne by those who actually go through with verifying their age to access pornography. As the court noted, “the law interferes with the Adult Video Companies’ ability to conduct business, and risks deterring adults from visiting the websites.”[68] The court issued a preliminary injunction against the law’s age-verification provision, finding that other means—such as content-filtering software—are clearly more effective than age verification at protecting children from unwanted content.[69]

In sum, transaction costs for age verification and verifiable parental consent are sufficiently high as to prevent an easy bargain from being struck. Thus, which party bears the burden of those costs will determine the outcome. The lessons from COPPA, news-media laws, and online-pornography age-verification laws are clear: if the transaction costs are imposed on the online platforms and apps, it will lead to access restrictions on the speech those platforms provide, almost all of which is protected speech. This is the type of collateral censorship that the First Amendment is designed to avoid.[70]

B.     Parents and Teens as the Least-Cost Avoiders of Negative Externalities

If transaction costs due to online age-verification and verifiable-parent-consent laws are substantial, the question becomes which party or parties should be subject to the burden of avoiding the harms arising from social-media usage.
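The assignment principle at work can be stated formally, in deliberately stylized terms (the notation is illustrative, not drawn from any case or record). If each candidate party $i$ could avoid a given harm at cost $c_i$, the efficient rule places the burden on the least-cost avoider:

\[
i^{*} = \arg\min_{i \in \{\text{platforms},\ \text{parents and teens}\}} c_i
\]

The remainder of this Part argues that, for the harms these laws target, $c_{\text{parents and teens}}$ (device-level controls, filtering tools, supervision at the margin) is low, while $c_{\text{platforms}}$ (verification infrastructure, liability exposure, and the error cost of excluding teens from valuable speech) is high.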

It is possible, in theory, that social-media platforms are best positioned to monitor and control content posted to their platforms—for instance, when it comes to harms associated with anonymous or pseudonymous accounts imposing social costs on society.[71] In such cases, a duty of care that would allow for intermediary liability against social-media companies may make sense.[72]

On the other hand, when it comes to online age-verification and parental-consent laws, widely available practical and technological means appear to be the lowest-cost way to avoid the negative externalities associated with social-media usage. As NetChoice put it in its complaint against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[73]

In its complaint, NetChoice recognizes the subjective nature of negative externalities, stating:

Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[74]

The complaint then expertly lists the many ways that parents can take control and help their children avoid online harms, beginning with the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[75] Parents can also choose to use tools from cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[76] It also points to wireless routers that allow parents to filter and monitor online content;[77] parental controls at the device level;[78] third-party filtering applications;[79] and numerous tools offered by NetChoice members, all of which allow for relatively low-cost monitoring and control by parents and even by teen users acting on their own behalf.[80] Finally, it notes that NetChoice members, in response to market demand,[81] expend significant resources curating content to ensure it is appropriate.[82]

The recent response from the Australian government to the proposed “Roadmap for Age Verification”[83] buttresses this analysis. The government pulled back from plans to “force adult websites to bring in age verification following concerns about privacy and the lack of maturity of the technology.”[84] In particular, the government noted that:

It is clear from the Roadmap that at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness and implementation issues. For age assurance to be effective, it must:

  • work reliably without circumvention;
  • be comprehensively implemented, including where pornography is hosted outside of Australia’s jurisdiction; and
  • balance privacy and security, without introducing risks to the personal information of adults who choose to access legal pornography.

Age assurance technologies cannot yet meet all these requirements. While industry is taking steps to further develop these technologies, the Roadmap finds that the age assurance market is, at this time, immature.

The Roadmap makes clear that a decision to mandate age assurance is not ready to be taken.[85]

As a better solution, the government offered “[m]ore support and resources for families,”[86] including promoting tools already available in the marketplace to help prevent children from accessing inappropriate content like pornography,[87] and promoting education for both parents and children on how to avoid online harms.[88]

In sum, this is all about transaction costs. The least-cost avoiders of the negative externalities associated with social-media usage are parents and teens themselves, working together to make marginal decisions about how to use these platforms through widely available practical and technological means.

C.    Teen Exclusion Online and Reduced Parental Involvement in Social-Media Usage Decisions

If the burden of avoiding negative externalities is placed on social-media platforms, the result could be considerable collateral censorship of protected speech. The reason, as explained above in Part IV.A, is transaction costs. Thus, while one could argue that the externalities social-media platforms impose on teen users and their parents represent a market failure, that is not the end of the analysis. Transaction costs help to explain how the institutional environment we create shapes the rules of the game that platforms, parents, and teens follow. If transaction costs are too high and misplaced on social-media platforms, parents’ and teens’ ability to control how they use social media will actually suffer.

As can be seen most prominently in the COPPA examples discussed above,[89] the burden of obtaining verifiable parental consent leads platforms to reallocate investment toward excluding the protected class—in that case, children under age 13—rather than toward creating a safe and vibrant community from which children could benefit. Thus, proposals like COPPA 2.0,[90] which would extend the need for verifiable consent to teens, could yield an equivalent result: greater exclusion of teens. State laws requiring age verification and verifiable parental consent for teens are likely to produce the same result. The irony, of course, is that parental-consent laws would actually reduce the available choices for those parents who see value in social media for their teenagers.

In sum, the economics of transaction costs explains why age-verification and verifiable-parental-consent laws will not satisfy their proponents’ stated objectives. As with minimum-wage laws[91] and rent control,[92] economics helps to explain the counterintuitive result that well-intentioned laws can produce exactly the opposite of their intended effect. Here, that means age-verification and verifiable-parental-consent laws would leave parents and teens less able to make meaningful marginal decisions about the costs and benefits of their own social-media usage.

V.     The Unconstitutionality of Social-Media Verification and Verifiable-Consent Laws

Bringing this all together, Part V will consider the constitutionality of the enacted and proposed laws on age verification and verifiable parental consent under the First Amendment. As several courts have already suggested, these laws will not survive First Amendment scrutiny.

The first question is whether these laws will be subject to strict scrutiny (because they are content-based) or instead to intermediate scrutiny as content-neutral regulations. There is a possibility that it will not matter, because a court could find—as one already has—that such laws burden more speech than necessary anyway. Part V.A will take up these questions.

The second set of questions is whether, assuming strict scrutiny applies, these enacted and proposed laws could survive the least-restrictive-means test. Part V.B will consider this set of questions and argue that, because parents and teens working together with widely available practical and technological means are the lowest-cost avoiders of negative externalities, relying on them also represents the least-restrictive means of promoting the government’s interest in protecting minors from the harms of social media.

A.    Questions of Content Neutrality

The first important question is whether laws that attempt to protect minors from externalities associated with social-media usage are content-neutral. One argument that has been put forward is that they are simply content-neutral contract laws that shift the consent default to parents before teens can establish an ongoing contractual relationship with a social-media company by creating a profile.[93]

Before delving into whether that argument could work, it is worth considering laws that are clearly content-based to help tell the difference. For instance, the Texas law challenged in Free Speech Coalition v. Colmenero is clearly content-based, because “the regulation is based on whether content contains sexual material.”[94]

Similarly, laws like the Kids Online Safety Act (KOSA)[95] are content-based, in that they require covered platforms to take:

reasonable measures in its design or operation of products and services to prevent or mitigate the following:

  1. Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.

  2. Patterns of use that indicate or encourage addiction-like behaviors.

  3. Physical violence, online bullying, and harassment of the minor.

  4. Sexual exploitation and abuse.

  5. Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.

  6. Predatory, unfair, or deceptive marketing practices, or other financial harms.[96]

While parts 4-6 and actual physical violence all constitute either unprotected speech or conduct, decisions about how to present the information covered by part 2 are arguably protected speech.[97] Even true threats like online bullying and harassment are speech subject to at least some First Amendment scrutiny, in that a showing of some type of mens rea would be required for liability to be constitutional.[98] Part 1 may be unconstitutionally vague as written.[99] Moreover, parts 1-3 are clearly content-based, in that it is necessary to consider the content presented, which will include at least some protected speech. The same applies equally to the California Age-Appropriate Design Code,[100] which obligates covered companies to identify and mitigate speech that is harmful or potentially harmful to users under 18 years old, and to prioritize speech that promotes such users’ well-being and best interests.[101]

In each of these cases, it would be difficult to argue that strict scrutiny ought not apply. On the other hand, some have argued that the Utah and Arkansas laws requiring age verification and verifiable parental consent are simply content-neutral regulations of contract formation, which can be considered independently of speech.[102] Arkansas has argued that Act 689’s age-verification requirements are “merely a content-neutral regulation on access to speech at particular ‘locations,’ so intermediate scrutiny should apply.”[103]

But even in NetChoice v. Griffin,[104] the U.S. District Court in Arkansas, while skeptical that the law was content-neutral,[105] proceeded as if it were and still found, in granting a preliminary injunction, that the age-verification law “is likely to unduly burden adult and minor access to constitutionally protected speech.”[106] Similarly, the U.S. District Court for the Northern District of California found that all major provisions of California’s AADC were likely unconstitutional even under a lax commercial-speech standard.[107]

Nonetheless, there are strong arguments that these laws are content-based. As the court in Griffin put it:

Deciding whether Act 689 is content-based or content-neutral turns on the reasons the State gives for adopting the Act. First, the State argues that the more time a minor spends on social media, the more likely it is that the minor will suffer negative mental health outcomes, including depression and anxiety. Second, the State points out that adult sexual predators on social media seek out minors and victimize them in various ways. Therefore, to the State, a law limiting access to social media platforms based on the user’s age would be content-neutral and require only intermediate scrutiny.

On the other hand, the State points to certain speech-related content on social media that it maintains is harmful for children to view. Some of this content is not constitutionally protected speech, while other content, though potentially damaging or distressing, especially to younger minors, is likely protected nonetheless. Examples of this type of speech include depictions and discussions of violence or self-harming, information about dieting, so-called “bullying” speech, or speech targeting a speaker’s physical appearance, race or ethnicity, sexual orientation, or gender. If the State’s purpose is to restrict access to constitutionally protected speech based on the State’s belief that such speech is harmful to minors, then arguably Act 689 would be subject to strict scrutiny.

During the hearing, the State advocated for intermediate scrutiny and framed Act 689 as “a restriction on where minors can be,” emphasizing it was “not a speech restriction” but “a location restriction.” The State’s briefing analogized Act 689 to a restriction on minors entering a bar or a casino. But this analogy is weak. After all, minors have no constitutional right to consume alcohol, and the primary purpose of a bar is to serve alcohol. By contrast, the primary purpose of a social media platform is to engage in speech, and the State stipulated that social media platforms contain vast amounts of constitutionally protected speech for both adults and minors. Furthermore, Act 689 imposes much broader “location restrictions” than a bar does. The Court inquired of the State why minors should be barred from accessing entire social media platforms, even though only some of the content was potentially harmful to them, and the following colloquy ensued:

THE COURT: Well, to pick up on Mr. Allen’s analogy of the mall, I haven’t been to the Northwest Arkansas mall in a while, but it used to be that there was a restaurant inside the mall that had a bar. And so certainly minors could not go sit at the bar and order up a drink, but they could go to the Barnes & Noble bookstore or the clothing store or the athletic store. Again, borrowing Mr. Allen’s analogy, the gatekeeping that Act 689 imposes is at the front door of the mall, not the bar inside the mall; yes?

THE STATE: The state’s position is that the whole mall is a bar, if you want to continue to use the analogy.

THE COURT: The whole mall is a bar?

THE STATE: Correct.

Clearly, the State’s analogy is not persuasive.

NetChoice argues that Act 689 is not a content-neutral restriction on minors’ ability to access particular spaces online, and the fact that there are so many exemptions to the definitions of “social media company” and “social media platform” proves that the State is targeting certain companies based either on a platform’s content or its viewpoint. Indeed, Act 689’s definitions and exemptions do seem to indicate that the State has selected a few platforms for regulation while ignoring all the rest. The fact that the State fails to acknowledge this causes the Court to suspect that the regulation may not be content neutral. “If there is evidence that an impermissible purpose or justification underpins a facially content-neutral restriction, for instance, that restriction may be content-based.” City of Austin v. Reagan Nat’l Advertising of Austin, LLC, 142 S. Ct. 1464, 1475 (2022).[108]

Utah’s HB 311 and SB 152 would also seem to suffer from defects similar to those of KOSA and the AADC,[109] though they have not yet been litigated.

B.     Least-Restrictive Means Is to Promote Monitoring and Filtering

Assuming that courts do, in fact, find that these laws are content-based, strict scrutiny would apply, including the least-restrictive-means test.[110] In that case, the caselaw is clear: the least-restrictive means to achieve the government’s interest in protecting minors from the speech and design harms associated with social media is to promote low-cost monitoring and filtering.

First, however, it is worth inquiring whether the government could establish a compelling state interest at all, as the Court discussed in Brown. The Court’s strong skepticism of government paternalism[111] applies equally to the verifiable-parental-consent laws enacted in Arkansas and Utah, as well as to COPPA 2.0. To use the late Justice Antonin Scalia’s language, aiding parental authority likely fails to “meet a substantial need of parents who wish to restrict their children’s access”[112] to social media but cannot do so. Moreover, the “purported aid to parental authority” is likely to be found “vastly overinclusive,” because “[n]ot all of the children who are forbidden” to join social media on “their own have parents who care whether” they do so.[113] While such laws “may indeed be in support of what some parents of the restricted children actually want, its entire effect is only in support of what the State thinks parents ought to want. This is not the narrow tailoring to ‘assisting parents’ that restriction of First Amendment rights requires.”[114]

As argued above, Ashcroft is strong precedent that promoting the practical and technological means available in the marketplace, as outlined by NetChoice in its complaint in Griffin, is less restrictive than age-verification laws as a way to protect minors from harms associated with social-media usage.[115] In fact, there is a strong argument that the market has since produced both more, and more effective, tools than were available even then. This makes it exceedingly unlikely that the Supreme Court will change its mind.

While some have argued that Justice Clarence Thomas’ dissent in Brown offers a roadmap to reject these precedents,[116] there is little basis for that conclusion. First, Thomas’ dissent in Brown was not joined by any other member of the Supreme Court.[117] Second, Justice Thomas joined the majority in Ashcroft v. ACLU, suggesting he likely still sees age-verification laws as unconstitutional.[118] Moreover, Justice Samuel Alito’s concurrence in Brown[119] expressed skepticism of Justice Thomas’ approach.[120] Third, it seems unlikely that the newer conservative justices, whose jurisprudence has been notably speech-protective,[121] would join Justice Thomas in his view of children’s right to receive speech. And far from being vague on the issue of whether a minor has a right to receive speech,[122] Justice Scalia’s majority opinion clearly stated that:

[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them… but that does not include a free-floating power to restrict the ideas to which children may be exposed.[123]

Precedent is strong against age-verification and parental-consent laws, and there is no reason to think the personnel changes on the Supreme Court would change the analysis.

In sum, straightforward applications of Brown and Ashcroft doom these new social-media laws.

VI.   Conclusion

This issue brief has two main conclusions, one of interest to the scholarship of applying law & economics to constitutional law, and the other to the policy and legal questions surrounding social-media age-verification and parental-consent laws:

  1. The Supreme Court appears to implicitly adopt a Coasean framework in its approach to parental-consent and age-verification laws in the three major precedents of Playboy, Ashcroft, and Brown; and
  2. The application of this least-cost-avoider analysis, particularly within the least-restrictive-means test, is likely to doom these laws not only constitutionally, but also as a matter of economically grounded policy.

In conclusion, these online age-verification laws should be rejected. Why? The answer is transaction costs.

[1] See, e.g., Kirsten Weir, Social Media Brings Benefits and Risks to Teens. Here’s How Psychology Can Help Identify a Path Forward, 54 Monitor on Psychology 46 (Sep. 1, 2023), https://www.apa.org/monitor/2023/09/protecting-teens-on-social-media.

[2] See, e.g., Khara Boender, Jordan Rodell, & Alex Spyropoulos, The State of Affairs: What Happened in Tech Policy During 2023 State Legislative Sessions?, Project Disco (Jul. 25, 2023), https://www.project-disco.org/competition/the-state-of-affairs-state-tech-policy-in-2023 (noting laws passed and proposed addressing children’s online safety at the state level, including California’s Age-Appropriate Design Code and age-verification laws in both Arkansas and Utah, all of which will be considered below).

[3] With apologies to Mike Munger for borrowing the title of his excellent podcast, invoked several times in this issue brief; see The Answer Is Transaction Costs, https://podcasts.apple.com/us/podcast/the-answer-is-transaction-costs/id1687215430 (last accessed Sept. 28, 2023).

[4] Steven G. Medema, “Failure to Appear”: The Use of the Coase Theorem in Judicial Opinions, at 4, Dep’t of Econ. Duke Univ., Working Paper No. 2.1 (2019), available at https://hope.econ.duke.edu/sites/hope.econ.duke.edu/files/Medema%20workshop%20paper.pdf.

[5] Fred R. Shapiro & Michelle Pearse, The Most Cited Law Review Articles of All Time, 110 Mich. L. Rev. 1483, 1489 (2012).

[6] R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[7] See generally Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[8] Todd J. Zywicki & Edward Peter Stringham, Common Law and Economic Efficiency, Geo. Mason Univ. L. & Econ. Rsch., Working Paper No. 10-43 (2010), available at https://www.law.gmu.edu/assets/files/publications/working_papers/1043CommonLawandEconomicEfficiency.pdf.

[9] See id. at 4.

[10] See id. at 3.

[11] See id. at 10.

[12] See id. at 34.

[13] Medema, supra note 4, at 39.

[14] See, e.g., Matti Vuorre & Andrew K. Przybylski, Estimating the Association Between Facebook Adoption and Well-Being in 72 Countries, 10 Royal Soc’y Open Sci. 1 (2023), https://royalsocietypublishing.org/doi/epdf/10.1098/rsos.221451; Sabrina Cipoletta, Clelia Malighetti, Chiara Cenedese, & Andrea Spoto, How Can Adolescents Benefit from the Use of Social Networks? The iGeneration on Instagram, 17 Int. J. Environ. Res. Pub. Health 6952 (2020), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7579040.

[15] See Jean M. Twenge, Thomas E. Joiner, Megan L. Rogers, & Gabrielle N. Martin, Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time, 6 Clinical Psych. Sci. 3 (2018), available at https://courses.engr.illinois.edu/cs565/sp2018/Live1_Depression&ScreenTime.pdf.

[16] Adam Candeub, Age Verification for Social Media: A Constitutional and Reasonable Regulation, FedSoc Blog (Aug. 7, 2023), https://fedsoc.org/commentary/fedsoc-blog/age-verification-for-social-media-a-constitutional-and-reasonable-regulation.

[17] See Wikipedia, List of Alcohol Laws of the United States, https://en.wikipedia.org/wiki/List_of_alcohol_laws_of_the_United_States (last accessed Sep. 28, 2023); Wikipedia, U.S. History of Tobacco Minimum Purchase Age by State, https://en.wikipedia.org/wiki/U.S._history_of_tobacco_minimum_purchase_age_by_state (last accessed Sep. 28, 2023).

[18] See Wikipedia, Driver’s Licenses in the United States, https://en.wikipedia.org/wiki/Driver%27s_licenses_in_the_United_States (last accessed Sep. 28, 2023).

[19] See Wikipedia, Gambling Age, https://en.wikipedia.org/wiki/Gambling_age (last accessed Sep. 28, 2023) (table on minimum age for lottery tickets and casinos by state). As far as this author is aware, every state and territory requires identification demonstrating the buyer is at least 18 years old to make a retail purchase of a pornographic magazine or video.

[20] See 15 U.S.C. § 6501, et seq. (2018); 16 CFR Part 312.

[21] See infra Part III. See Brown v. Ent. Merch. Ass’n, 564 U.S. 786, 794 (2011) (“California does not argue that it is empowered to prohibit selling offensively violent works to adults—and it is wise not to, since that is but a hair’s breadth from the argument rejected in Stevens. Instead, it wishes to create a wholly new category of content-based regulation that is permissible only for speech directed at children. That is unprecedented and mistaken. ‘[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them…’ No doubt a State possesses legitimate power to protect children from harm… but that does not include a free-floating power to restrict the ideas to which children may be exposed. ‘Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.’”) (internal citations omitted).

[22] See infra Part V.

[23] See infra Part IV.

[24] 529 U.S. 803 (2000).

[25] Id. at 806.

[26] See id.

[27] See id. at 806-807.

[28] Id. at 811.

[29] Id. at 813 (internal citation omitted).

[30] Id. at 814.

[31] Id. at 815.

[32] Id. at 816.

[33] See id. at 821 (“[M]arket-based solutions such as programmable televisions, VCR’s, and mapping systems (which display a blue screen when tuned to a scrambled signal) may eliminate signal bleed at the consumer end of the cable.”).

[34] See id. at 823 (“The Government also failed to prove § 504 with adequate notice would be an ineffective alternative to § 505.”).

[35] Id. at 825-826.

[36] 542 U.S. 656 (2004).

[37] Id. at 659.

[38] Id. at 661.

[39] Id. at 662.

[40] Id. at 666.

[41] See id. at 667 (“Filters are less restrictive than COPA. They impose selective restrictions on speech at the receiving end, not universal restrictions at the source. Under a filtering regime, adults without children may gain access to speech they have a right to see without having to identify themselves or provide their credit card information. Even adults with children may obtain access to the same speech on the same terms simply by turning off the filter on their home computers. Above all, promoting the use of filters does not condemn as criminal any category of speech, and so the potential chilling effect is eliminated, or at least much diminished. All of these things are true, moreover, regardless of how broadly or narrowly the definitions in COPA are construed.”).

[42] See id. at 667-669.

[43] See id. at 669-670.

[44] 564 U.S. 786 (2011).

[45] See id. at 787.

[46] Id. at 793-795.

[47] See id. at 794-797.

[48] See id. at 796-799.

[49] See id. at 799-802.

[50] Id. at 801.

[51] Id. at 801-804.

[52] See id. at 812 (Alito, J., concurring):

“There is a critical difference, however, between obscenity laws and laws regulating violence in entertainment. By the time of this Court’s landmark obscenity cases in the 1960’s, obscenity had long been prohibited, See Roth v. U.S., 354 U.S. 476, at 484-485, and this experience had helped to shape certain generally accepted norms concerning expression related to sex.

There is no similar history regarding expression related to violence. As the Court notes, classic literature contains descriptions of great violence, and even children’s stories sometimes depict very violent scenes.

Although our society does not generally regard all depictions of violence as suitable for children or adolescents, the prevalence of violent depictions in children’s literature and entertainment creates numerous opportunities for reasonable people to disagree about which depictions may excite “deviant” or “morbid” impulses. See Edwards & Berman, Regulating Violence on Television, 89 Nw. U.L.Rev. 1487, 1523 (1995) (observing that the Miller test would be difficult to apply to violent expression because “there is nothing even approaching a consensus on low-value violence”).

Finally, the difficulty of ascertaining the community standards incorporated into the California law is compounded by the legislature’s decision to lump all minors together. The California law draws no distinction between young children and adolescents who are nearing the age of majority.”

See also id. at 819 (Alito, J., concurring) (“If the technological characteristics of the sophisticated games that are likely to be available in the near future are combined with the characteristics of the most violent games already marketed, the result will be games that allow troubled teens to experience in an extraordinarily personal and vivid way what it would be like to carry out unspeakable acts of violence.”).

[53] The following sections are adapted from Ben Sperry, Right to Anonymous Speech, Part 3: Anonymous Speech and Age-Verification Laws, Truth on the Market (Sep. 11, 2023), https://truthonthemarket.com/2023/09/11/right-to-anonymous-speech-part-3-anonymous-speech-and-age-verification-laws.

[54] See Ben Sperry, Online Safety Bills Will Mean Kids Are No Longer Seen or Heard Online, The Hill (May 12, 2023), https://thehill.com/opinion/congress-blog/4002535-online-safety-bills-will-mean-kids-are-no-longer-seen-or-heard-online; Ben Sperry, Bills Aimed at ‘Protecting’ Kids Online Throw the Baby out with the Bathwater, The Hill (Jul. 26, 2023), https://thehill.com/opinion/congress-blog/4121324-bills-aimed-at-protecting-kids-online-throw-the-baby-out-with-the-bathwater; Vuorre & Przybylski, supra note 14; Mesfin A. Bekalu, Rachel F. McCloud, & K. Viswanath, Association of Social Media Use With Social Well-Being, Positive Mental Health, and Self-Rated Health: Disentangling Routine Use From Emotional Connection to Use, 42 Sage J. 69S, 69S-80S (2019), https://journals.sagepub.com/doi/full/10.1177/1090198119863768.

[55] See generally Michael Munger, Tomorrow 3.0: Transaction Costs and the Sharing Economy, Cambridge University Press (Mar. 22, 2018).

[56] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.

[57] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[58] Id. at 6-7 (emphasis added).

[59] Id. at 1.

[60] FTC, supra note 56.

[61] Id. at 6 (emphasis added).

[62] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.

[63] See Katie Robertson, Meta Begins Blocking News in Canada, N.Y. Times (Aug. 2, 2023), https://www.nytimes.com/2023/08/02/business/media/meta-news-in-canada.html; Mark Collom, Australia Made a Deal to Keep News on Facebook. Why Couldn’t Canada?, CBC News (Aug. 3, 2023), https://www.cbc.ca/news/world/meta-australia-google-news-canada-1.6925726.

[64] See id.

[65] Free Speech Coal. Inc. v. Colmenero, No. 1:23-CV-917-DAE, 2023 U.S. Dist. LEXIS 154065 (W.D. Tex. 2023), available at https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172751222/gov.uscourts.txwd.1172751222.36.0.pdf.

[66] Id. at 10.

[67] Id.

[68] Id.

[69] Id. at 44.

[70] Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Comput. & Tech. L.J. 26 (2022), https://laweconcenter.org/resources/who-moderates-the-moderators-a-law-economics-approach-to-holding-online-platforms-accountable-without-destroying-the-internet; Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-first-amendment-and-online-intermediary-liability.

[71] See Manne, Stout, & Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, supra note 70; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach; Manne, Sperry, & Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, supra note 70.

[72] See Manne, Sperry, & Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, supra note 70, at 28 (“To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms, provided it can be done so at sufficiently low cost.”); Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, supra note 71.

[73] See NetChoice Complaint, NetChoice LLC v. Griffin, No. 5:23-CV-05105, available at 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.

[74] Id. at para. 13.

[75] See id. at para. 14.

[76] See id.

[77] See id. at para. 15.

[78] See id. at para. 16.

[79] See id.

[80] See id. at paras. 17, 19-21.

[81] See Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 12, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.

[82] See NetChoice Complaint, supra note 73 at para. 18.

[83] Government Response to the Roadmap for Age Verification, Australian Gov’t Dep’t of Infrastructure, Transp., Reg’l Dev., Commc’ns and the Arts (Aug. 2023), available at https://www.infrastructure.gov.au/sites/default/files/documents/government-response-to-the-roadmap-for-age-verification-august2023.pdf.

[84] See Josh Taylor, Australia Will Not Force Adult Websites to Bring in Age Verification Due To Privacy And Security Concerns, The Guardian (Aug. 30, 2023), https://www.theguardian.com/australia-news/2023/aug/31/roadmap-for-age-verification-online-pornographic-material-adult-websites-australia-law.

[85] Government Response to the Roadmap for Age Verification, supra note 83, at 2.

[86] Id. at 6.

[87] See id.

[88] See id. at 6-8.

[89] Supra Part IV.A.

[90] See Children and Teens’ Online Privacy Protection Act, S. 1418, 118th Cong. (2023), as amended Jul. 27, 2023, available at https://www.congress.gov/bill/118th-congress/senate-bill/1418/text (last accessed Oct. 2, 2023). Other similar bills have been proposed as well. See Protecting Kids on Social Media Act, S. 1291, 118th Cong. (2023); Making Age-Verification Technology Uniform, Robust, and Effective Act, S. 419, 118th Cong. (2023); Social Media Child Protection Act, H.R. 821, 118th Cong. (2023).

[91] See David Neumark & Peter Shirley, Myth or Measurement: What Does the New Minimum Wage Research Say About Minimum Wages and Job Loss in the United States? (Nat’l Bur. Econ. Res. Working Paper 28388, Mar. 2022), available at https://www.nber.org/papers/w28388 (concluding that “(i) there is a clear preponderance of negative estimates in the literature; (ii) this evidence is stronger for teens and young adults as well as the less-educated; (iii) the evidence from studies of directly-affected workers points even more strongly to negative employment effects; and (iv) the evidence from studies of low-wage industries is less one-sided.”).

[92] See Lisa Sturtevant, The Impacts of Rent Control: A Research Review and Synthesis, at 6-7, Nat’l Multifamily Hous. Council Res. Found. (May 2018), available at https://www.nmhc.org/globalassets/knowledge-library/rent-control-literature-review-final2.pdf (“1. Rent control and rent stabilization policies do a poor job at targeting benefits. While some low-income families do benefit from rent control, so, too, do higher-income households. There are more efficient and effective ways to provide assistance to lower-income individuals and families who have trouble finding housing they can afford. 2. Residents of rent-controlled units move less often than do residents of uncontrolled housing units, which can mean that rent control causes renters to continue to live in units that are too small, too large or not in the right locations to best meet their housing needs. 3. Rent-controlled buildings potentially can suffer from deterioration or lack of investment, but the risk is minimized when there are effective local requirements and/or incentives for building maintenance and improvements. 4. Rent control and rent stabilization laws lead to a reduction in the available supply of rental housing in a community, particularly through the conversion to ownership of controlled buildings. 5. Rent control policies can hold rents of controlled units at lower levels but not under all circumstances. 6. Rent control policies generally lead to higher rents in the uncontrolled market, with rents sometimes substantially higher than would be expected without rent control. 7. There are significant fiscal costs associated with implementing a rent control program.”).

[93] See Candeub, supra note 16.

[94] Colmenero, supra note 65, at 22.

[95] See Kids Online Safety Act, S. 1409, 118th Cong. (2023), as amended and posted by the Senate Committee on Commerce, Science, and Transportation on July 27, 2023, available at https://www.congress.gov/bill/118th-congress/senate-bill/1409/text#toc-id6fefcf1d-a1ae-4949-a826-23c1e1b1ef26 (last accessed Oct. 2, 2023).

[96] See id. at Section 3.

[97] Cf. Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1930-31 (2019):

[M]erely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints…

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.

[98] See Counterman v. Colorado, 600 U.S. 66 (2023); Ben Sperry (@RBenSperry), Twitter (June 28, 2023, 4:46 PM), https://twitter.com/RBenSperry/status/1674157227387547648.

[99] Cf. Høeg v. Newsom, 2023 WL 414258 (E.D. Cal. Jan. 25, 2023); Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, supra note 70.

[100] California Age-Appropriate Design Code Act, AB 2273 (2022), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB2273.

[101] See id. at § 1798.99.32(d)(1), (2), (4).

[102] See Candeub, supra note 16.

[103] NetChoice, LLC v. Griffin, Case No. 5:23-CV-05105 at 25 (Aug. 31, 2023), slip op., available at https://netchoice.org/wp-content/uploads/2023/08/GRIFFIN-NETCHOICE-GRANTED.pdf.

[104] Id.

[105] Id. at 38 (“Having considered both sides’ positions on the level of constitutional scrutiny to be applied, the Court tends to agree with NetChoice that the restrictions in Act 689 are subject to strict scrutiny. However, the Court will not reach that conclusion definitively at this early stage in the proceedings and instead will apply intermediate scrutiny, as the State suggests.”).

[106] Id. at 48 (“In sum, NetChoice is likely to succeed on the merits of the First Amendment claim it raises on behalf of Arkansas users of member platforms. The State’s solution to the very real problems associated with minors’ time spent online and access to harmful content on social media is not narrowly tailored. Act 689 is likely to unduly burden adult and minor access to constitutionally protected speech. If the legislature’s goal in passing Act 689 was to protect minors from materials or interactions that could harm them online, there is no compelling evidence that the Act will be effective in achieving those goals.”).

[107] See NetChoice v. Bonta, Case No. 22-cv-08861-BLF (N.D. Cal. Sept. 18, 2023), slip op., available at https://netchoice.org/wp-content/uploads/2023/09/NETCHOICE-v-BONTA-PRELIMINARY-INJUNCTION-GRANTED.pdf; Ben Sperry, What Does NetChoice v. Bonta Mean for KOSA and Other Attempts to Protect Children Online?, Truth on the Market (Sep. 29, 2023), https://truthonthemarket.com/2023/09/29/what-does-netchoice-v-bonta-mean-for-kosa-and-other-attempts-to-protect-children-online.

[108] Id. at 36-38.

[109] See Carl Szabo, NetChoice Sends Veto Request to Utah Gov. Spencer Cox on HB 311 and SB 152, NetChoice (Mar. 3, 2023),  https://netchoice.org/netchoice-sends-veto-request-to-utah-gov-spencer-cox-on-hb-311-and-sb-153.

[110] See, e.g., Sable Commc’ns v. FCC, 492 U.S. 115, 126 (1989) (“The Government may, however, regulate the content of constitutionally protected speech in order to promote a compelling interest if it chooses the least restrictive means to further the articulated interest.”).

[111] Brown, 564 U.S. at 801 (“California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.”).

[112] Brown, 564 U.S. at 801.

[113] Id. at 803.

[114] Id.

[115] See supra Part IV.B.

[116] See Clare Morrell, Adam Candeub, & Michael Toscano, No, Big Tech Doesn’t Have A Right To Speak To Kids Without Their Parent’s Consent, The Federalist (Sept. 21, 2023), https://thefederalist.com/2023/09/21/no-big-tech-doesnt-have-a-right-to-speak-to-kids-without-their-parents-consent (noting that Justice Clarence Thomas wrote in his Brown dissent that “the ‘freedom of speech,’ as originally understood, does not include a right to speak to minors (or a right of minors to access speech) without going through the minors’ parents or guardians”).

[117] Brown, 564 U.S. at 821.

[118] Id. at 822.

[119] Id. at 805.

[120] Id. at 813.

[121] See, e.g., Ben Sperry, There’s Nothing ‘Conservative’ About Trump’s Views on Free Speech and the Regulation of Social Media, Truth on the Market (Jul. 12, 2019), https://truthonthemarket.com/2019/07/12/theres-nothing-conservative-about-trumps-views-on-free-speech (noting Kavanaugh’s majority opinion in Halleck on compelled speech included all the conservative justices; at the time, he and Gorsuch were relatively new Trump appointees). Justice Amy Coney Barrett also joined the majority opinion in 303 Creative LLC v. Elenis, 600 U.S. 570 (2023), written by Gorsuch and joined by all the conservatives, which found public-accommodations laws are subject to strict scrutiny if they implicate expressive activity.

[122] Clare Morell (@ClareMorellEPPC), Twitter (Sept. 7, 2023, 8:27 PM), https://twitter.com/ClareMorellEPPC/status/1699942446711357731.

[123] Brown, 564 U.S. at 794-95.
