Publications by Ben Sperry

ICLE/ITIF Amicus Brief Urges Court to Set Aside FCC’s Digital-Discrimination Rules

TOTM

The Federal Communications Commission (FCC) recently adopted sweeping new rules designed to prevent so-called “digital discrimination” in the deployment, access, and adoption of broadband internet services. But an amicus brief filed by the International Center for Law & Economics (ICLE) and the Information Technology & Innovation Foundation (ITIF) with the 8th U.S. Circuit Court of Appeals argues that the rules go far beyond what Congress authorized.

It appears to us quite likely the court will vacate the new rules, because they exceed the authority Congress granted the FCC and undermine the very broadband investment and deployment that Congress wanted to encourage. In effect, the rules would set the FCC up as a central planner of all things broadband-related. In combination with the commission’s recent reclassification of broadband as a Title II service, the FCC has stretched its authority far beyond the breaking point.

Read the full piece here.

Telecommunications & Regulated Utilities

Brief of ICLE and ITIF to 8th Circuit in Minnesota Telecom Alliance v FCC

Amicus Brief

STATEMENTS OF INTEREST

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center that builds intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating law and policy.

ICLE scholars have written extensively in the areas of telecommunications and broadband policy. This includes white papers, law journal articles, and amicus briefs touching on issues related to the provision and regulation of broadband Internet service.

The FCC’s final rule concerning “digital discrimination,” adopted by Report and Order and published in the Federal Register on January 22, 2024 (the Order), constitutes a significant change in economic policy. Broadband alone is a $112 billion industry with over 125 million customers. If permitted to stand, the FCC’s broad Order will harm the dynamic marketplace for broadband that presently exists in the United States. ICLE urges the Court to vacate the Order and instead require the FCC to adopt rules limited to preventing intentional discrimination in deployment by broadband Internet access service providers.

The Information Technology and Innovation Foundation (“ITIF”) is an independent non-profit, non-partisan think tank. ITIF’s mission is to formulate, evaluate, and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. To that end, ITIF strives to provide policymakers around the world with high-quality information, analysis, and recommendations they can trust. ITIF adheres to the highest standards of research integrity, guided by an internal code of ethics grounded in analytical rigor, policy pragmatism, and independence from external direction or bias.

ITIF’s mission is to advance public policies that accelerate the progress of technological innovation. ITIF believes that innovation can almost always be a force for good. It is the major driver of human advancement and the essential means for improving societal welfare. A robust rate of innovation makes it possible to achieve many other goals—including increases in median per-capita income, improved health, transportation mobility, and a cleaner environment. In pursuing this goal, ITIF does not hew to a fixed set of ideas; rather, ITIF strives for objective and rational analysis that is guided by critical thinking and a set of core values. ITIF engages in policy and legal debates, both directly and indirectly, by presenting policymakers, courts, and other policy influencers with compelling data, analysis, arguments, and proposals to advance effective innovation policies and oppose counterproductive ones.

The FCC’s Order will have a significant impact on the speed and adoption of technological innovation in the United States. The Order not only raises the cost of deployment investments, but it also increases the risk of liability for discrimination, thereby increasing the uncertainty of the investments’ returns. As a result, the Order will not only stifle new deployment to unserved areas, but also will delay network upgrades and maintenance out of fear of alleged disparate effects. For these reasons, ITIF urges the Court to set aside the FCC’s Order.

Pursuant to Federal Rule of Appellate Procedure 29(a)(2), ICLE and ITIF have obtained consent of the parties to file the instant Brief of the International Center for Law & Economics and the Information Technology and Innovation Foundation as Amici Curiae In Support of Petitioners.

INTRODUCTION AND SUMMARY OF ARGUMENT

The present marketplace for broadband in the United States is dynamic and generally serves consumers well. See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, A Dynamic Analysis of Broadband Competition: What Concentration Numbers Fail to Capture (ICLE White Paper, Jun. 2021), https://laweconcenter.org/wp-content/uploads/2021/06/A-Dynamic-Analysis-of-Broadband-Competition.pdf. Broadband providers acting in the marketplace have invested $2.1 trillion in building, maintaining, and improving their networks since 1996, including $102.4 billion in 2022 alone. See USTelecom, 2022 Broadband Capex Report (Sept. 8, 2023), https://www.ustelecom.org/research/2022-broadband-capex. The FCC’s own data suggests that 91% of Americans have access to high-speed broadband under its new and faster definition. See 2024 706 Report, FCC 24-27, GN Docket No. 22-270, at paras. 20, 22 (Mar. 18, 2024).

Despite this, there are areas in the country, primarily due to low population density, where serving consumers is prohibitively expensive. Moreover, affordability remains a concern for some lower-income groups. To address these concerns, Congress passed the Infrastructure Investment and Jobs Act (IIJA), Pub. L. No. 117-58, 135 Stat. 429, which invested $42.45 billion in building out broadband to rural areas through the Broadband Equity, Access, and Deployment (BEAD) Program, and billions more in the Affordable Connectivity Program (ACP), which provided low-income individuals a $30 per month voucher. Congress’s passage of the IIJA was consistent with sustaining the free and dynamic market for broadband.

In addition, to address concerns that broadband providers could engage in discriminatory behavior in deployment decisions, Section 60506(b) of the IIJA requires that “[n]ot later than 2 years after November 15, 2021, the Commission shall adopt final rules to facilitate equal access to broadband internet access services, taking into account the issues of technical and economic feasibility presented by that objective, including… preventing digital discrimination of access based on income level, race, ethnicity, color, religion, or national origin.” Pub. L. No. 117-58, § 60506(b)(1), 135 Stat. 429, 1246.

The FCC adopted the final rule by Report and Order, which was published in the Federal Register on January 22, 2024. See 89 Fed. Reg. 4128 (Jan. 22, 2024) [hereinafter “Order”], attached as the Addendum to Petitioners’ Brief (“Pet. Add.”). But the digital discrimination rule issued in this Order is inconsistent with the IIJA, is so expansive as to claim regulatory authority over major political and economic questions, and is arbitrary and capricious. As a result, this Court must vacate it.

The FCC could have issued a final rule consistent with the statute and the dynamic broadband marketplace. Such a rule would have recognized that the limited purpose of the statute was to outlaw intentional discrimination by broadband providers in deployment decisions, i.e., decisions that treat a person or group of persons less favorably than others because of a listed protected trait. This rule would be workable, leaving the FCC to focus its attention on cases where broadband providers fail to invest in deploying networks due to animus against those groups.

Instead, the FCC chose to create an expansive regulatory scheme that gives it essentially unlimited discretion over anything that would affect the adoption of broadband. It did this by adopting a differential impact standard that applies not only to broadband providers, but to any entity that could “otherwise affect consumer access to broadband internet access service,” see 47 CFR §16.2 (definition of “Covered entity”), and that includes price among the “comparable terms and conditions” subject to review. See Pet. Add. 59, Order at para. 111 (“Indeed, pricing is often the most important term that consumers consider when purchasing goods and services… this is no less true with respect to broadband internet access services.”). Taken together, these departures from the text of Section 60506 would give the FCC nearly unlimited authority over broadband providers, and a great deal of authority even over other entities that can affect broadband access.

To interpret Section 60506 to encompass a “differential impact” standard, as the agency has done here, leads to a situation in which covered entities that have no intent to discriminate, or that even take active measures to help protected classes, could still be found in violation of the rules. Cf. 47 CFR §16.4(b) (“A discriminatory effect occurs when a facially neutral policy or practice differentially impacts consumers’ access to covered services or covered elements of service.”). This standard opens nearly everything to FCC review, because factors relevant to profit-maximizing decisions, which the statute does not cover, are often correlated with the protected characteristics that it does cover.

Income level, race, ethnicity, color, religion, and national origin are often incidentally associated with some other non-protected factor important for investment decisions. Specifically, population density is widely recognized as one of the determinants of expected profitability for broadband deployment. See Eric Fruits & Kristian Stout, The Income Conundrum: Intent and Effects Analysis of Digital Discrimination (ICLE Issue Brief 2022-11-14), available at https://laweconcenter.org/wp-content/uploads/2022/11/The-Income-Conundrum-Intent-and-Effects-Analysis-of-Digital-Discrimination.pdf (citing U.S. Gov’t Accountability Office, GAO-06-426, Telecommunications: Broadband Deployment Is Extensive Throughout the United States, but It Is Difficult to Assess the Extent of Deployment Gaps in Rural Areas 19 (2006) (population density is the “most frequently cited cost factor affecting broadband deployment” and “a critical determinant of companies’ deployment decisions”)). But population density is also correlated with income level, with higher density associated with higher incomes. See Daniel Hummel, The Effects of Population and Housing Density in Urban Areas on Income in the United States, 35 Loc. Econ. 27 (2020) (showing a statistically significant positive relationship between income and both population and housing density). Higher population density is also correlated with greater racial, ethnic, religious, and national origin diversity. See, e.g., Barrett A. Lee & Gregory Sharp, Diversity Across the Rural-Urban Continuum, 672 Annals Am. Acad. Pol. & Soc. Sci. 26 (2017).

Consider a hypothetical provider who eschews discrimination against any of the protected traits in its deployment practices by prioritizing its investments solely on population density, deploying to high-density areas first and to lower-density areas later. If higher-density areas are also areas with higher incomes, then it would be relatively easy to produce a statistical analysis showing that lower-income areas are associated with lower rates of deployment. This finding can be used to support a complaint against the provider alleging income discrimination under the Order’s disparate impact standard, thereby triggering an FCC investigation. Similarly, because of the relationships between population density and race, ethnicity, color, religion, and national origin, it would be relatively easy to produce a statistical analysis showing disparate impacts across these protected traits.

Because population density is correlated with each of the protected traits, applying an effects-based statistical analysis is likely to produce a false positive, concluding that digital discrimination is present even when there was none. With so many possible spurious correlations, it is almost impossible for any covered entity to know with any certainty whether its policies or practices could be actionable for differential impacts. Nobel laureate Ronald Coase is reported to have said, “If you torture the data long enough, it will confess.” Garson O’Toole, If You Torture the Data Long Enough, It Will Confess, Quote Investigator (Jan. 18, 2021), https://quoteinvestigator.com/2021/01/18/confess. The FCC’s Order amounts to an open invitation to torture the data.

While it is possible that the FCC could determine that the costs of deployment due to population density or another profit-relevant reason go to “technical or economic feasibility,” the burden to prove infeasibility is on the covered entity, under a preponderance-of-the-evidence standard. See 47 CFR §16.5(c)-(d). This may include “proof that available, less discriminatory alternatives were not reasonably achievable.” See 47 CFR §16.5(c). In its case-by-case review process, there is no guarantee that the Commission will agree that “technical or economic feasibility” warrants an exception in any given dispute. See 47 CFR §16.5(e). This rule will put a great deal of pressure on covered entities to avoid possible litigation by getting all plans pre-approved by the FCC through its advisory opinion authority. See 47 CFR §16.7. This sets up the FCC to be a central planner for nearly everything related to broadband, from deployment to policies and practices that affect even adoption itself, including the price of the service. This is inconsistent with preserving the ability of businesses to make “practical business choices and profit-related decisions that sustain a vibrant and dynamic free-enterprise system.” Texas Dep’t of Hous. & Cmty. Affs. v. Inclusive Communities Project, Inc., 576 U.S. 519, 533 (2015). The Order will thus dampen investment incentives because “the specter of disparate-impact litigation” will cause private broadband providers to “no longer construct or renovate” their networks, leading to a situation where the FCC’s rule “undermines its own purpose” under the IIJA “as well as the free market system.” Id. at 544.

Argument

The FCC’s Order is unlawful. First, the Order’s interpretation of Section 60506 is inconsistent with the structure of the IIJA. Second, the Order is inconsistent with the clear meaning of Section 60506. Third, the Order raises major questions of political and economic significance by giving the FCC nearly unlimited authority over broadband deployment decisions, including price. Fourth, the Order is arbitrary and capricious because the rule it adopts will predictably reduce the investment incentives of broadband providers to deploy and improve broadband service, which is inconsistent with the purpose of the IIJA. Finally, the Order’s vagueness leaves a person of ordinary intelligence no ability to know whether they are subject to the law and thus gives the FCC the ability to engage in arbitrary and discriminatory enforcement.

I. The Order’s Interpretation of Section 60506 is Inconsistent with the Structure of the IIJA

“It is a fundamental canon of statutory construction that the words of a statute must be read in their context and with a view to their place in the overall statutory scheme.” Davis v. Michigan Dept. of Treasury, 489 U.S. 803, 809 (1989). The structure of the IIJA as a whole, as well as the fact that Section 60506, in particular, was not placed within the larger Communications Act (47 U.S.C. §151 et seq.) that gives the FCC authority, suggests that the Order claims authority far beyond what Congress has granted the FCC.

The IIJA divides broadband policy priorities among different agencies and circumscribes the scope of each program or rulemaking it delegates to them. Section 60102 addressed the issue of universal broadband deployment by creating the Broadband Equity, Access, and Deployment (BEAD) Program. See IIJA §60102. The statute designated the National Telecommunications and Information Administration (NTIA) to administer this $42.45 billion program, with funds to be allocated first to deploy broadband service to all areas that currently lack access to high-speed broadband Internet. See IIJA §60102(b), (h). BEAD is, therefore, Congress’s chosen method to remedy disparities in broadband deployment due to cost-based barriers like low population density. Section 60502 then created the Affordable Connectivity Program (ACP), which provided low-income individuals a $30 per month voucher, and delegated its administration to the FCC. See IIJA §60502. ACP is, therefore, Congress’s chosen method to remedy broadband affordability for households whose low income is a barrier to broadband adoption. Title V of Division F of the IIJA goes on to create several more broadband programs, each with a specific and limited scope. See IIJA § 60101 et seq.

In short, Congress was intentional about circumscribing the different problems with broadband deployment and access, as well as the scope of the programs it designed to fix them. Section 60506’s authorization for the FCC to prevent “digital discrimination” fits neatly into this statutory scheme if it targets disparate treatment in deployment decisions based upon protected status—i.e., intentional harmful actions that are distinct from deployment decisions based on costs of deployment or projected demand for broadband service. But the FCC’s Order vastly exceeds this statutory scope and claims authority over virtually every aspect of the broadband marketplace, including deployment decisions driven by costs generally and by the potential market for networks once deployed. Indeed, the FCC envisions scenarios in which its rules conflict with other federal funding programs but nevertheless says that compliance with those programs is no safe harbor from liability for disparate impacts that compliance creates. See Pet. Add. 69-70, Order at para. 142. The Order thus dramatically exceeds the boundaries Congress set in Section 60506. Congress cannot have meant for Section 60506 to remedy all deployment disparities or all issues of affordability because it created BEAD and ACP for those purposes.

Moreover, Section 60506 was not incorporated into the Communications Act, unlike other parts of the IIJA. In other words, the FCC’s general enforcement authority does not apply to the regulatory scheme of Section 60506. The IIJA was not meant to give the FCC vast authority over broadband deployment and adoption by implication. The FCC must rely on Section 60506 alone for any authority it was given to combat digital discrimination.

II. The Order is Inconsistent with the Clear Meaning of the Text of Section 60506

The text of Section 60506 plainly shows that Congress intended to combat digital discrimination through circumscribed rules aimed at preventing intentional discrimination in deployment decisions by broadband providers. The statute starts with a statement of policy in part (a) and then gives the Commission direction to fulfill that purpose in parts (b) and (c).

The statement of policy in Section 60506(a) is exactly that: a statement of policy. Courts have long held that statutory sections like Section 60506(a)(1) and (a)(3) using words like “should” are “precatory.” See Emergency Coal. to Def. Educ. Travel v. U.S. Dep’t of Treasury, 498 F. Supp. 2d 150, 165 (D.D.C. 2007) (“Courts have repeatedly held that such ‘sense of Congress’ language is merely precatory and non-binding.”), aff’d, 545 F.3d 4 (D.C. Cir. 2008). While the statement of policy helps illuminate the goal of the provision at issue, it does not actually give the FCC authority. The goal of the statute is clear: to make sure the Commission prevents intentional discrimination in deployment decisions. For instance, Section 60506(c) empowers the Commission (and the Attorney General) to ensure that federal policies promote equal access by prohibiting intentional deployment discrimination. See Section 60506(c) (“The Commission and the Attorney General shall ensure that Federal policies promote equal access to robust broadband internet access service by prohibiting deployment discrimination…”). Moreover, the definition of equal access as “equal opportunity to subscribe,” see 47 U.S.C. §1754(a)(2), does not imply a disparate impact analysis. See Brnovich v. Democratic Nat’l Comm., 141 S. Ct. 2321, 2339 (2021) (“[T]he mere fact there is some disparity in impact does not necessarily mean… that it does not give everyone an equal opportunity.”).

There is no evidence that the IIJA’s drafters intended the law to be read as broadly as the Commission has done in its rules. The legislative record on Section 60506 is exceedingly sparse, containing almost no discussion of the provision beyond assertions that “broadband ought to be available to all Americans,” 167 Cong. Rec. 6046 (2021), and that the IIJA was not to be used as a basis for the “regulation of internet rates.” 167 Cong. Rec. 6053 (2021). The FCC argues that since “there is little evidence in the legislative history… that impediments to broadband internet access service are the result of intentional discrimination,” Congress must have desired a disparate impact standard. See Pet. Add. 25, Order at para. 47. But the limited nature of the problem suggests a limited solution in the form of a framework aimed at preventing such discrimination. Given the sparse evidence on legislative intent, Section 60506 should be read as granting a limited authority to the Commission.

With Section 60506(b), Congress gave the Commission a set of tools to identify and remedy acts of intentional discrimination by broadband providers in deployment decisions. As we explain below, under both the text of Section 60506 and the Supreme Court’s established jurisprudence, the Commission was not empowered to employ a disparate-impact (or “differential impact”) analysis under its digital discrimination rules.

Among the primary justifications for disparate-impact analysis is the need to remedy historical patterns of de jure segregation that left an indelible mark on minority communities. See Inclusive Communities, 576 U.S. at 528-29. While racial discrimination has not been purged from society, broadband only became prominent in the United States well after all forms of de jure segregation were made illegal, and after Congress and the courts had invested decades in rooting out impermissible de facto discrimination. In enacting its rules that give it presumptive authority over nearly all decisions related to broadband deployment and adoption, the FCC failed to adequately take this history into account.

Beyond the policy questions, however, Section 60506 cannot be reasonably construed as authorizing disparate-impact analysis. While the Supreme Court has allowed disparate-impact analysis in the context of civil-rights law, it has imposed some important limitations. To find disparate impact, the statute must be explicitly directed “to the consequences of an action rather than the actor’s intent.” Inclusive Communities, 576 U.S. at 534. There, the Fair Housing Act made it unlawful:

To refuse to sell or rent after the making of a bona fide offer, or to refuse to negotiate for the sale or rental of, or otherwise make unavailable or deny, a dwelling to any person because of race, color, religion, sex, familial status, or national origin.

42 U.S.C. §3604(a) (emphasis added). The Court noted that the presence of language like “otherwise make unavailable” is critical to construing a statute as demanding an effects-based analysis. Inclusive Communities, 576 U.S. at 534. Such phrases, the Court found, “refer[] to the consequences of an action rather than the actor’s intent.” Id. Further, the structure of a statute’s language matters:

The relevant statutory phrases… play an identical role in the structure common to all three statutes: Located at the end of lengthy sentences that begin with prohibitions on disparate treatment, they serve as catchall phrases looking to consequences, not intent. And all [of these] statutes use the word “otherwise” to introduce the results-oriented phrase. “Otherwise” means “in a different way or manner,” thus signaling a shift in emphasis from an actor’s intent to the consequences of his actions.

Id. at 534-35.

Previous Court opinions help parse the distinction between statutes limited to intentional discrimination claims and those that allow for disparate impact claims. Particularly relevant here, the Court looked at language from Section 601 of the Civil Rights Act stating that “[n]o person in the United States shall, on the ground of race, color, or national origin, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving Federal financial assistance,” 42 U.S.C. §2000d (emphasis added), and found it “beyond dispute—and no party disagrees—that [it] prohibits only intentional discrimination.”  Alexander v. Sandoval, 532 U.S. 275, 280 (2001).

Here, the language of Section 60506 (“based on”) mirrors the language of Section 601 of the Civil Rights Act (“on the ground of”). Moreover, this reading is consistent with the reasoning of Inclusive Communities about when a statute allows for disparate impact analysis. Inclusive Communities primarily based its opinion on the “otherwise make unavailable” language at issue, with a particular focus on “otherwise” creating a more open-ended inquiry. See Inclusive Communities, 576 U.S. at 534 (“Here, the phrase ‘otherwise make unavailable’ is of central importance to the analysis that follows”). Such language is absent in Section 60506. Moreover, the closest analogy for Section 60506’s “based on” language is the “on the ground of” language of Title VI of the Civil Rights Act, which also does not include the “otherwise” language found to be so important in Inclusive Communities. Compare 42 U.S.C. §2000d with Inclusive Communities, 576 U.S. at 534-35 (focusing on how “otherwise” is a catch-all phrase looking to consequences instead of intent). If the Court has found that “on the ground of” means only intentional discrimination, then it is hard to see how “based on” would not lead to the same conclusion.

Thus, because Section 60506 was drafted without “results-oriented language” and instead frames the prohibition against digital discrimination as “based on income level, race, ethnicity, color, religion, or national origin,” the rule falls squarely within the realm of prohibitions on intentional discrimination. That is, to be discriminatory, the decision to deploy or not to deploy must have been intentionally made based on or grounded on the protected characteristic. Mere statistical correlation between deployment and protected characteristics is insufficient.

In enacting the IIJA, Congress was undoubtedly aware of the Court’s history with disparate-impact analysis. Had it chosen to do so, it could have made the requirements of Section 60506 align with the requirements of that precedent. But it chose not to do so.

III. Congress Did Not Clearly Authorize the FCC to Decide a Major Question in this Order

To read Section 60506 of the IIJA as broadly as the FCC does in the Order invites a challenge under the major-questions doctrine. There are “extraordinary cases” where the “history and the breadth of the authority” that an agency asserts and the “economic and political significance” of that asserted authority provide “reason to hesitate before concluding that Congress” meant to confer such authority. See West Virginia v. EPA, 597 U.S. 697, 721 (2022) (quoting FDA v. Brown & Williamson, 529 U.S. 120, 159-60 (2000)). In such cases, “something more than a merely plausible textual basis for agency action is necessary. The agency instead must point to ‘clear congressional authorization’ for the power it claims.” Id. at 723 (quoting Utility Air Regulatory Group v. EPA, 573 U.S. 302, 324 (2014)).

Here, the FCC has claimed dramatic new powers over the deployment of broadband Internet access, and it has exercised that alleged authority to create a process for inquiry into generalized civil rights claims. Such a system is as unprecedented as it is important to the political and economic environment of the country. The FCC itself implicitly recognizes this fact when it emphasizes the critical importance of Internet access as necessary “to meet basic needs.” Broadband alone is a $112 billion industry with over 125 million customers. See The History of US Broadband, S&P Global (last accessed May 11, 2023), https://www.spglobal.com/marketintelligence/en/news-insights/research/the-history-of-us-broadband. And that does not even account for all the other entities covered by this Order, which reaches anyone who could “otherwise affect consumer access to broadband internet access service.” See 47 CFR §16.2. There is, therefore, no doubt that the Order is of great economic and political significance.

This would pose no problem if the statute clearly delegated such power to the FCC. But the only potential source of authority for the Order is Section 60506. Since the text of Section 60506 can be (and is better) read as not giving the FCC such authority, it simply cannot be an unambiguous delegation of authority.

As argued above, Congress knows how to write a disparate-impact statute in light of Supreme Court jurisprudence. Put simply, Congress did not write a disparate-impact statute here because there is no catch-all language comparable to what the Supreme Court has pointed to in statutes like the FHA. Cf. Inclusive Communities, 576 U.S. at 533 (finding a statute includes disparate-impact liability when the “text refers to the consequences of actions and not just the mindset of actors”). At best, Section 60506 is ambiguous in giving the authority to the FCC to use disparate impact analysis. That is simply not enough when regulating an area of great economic and political significance.

In addition to the major question of whether the FCC may enact its vast disparate impact apparatus, the FCC claims vast authority over the economically and politically significant arena of broadband rates despite no clear authorization to do so in Section 60506. In fact, the legislative record shows that Congress explicitly wanted to avoid the possibility that the IIJA would be used as the basis for the “regulation of internet rates.” 167 Cong. Rec. 6053 (2021). The FCC disclaims the authority to engage in rate regulation, but it does claim authority for “ensuring pricing consistency.” See Pet. Add. 56-57, Order at para. 105. While the act of assessing the comparability of prices is not rate regulation in the sense that the Communications Act contemplates, a policy that holds entities liable for those disparities, such that an ISP must adjust its prices until they match an FCC definition of “comparable,” is tantamount to setting those rates. See Eric Fruits & Geoffrey Manne, Quack Attack: De Facto Rate Regulation in Telecommunications (ICLE Issue Brief 2023-03-30), available at https://laweconcenter.org/wp-content/uploads/2023/03/De-Facto-Rate-Reg-Final-1.pdf (describing how the FCC often engages in rate regulation in practice even when it doesn’t call it that).

Furthermore, the Order could also allow the FCC to use the rule to demand higher service quality under the “comparable terms and conditions” language, even if consumers may prefer lower speeds for less money. That increased quality comes at a cost that will necessarily increase the market price of broadband. In this way, the Order would allow the FCC to set a price floor even if it never explicitly requires ISPs to submit their rates for approval.

The elephant of rate regulation is not hiding in the mousehole of Section 60506. Cf. Whitman v. American Trucking Assns., Inc., 531 U.S. 457, 468 (2001). Indeed, the FCC itself forswears rate regulation in an ongoing proceeding in which the relevant statute would clearly authorize it. See Safeguarding and Securing the Open Internet, 88 Fed. Reg. 76048 (proposed Nov. 3, 2023) (to be codified at 47 CFR pts. 8, 20). In that proceeding, the FCC recognized that rate regulation is inappropriate for the broadband marketplace and declined to apply it. Even here, the FCC has denied that including pricing within the scope of the rules is “an attempt to institute rate regulation.” See Pet. Add. 59, Order at para. 111. But despite its denials, the FCC’s claim of authority would allow it to regulate prices even though nothing in Section 60506 grants it authority to do so. The FCC should not be able to recognize a politically significant consensus against rate regulation one minute and then smuggle that disfavored policy in through a statute that never mentions it the next.

Finally, as noted above, since many of the protected characteristics, but especially income, can be correlated with many factors relevant to profitability, it should be no surprise that almost any policy or practice of a covered entity under the Order could be subject to FCC enforcement. And since there is no guarantee that the FCC would agree in a particular case that technical or economic feasibility justifies a particular policy or practice, nearly everything a broadband provider or other covered entity does would likely need pre-approval under the FCC’s advisory opinion process. This would essentially make the FCC a central planner of everything related to broadband. In other words, the FCC has claimed, without any clear authorization, authority far beyond what Congress could have imagined.

IV. The Order is Arbitrary and Capricious Because it will Produce Results Inconsistent with the Purpose of the Statute

As noted above, the purposes of the broadband provisions of the IIJA are to encourage broadband deployment, enhance broadband affordability, and prevent discrimination in broadband access. Put simply, the purpose is to get more Americans to adopt more broadband, regardless of income level, race, ethnicity, color, religion, or national origin. The FCC’s Order should curtail discrimination, but the aggressive and expansive police powers the agency grants itself will surely diminish investments in broadband deployment and efforts to encourage adoption. We urge the Court to vacate the Order and require the FCC to adopt rules limited to preventing intentional discrimination in deployment by broadband Internet access service providers. More narrowly tailored rules would satisfy Section 60506’s mandates while preserving incentives to invest in deployment and encourage adoption. Cf. Cin. Bell Tel. Co. v. FCC, 69 F.3d 752, 761 (6th Cir. 1995) (“The FCC is required to give [a reasoned] explanation when it declines to adopt less restrictive measures in promulgating its rules.”). But the current Order is arbitrary and capricious because the predictable results of the rules would be inconsistent with the purpose of the IIJA in promoting broadband deployment. See Motor Vehicle Mfrs. Ass’n v. State Farm Mutual Auto. Ins. Co., 463 U.S. 29, 43 (1983) (“[A]n agency rule would be arbitrary and capricious if the agency has… offered an explanation for its decision that runs counter to the evidence before the agency, or is so implausible that it could not be ascribed to a difference in view or the product of agency expertise”).

The Order spans nearly every aspect of broadband deployment, including, but not limited to, network infrastructure deployment, network reliability, network upgrades, and network maintenance. Pet. Add. 58, Order ¶ 108. In addition, the Order covers a wide range of policies and practices that, while not directly related to deployment, affect the profitability of deployment investments, such as pricing, discounts, credit checks, marketing or advertising, service suspension, and account termination. Pet. Add. 58, Order ¶ 108.

Like all firms, broadband providers have limited resources with which to make their investments. While profitability (i.e., economic feasibility) is a necessary precondition for investment, not all profitable investments can be undertaken. Among the universe of economically feasible projects, firms are likely to give priority to those that promise greater returns on investment relative to those with lower returns. Returns on investment in broadband depend on several factors. Population density, terrain, regulations, and taxes are all important cost factors, while a given consumer population’s willingness to adopt and pay for broadband is a key demand-related factor. Anything that raises the expected cost of deployment or reduces the demand for service can turn a profitable investment into an unprofitable prospect or downgrade its priority relative to other investment opportunities.

The Order not only raises the cost of deployment investments, but it also increases the risk of liability for discrimination, thereby increasing the uncertainty of the investments’ returns. Because of the well-known and widely accepted risk-return tradeoff, firms that face increased uncertainty in investment returns will demand higher expected returns from the investments they pursue. This demand for higher returns means that some projects that would have been pursued under more limited digital discrimination rules will not be pursued under the current Order.

The Order will not only stifle new deployment to unserved areas, but also will delay network upgrades and maintenance out of fear of alleged disparate effects. At the extreme, providers will be faced with the choice to upgrade everyone or upgrade no one. Because they cannot afford to upgrade everyone, they will upgrade no one.

It might be argued that providers could avoid some of the ex post regulatory risk by seeking ex ante pre-approval under the FCC’s advisory opinion process. Such processes are costly and are not certain to result in approval. Even if a plan is approved, the FCC reserves the right to rescind the pre-approval. See Pet. Add. 75, Order ¶ 156 (“[A]dvisory opinions will be issued without prejudice to the Enforcement Bureau’s or the Commission’s ability to reconsider the questions involved, and rescind the opinion. Because advisory opinions would be issued by the Enforcement Bureau, they would also be issued without prejudice to the Commission’s right to later rescind or revoke the findings.”). Under the Order’s informal complaint procedures, third parties can allege discriminatory effects associated with pre-approved policies and practices that could result in the rescission of pre-approval. The result is an unambiguous increase in deployment and operating costs, even with pre-approval.

Moreover, by imposing liability for disparate impacts outside the control of covered broadband providers, the Order produces results inconsistent with the purpose of the IIJA because parties cannot conform their conduct to the rules. Among the 7% of households who do not use the internet at home, more than half of Current Population Survey (CPS) respondents indicated that they “don’t need it or [are] not interested.” George S. Ford, Confusing Relevance and Price: Interpreting and Improving Surveys on Internet Non-adoption, 45 Telecomm. Pol’y, Mar. 2021. ISPs sell broadband service, but they cannot force uninterested people to buy their product.

Only 2-3% of U.S. households that have not adopted at-home broadband indicate it is because of a lack of access. Eric Fruits & Geoffrey Manne, Quack Attack: De Facto Rate Regulation in Telecommunications (ICLE Issue Brief 2023-03-30) at Table 1, available at https://laweconcenter.org/wp-content/uploads/2023/03/De-Facto-Rate-Reg-Final-1.pdf. And even this tiny fraction is driven by factors such as topography, population density, and projected consumer demand. Differences in these factors will be linked to differences in broadband deployment, but there is little that an ISP can do to change them. If the FCC’s command could make the mountainous regions into flat plains, it would have done so already. It is nonsensical to hold liable a company attempting to overcome obstacles to deployment because it does not do so simultaneously everywhere. And it is not a rational course of action to address a digital divide by imposing liability on entities that cannot fix the underlying causes driving it.

Punishment exacted on an ISP will not produce the broadband access the statute envisions for all Americans. In fact, it will put that access further out of reach by incentivizing ISPs to reduce the speed of deployments and upgrades so that they do not produce inadvertent statistical disparities. Given the statute’s objective of enhancing broadband access, the FCC’s rulemaking must contain a process for achieving greater access. The Order does the opposite and, therefore, cannot be what Congress intended. Cf. Inclusive Communities, 576 U.S. at 544 (“If the specter of disparate-impact litigation causes private developers to no longer construct or renovate housing units for low-income individuals, then the FHA would have undermined its own purpose as well as the free-market system.”).

The Order will result in less broadband investment by essentially making the FCC the central planner of all deployment and pricing decisions. This is inconsistent with the purpose of Section 60506, making the rule arbitrary and capricious.

V. The Order’s Vagueness Gives the FCC Unbounded Power

The Order’s digital discrimination rule is vague because it does not have “sufficient definiteness that ordinary people can understand what conduct is prohibited.” Kolender v. Lawson, 461 U.S. 352, 357 (1983). As a result, the FCC has claimed unbounded power to engage in “arbitrary and discriminatory enforcement.” Id. As argued above, the disparate impact standard means that anything that is correlated with income, which includes many things that may be benignly relevant to deployment and pricing decisions, could give rise to a possible violation of the Order.

While a covered entity could argue that there are economic or technical feasibility reasons for a policy or practice, the case-by-case nature of enforcement outlined in the Order means that no one can be sure of whether they are on the right side of the law. See 47 CFR §16.5(e) (“The Commission will determine on a case-by-case basis whether genuine issues of technical or economic feasibility justified the adoption, implementation, or utilization of a [barred] policy or practice…”).

This vagueness is not cured by the presence of the Order’s advisory opinion process because the FCC retains the right to bring an enforcement action anyway after reconsidering, rescinding, or revoking an opinion. See 47 CFR §16.7 (“An advisory opinion states only the enforcement intention of the Enforcement Bureau as of the date of the opinion, and it is not binding on any party. Advisory opinions will be issued without prejudice to the Enforcement Bureau or the Commission to reconsider the questions involved, or to rescind or revoke the opinion. Advisory opinions will not be subject to appeal or further review”). In other words, there is no basis for concluding a covered entity has “the ability to clarify the meaning of the regulation by its own inquiry, or by resort to an administrative process.” Cf. Village of Hoffman Estates v. Flipside, Hoffman Estates, Inc., 455 U.S. 489, 498 (1982). The FCC may engage in utterly arbitrary and discriminatory enforcement under the Order.

Moreover, the Order’s expansive definition of covered entities to include any “entities that provide services that facilitate and affect consumer access to broadband internet access service,” 47 CFR § 16.2 (definition of “Covered entity,” which includes “Entities that otherwise affect consumer access to broadband internet access service”), also leads to vagueness about which entities are subject to the digital discrimination rules. This would arguably include state and local governments and nonprofits, as well as multi-family housing owners, many of whom may have no idea that they are subject to the FCC’s digital discrimination rules, much less how to comply with them.

The Order is therefore void for vagueness because it does not allow a person of ordinary intelligence to know whether they are complying with the law and gives the FCC nearly unlimited enforcement authority.

Conclusion

For the foregoing reasons, ICLE and ITIF urge the Court to set aside the FCC’s Order.

Telecommunications & Regulated Utilities

Net Neutrality and the Paradox of Private Censorship

TOTM

With yet another net-neutrality order set to take effect (the link is to the draft version circulated before today’s Federal Communications Commission vote; the final version is expected to be published in a few weeks) and to impose common-carriage requirements on broadband internet-access service (BIAS) providers, it is worth considering how the question of whether online platforms (whether they be social media or internet service providers) have the right to editorial discretion keeps shifting.

Read the full piece here.

Telecommunications & Regulated Utilities

Children’s Online Safety and Privacy Legislation


TL;DR

Background: There has been recent legislative movement on a pair of major bills related to children’s online safety and privacy. H.R. 7891, the Kids Online Safety Act (KOSA), has 62 cosponsors in the U.S. Senate. Meanwhile, H.R. 7890, the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), also has bipartisan support within the U.S. Senate Commerce Committee. At the time of publication, these and a slate of other bills related to children’s online safety and privacy were scheduled to be marked up April 17 by the U.S. House Energy and Commerce Committee.

But… If enacted, these bills’ primary effect is likely to be less free online content for minors. Raising the regulatory burdens on online platforms that host minors and restricting creators’ ability to monetize their content are both likely to yield greater investment in identifying and excluding minors from online spaces, rather than in creating safe and vibrant online ecosystems and content that cater to them. In other words, these bills could lead to minors losing the many benefits of internet usage. A more cost-effective way to address potential online harms to teens and children would be to encourage parents and minors to make use of available tools to avoid those harms and to dedicate more resources to prosecuting those who use online platforms to harm minors.

KEY TAKEAWAYS

RAISING THE COST TO SERVE MINORS COULD LEAD TO THEIR EXCLUSION

If the costs of serving minors surpass the revenues that online platforms can generate from serving them, those platforms will invest in excluding underage users, rather than creating safe and vibrant content and platforms for them. 

KOSA would substantially increase the costs that online platforms bear for serving minors. The bill would require a “high impact online company” to exercise “reasonable care” in its design features to “prevent and mitigate” certain harms. These harms include certain mental-health disorders and patterns indicating or encouraging compulsive use by minors, as well as physical violence, cyberbullying, and discriminatory harassment. Moreover, KOSA would require all covered platforms to implement default safeguards to limit design features that encourage minors’ use of the platforms and to control the use of personalized recommendation systems.

RESTRICTING TARGETED ADVERTISING LEADS TO LESS FREE CONTENT

A significant portion of internet content is delivered by what economists call multisided platforms. On one side of the platform, users enjoy free access to content, while on the other side, advertisers are granted a medium to reach users. In effect, advertisers subsidize users’ access to online content. Platforms also collect data from users in order to serve them targeted ads, the most lucrative form of advertising. Without those ads, there would be less revenue to fund access to, and creation of, content. This is no less true when it comes to content of interest to minors.

COPPA 2.0 would expand the protections granted by the Children’s Online Privacy Protection Act of 1998 to users under age 13 to also cover those between 13 and 17 years of age. Where the current law requires parental consent to collect and use persistent identifiers for “individual-specific advertising” directed to children under age 13, COPPA 2.0 would require the verifiable consent of the teen or a parent to serve such ads to teens. 

Obtaining verifiable consent has proven sufficiently costly under the current COPPA rule that almost no covered entities make efforts to obtain it. COPPA has instead largely prevented platforms from monetizing children’s content, which has meant that less of it is created. Extending the law to cover teens would generate similar results. Without the ability to serve them targeted ads, platforms will have less incentive to encourage the creation of teen-focused content.

DE-FACTO AGE VERIFICATION REQUIREMENTS

To comply with laws designed to protect minors, online platforms will need to verify whether their users are minors. While both KOSA and COPPA 2.0 disclaim establishing any age-verification requirements or the collection of any data not already collected “in the normal course of business,” they both establish constructive knowledge standards for violators (i.e., “should have known” or “knowledge fairly implied on the basis of objective circumstances”). Online platforms will need to be able to identify which of their users are minors in order to comply with the prohibition on serving them personalized recommendations (KOSA) or targeted advertising (COPPA 2.0).

Age-verification requirements have been found to violate the First Amendment, in part because they aren’t the least-restrictive means to protect children online. As one federal district court put it: “parents may rightly decide to regulate their children’s use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this.”

A BETTER WAY FORWARD

Educating parents and minors about those widely available practical and technological tools to mitigate the harms of internet use is a better way to protect minors online, and would pass First Amendment scrutiny. Another way to address the problem would be to increase the resources available to law enforcement to go after predators. The Invest in Child Safety Act of 2024 is one such proposal to give overwhelmed investigators the necessary resources to combat child sexual exploitation.

For more on how to best protect minors online, see “A Law & Economics Approach to Social Media Regulation” and “A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes.” 

Innovation & the New Economy

Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social-Media Platforms

ICLE White Paper

“If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. If there are any circumstances which permit an exception, they do not now occur to us.” – West Virginia Board of Education v. Barnette (1943)[1]

“Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.” – United States v. Alvarez (2012)[2]

Introduction

In April 2022, the U.S. Department of Homeland Security (DHS) announced the creation of the Disinformation Governance Board, which was intended to coordinate the agency’s response to the potential effects of disinformation threats.[3] Almost immediately upon its announcement, the agency was met with criticism. Congressional Republicans denounced the board as “Orwellian,”[4] and it was eventually disbanded.[5]

The DHS incident followed years of congressional hearings in which Republicans had castigated leaders of the so-called “Big Tech” firms for allegedly censoring conservatives, while Democrats had criticized those same leaders for failing to combat and remove misinformation.[6] Moreover, media outlets have reported on systematic attempts by government officials to encourage social-media companies to remove posts and users based on alleged misinformation. For example, The Intercept in 2022 reported on DHS efforts to set up backchannels with Facebook for flagging posts and misinformation.[7]

The “Twitter Files” released earlier this year by the company’s CEO Elon Musk—and subsequently reported on by journalists Bari Weiss, Matt Taibbi, and Michael Shellenberger—suggest considerable efforts by government agents to encourage Twitter to remove posts as misinformation and to bar specific users for being purveyors of misinformation.[8] What’s more, communications unveiled as part of discovery in the Missouri v. Biden case have offered further evidence of a variety of government actors cajoling social-media companies to remove alleged misinformation, along with the development of a considerable infrastructure to facilitate what appears to be a joint project to identify and remove the same.[9]

With all of these details coming into public view, the question that naturally arises is what role, if any, does the government have in regulating misinformation disseminated through online platforms? The thesis of this paper is that the First Amendment forecloses government agents’ ability to regulate misinformation online, but it protects the ability of private actors—i.e., the social-media companies themselves—to regulate misinformation on their platforms as they see fit.

The primary reason for this conclusion is the state-action doctrine, which distinguishes public and private action. Public actions are subject to constitutional constraints (such as the First Amendment), while private actors are free from such regulation.[10] A further thesis of this paper is that application of the state-action doctrine to the question of misinformation on online platforms promotes the bedrock constitutional value of “protect[ing] a robust sphere of individual liberty,”[11] while also creating outlets for more speech to counteract false speech.[12]

Part I of this paper outlines a law & economics theory of state-action requirements under the First Amendment and explains its importance for the online social-media space. The right to editorial discretion and Section 230 will also be considered as part of this background law, which places the responsibility for regulating misinformation on private actors like social-media platforms. Such platforms must balance the interests of each side of their platforms to maximize value. This means, in part, setting moderation rules on misinformation that keep users engaged in order to provide increased opportunities to generate revenue from advertisers.

Part II considers various theories of state action and whether they apply to social-media platforms. It appears clear that some state-action theories—like the idea that social-media companies exercise a “traditional, exclusive public function”—are foreclosed in light of Manhattan Community Access Corp. v. Halleck. But it remains an open question whether a social-media company could be found to be a state actor under a coercion or collusion theory based on facts that have been revealed in the Twitter Files and in litigation over this question.

Part III completes the First Amendment analysis of what government agents can do to regulate misinformation on social media. The answer: not much. The U.S. Constitution forbids direct regulation of false speech simply because it is false. A more difficult question concerns how to define truth and falsity in contested areas of fact, where legal rules may run into vagueness concerns. We recommend that government agents instead invest in telling their own version of the facts; they have no authority to mandate or pressure social-media companies into regulating misinformation.

I.        A Theory of State Action and Speech Rights on Online Social-Media Platforms

Among the primary rationales for the First Amendment’s speech protections is to shield the “marketplace of ideas”:[13] in most circumstances, the best remedy for false or harmful speech is “more speech, not enforced silence.”[14] But this raises the question of why private abridgments of speech—such as those enforced by powerful online social-media platforms—should not be subject to the same First Amendment restrictions as government action.[15] After all, if the government can’t intervene in the marketplace of ideas by deciding what is true or false, then why should that privilege be held by Facebook or Google?

Here enters the state-action doctrine: the legal principle (discussed further below) that constitutional constraints apply only to government action, though in some cases private entities may function as extensions of the state. When they do, the actions of such private actors give rise to the same First Amendment concerns as if the state had acted on its own. It has been said that there is insufficient theorizing about the “why” of the state-action doctrine.[16] What follows is a theory of why the state-action doctrine is fundamental to protecting those private intermediaries who are best positioned to make marginal decisions about the benefits and harms of speech, including social-media companies through their moderation policies on misinformation.

Governance structures are put in place by online platforms as a response to market pressures to limit misinformation and other harmful speech. At the same time, there are also market pressures to not go too far in limiting speech.[17] The balance that must be struck by online intermediaries is delicate, and there is no reason to expect government regulators to do a better job than the marketplace in determining the optimal rules. The state-action doctrine protects a marketplace for speech governance by limiting the government’s reach into these spaces.

In order to discuss the state-action doctrine meaningfully, we must first outline its basic contours and the “why” identified by the Supreme Court. In Part I.A, we will provide a description of the Supreme Court’s most recent First Amendment state-action decision, Manhattan Community Access Corp. v. Halleck, where the Court both defines and defends the doctrine’s importance. We will also briefly consider how the state-action doctrine’s protection of private ordering is bolstered by the right to editorial discretion and by Section 230 of the Communications Decency Act of 1996.

We will then consider whether there are good theoretical reasons to support the First Amendment’s state-action doctrine. In Part I.B, we will apply insights from the law & economics tradition associated with the interaction of institutions and dispersed knowledge.[18] We argue that the First Amendment’s dichotomy between public and private action allows for the best use of dispersed knowledge in society by creating a marketplace for speech governance. We also argue that, by protecting this marketplace for speech governance from state action, the First Amendment creates the best institutional framework for reducing harms from misinformation.[19]

A.      The State-Action Doctrine, the Right to Editorial Discretion, and Section 230

At its most basic, the First Amendment’s state-action doctrine says that government agents may not restrict speech, whether through legislation, rules, or enforcement actions, or by putting undue burdens on speech exercised on government-owned property.[20] Such restrictions will receive varying levels of scrutiny from the courts, depending on the degree of incursion. On the other hand, the state-action doctrine means that, as a general matter, private actors may set rules for what speech they are willing to abide or promote, including rules for speech on their own property. With a few exceptions where private actors may be considered state actors,[21] these restrictions will receive no scrutiny from courts, and the government may actually help remove those who break privately set speech rules.[22]

In Halleck, the Court set out a strong defense of the state-action doctrine under the First Amendment. Justice Brett Kavanaugh, writing for the majority, defended the doctrine based on the text and purpose of the First Amendment:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law … abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law ….” § 1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech…

In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty…

It is sometimes said that the bigger the government, the smaller the individual. Consistent with the text of the Constitution, the state-action doctrine enforces a critical boundary between the government and the individual, and thereby protects a robust sphere of individual liberty. Expanding the state-action doctrine beyond its traditional boundaries would expand governmental control while restricting individual liberty and private enterprise.[23]

Applying the state-action doctrine, the Court held that even the heavily regulated operation of cable companies’ public-access channels constituted private action. The Court opined that “merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.”[24] The Court went on to explain:

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.[25]

Similarly, the Court has found that private actors have the right to editorial discretion that can’t generally be overcome by a government compelling the carriage of speech.[26] In Miami Herald v. Tornillo, the Supreme Court ruled that a right-to-reply statute for political candidates was unconstitutional because it “compel[s] editors or publishers to publish that which ‘reason tells them should not be published.’”[27] The Court found that the marketplace of ideas was still worth protecting from government-compelled speech, even in a media environment where most localities only had one (monopoly) newspaper.[28] The effect of Tornillo was to establish a general rule whereby the limits on media companies’ editorial discretion were defined not by government edict but by “the acceptance of a sufficient number of readers—and hence advertisers—to assure financial success; and, second, the journalistic integrity of its editors and publishers.”[29]

Section 230 of the Communications Decency Act supplements the First Amendment’s protections by granting “providers and users of an interactive computer service” immunity from (most) lawsuits for speech generated by other “information content providers” on their platforms.[30] The effect of this statute is far-ranging in its implications for online speech. It protects online social-media platforms from lawsuits for the third-party speech they host, as well as for the platforms’ decisions to take certain third-party speech down.[31]

As with the underlying First Amendment protections, Section 230 augments social-media companies’ ability to manage misinformation on their services. Specifically, it shields them from an unwarranted flood of litigation for failing to remove the defamatory speech of third parties when they make efforts to remove some undesirable speech from their platforms.

B.      Regulating Speech in Light of Dispersed Knowledge[32]

One of the key insights of the late Nobel laureate economist F.A. Hayek was that knowledge is dispersed.[33] In other words, no one person or centralized authority has access to all the tidbits of knowledge possessed by countless individuals spread out through society. Even the most intelligent among us have but a little bit more knowledge than the least intelligent. Thus, the economic problem facing society is not how to allocate “given” resources, but how to “secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know.”[34]

This is particularly important when considering the issue of regulating alleged misinformation. As noted above, the First Amendment is premised on the idea that a marketplace of ideas will lead to the best information eventually winning out, with false ideas pushed aside by true ones.[35] Much like the economic problem, there are few, if any, given answers that are true for all time when it comes to opinions or theories in science, the arts, or any other area of knowledge. Thus, the question is: how do we establish a system that promotes the generation and adoption of knowledge, recognizing there will be “market failures” (and possibly, corresponding “government failures”) along the way?

Like virtually any other human activity, speech has benefits and costs. It is ultimately subjective individual preference that determines how to manage those tradeoffs. Although the First Amendment protects speech from governmental regulation, that does not mean that all speech is acceptable or must be tolerated. As noted above, U.S. law places the power to decide what speech to allow in the public square firmly into the hands of the people. The people’s preferences are expressed individually and collectively through their participation in online platforms, news media, local organizations, and other fora, and it is via that process that society arrives at workable solutions to such questions.

Very few people believe that all speech protected by the First Amendment should be without consequence. Likewise, very few people, if pressed, would really believe it is generally a wise idea to vest the power to determine what is true or false in a vast governmental bureaucracy. Instead, proposals for government regulation of misinformation generally are offered as an expedient to effect short-term political goals that are perceived to be desirable. But given the dispersed nature of knowledge and given that very few “facts” are set in stone for all time,[36] such proposals threaten to undermine the very process through which new knowledge is discovered and disseminated.

Moreover, such proposals completely fail to account for how “bad” speech has, in fact, long been regulated via informal means, or what one might call “private ordering.” In this sense, property rights have long played a crucial role in determining the speech rules of any given space. If a man were to come into another man’s house and start calling his wife racial epithets, the homeowner would not only have the right to ask that person to leave but could exercise his right as a property owner to eject the trespasser—if necessary, calling the police to assist him. Similarly, one could not go to a restaurant and yell at the top of one’s lungs about political issues and expect the venue—even one designated as a “common carrier” or place of public accommodation—to allow the tirade to continue.[37] A Christian congregation may in most circumstances be extremely solicitous of outsiders with whom it wants to share its message, but it would likewise be well within its rights to prevent individuals from preaching about Buddhism or Islam within its walls.

In each of these examples, the individual or organization is entitled to eject individuals on the basis of their offensive (or misinformed) speech with no cognizable constitutional complaint about the violation of rights to free speech. The nature of what is deemed offensive is obviously context- and listener-dependent, but in each example, the proprietors of the relevant space are able to set and enforce appropriate speech rules. By contrast, a centralized authority would, by its nature, be forced to rely on far more generalized rules. As the economist Thomas Sowell once put it:

The fact that different costs and benefits must be balanced does not in itself imply who must balance them—or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.[38]

When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to listen. Asking government to make categorical decisions for all of society is substituting centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions regarding how and under what terms individuals may relate to one another—which can evolve over time in response to changes in what individuals find acceptable—governments can only hand down categorical guidelines: “you must allow a, b, and c speech” or “you must not allow x, y, and z speech.”

It is therefore a fraught proposition to suggest that government could have both a better understanding of what is true and false, and superior incentives to disseminate the truth, than the millions of individuals who make up society.[39] Indeed, it is a fundamental aspect of both the First Amendment’s Establishment Clause[40] and of free-speech jurisprudence[41] that the government is in no position to act as an arbiter of what is true or false.

Thus, by excluding the government as a truth arbiter, the First Amendment protects not only a marketplace of ideas but also a marketplace for speech governance. Private actors can set the rules for speech on their own property, including what is considered true or false, with minimal interference from the government. And as the Court put it in Halleck, opening one’s property to the speech of third parties does not require the owner to accept all comers.[42]

This is particularly relevant in the social-media sphere. Social-media companies must resolve social-cost problems among their users.[43] In his famous work “The Problem of Social Cost,” the economist Ronald Coase argued that the traditional approach to regulating externalities was wrong, because it failed to apprehend the reciprocal nature of harms.[44] For example, the noise from a factory is a potential cost to the doctor next door who consequently can’t use his office to conduct certain testing, and simultaneously the doctor moving his office next door is a potential cost to the factory’s ability to use its equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the factory could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the factory to reduce the sound of its machines.[45] Similarly, on social media, misinformation and other speech that some users find offensive may be inoffensive or even patently true to other users. There is a reciprocal nature to the harms of offensive speech, much as with other forms of nuisance. But unlike the situation of the factory owner and the doctor, social-media users use the property of social-media companies, who must balance these varied interests to maximize the platform’s value.

Social-media companies are what economists call “multi-sided” platforms.[46] They are profit seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users will abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users sufficiently engaged that there are advertisers who will pay to reach them.

In the cases of Facebook, Twitter, and YouTube, the platforms have set content-moderation standards that restrict many kinds of speech, including misinformation.[47] These policies are viewed negatively by some users, particularly given that the First Amendment would foreclose the government from regulating those same types of content. But social-media companies’ ability to set and enforce moderation policies could actually be speech-enhancing. Because social-media companies are motivated to maximize the value of their platforms, for any given policy whose enforcement leaves some users disgruntled, there are likely to be an even greater number of users who agree with the policy. Moderation policies end up being speech-enhancing when they promote more speech overall, as the proliferation of harmful speech may push potential users away from the platforms.

Currently, the major social-media companies all rely on advertising-driven revenue models. As a result, their primary goal is to maximize user engagement. As we have recently seen, this can lead to situations where advertisers threaten to pull ads if they don’t like the platform’s speech-governance decisions. After Elon Musk began restoring the accounts of Twitter users who had been banned for what the company’s prior leadership believed was promoting hate speech and misinformation, major advertisers left the platform.[48] A different business model (about which Musk has been hinting for some time[49]) might generate different incentives for what speech to allow and disallow. There would, however, still be a need for any platform to allow some speech and not other speech, in line with the expectations of its user base and advertisers. The bottom line is that the motive to maximize profits and the tendency of markets to aggregate information leave the platforms themselves best positioned to make these incremental decisions about their users’ preferences, in response to the feedback mechanism of consumer demand.

Moreover, there is a fundamental difference between private action and state action, as alluded to by the Court in Halleck: one is voluntary, and the other based on coercion. If Facebook or Twitter suspends a user for violating community rules, that decision terminates a voluntary association. When the government removes someone from a public forum for expressing legal speech, its censorship and use of coercion are inextricably intertwined. The state-action doctrine empowers courts to police this distinction because the threats to liberty are much greater when one party in a dispute over the content of a particular expression is also empowered to impose its will with the use of force.

Imagine instead that courts were to decide that they, in fact, were best situated to balance private interests in speech against other interests, or even among speech interests. There are obvious limitations on courts’ access to knowledge that couldn’t be easily overcome through the processes of adjudication, which depend on the slow development of articulable facts and categorical reasoning over a lengthy period of time and an iterative series of cases. Private actors, on the other hand, can act relatively quickly and incrementally in response to ever-changing consumer demand in the marketplace. As Sowell put it:

The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.

The voluntariness of many actions—i.e., personal freedom—is valued by many simply for its own sake. In addition, however, voluntary decision-making processes have many advantages which are lost when courts attempt to prescribe results rather than define decision-making boundaries.[50]

The First Amendment’s complementary right of editorial discretion also protects the right of publishers, platforms, and other speakers to be free from an obligation to carry or transmit government-compelled speech.[51] In other words, not only is private regulation of speech not state action, but as a general matter, private regulation of speech is protected by the First Amendment from government action. The limits on editorial discretion are marketplace pressures, such as user demand and advertiser support, and social mores about what is acceptable to be published.[52]

There is no reason to think that social-media companies today are in a different position than the newspaper in Tornillo was. These companies must determine what, how, and where content is presented within their platforms. While this right of editorial discretion protects social-media companies’ moderation decisions, its benefits accrue to society at large, whose members get to use those platforms to interact with people from around the world and thereby grow the “marketplace of ideas.”

Moreover, Section 230 amplifies online platforms’ ability to make editorial decisions by immunizing most of their choices about third-party content. In fact, it is interesting to note that the heading for Section 230 is “Protection for private blocking and screening of offensive material.”[54] In other words, Section 230 is meant, along with the First Amendment, to establish a market for speech governance free from governmental interference.

Social-media companies’ abilities to differentiate themselves based on functionality and moderation policies are important aspects of competition among them.[55] How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes.[56] Market competition, not government power, has enabled internet users to have more avenues than ever to get their message out.[57]

If social-media users and advertisers demand less of the kinds of content commonly considered to be misinformation, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that centralizing decisions about misinformation by putting them in the hands of government officials would promote the societal interest in determining the truth.

It is true that content-moderation policies make it more difficult for speakers to communicate some messages, but that is precisely why they exist. There is a subset of protected speech to which many users do not wish to be subject, including at least some perceived misinformation. Moreover, speakers have no inherent right to an audience on a social-media platform. There are always alternative means to debate the contested issues of the day, even if it may be more costly to access the desired audience.

In sum, the First Amendment’s state-action doctrine assures us that government may not decide what is true or false, nor restrict a citizen’s ability to reach an audience with ideas. Governments do, however, protect social-media companies’ rights to exercise editorial discretion on their own property, including their right to make decisions about regulating potential misinformation. This puts the decisions in the hands of the entities best placed to balance the societal demands for online speech and limits on misinformation. In other words, the state-action doctrine protects the marketplace of ideas.

II.      Are Online Platforms State Actors?

As the law currently stands, the First Amendment grants online platforms the right to exercise their own editorial discretion, free from government intervention. By contrast, if government agents pressure or coerce platforms into declaring certain speech misinformation, or to remove certain users, a key driver of the marketplace of ideas—the action of differentiated actors experimenting with differing speech policies—will be lost.[58]

Today’s public debate is not actually centered on a binary choice between purely private moderation and legislatively enacted statutes to literally define what is true and what is false. Instead, the prevailing concerns relate to the circumstances under which some government activity—such as chastising private actors for behaving badly, or informing those actors about known threats—might transform online platforms’ moderation policies into de facto state actions. That is, at what point do private moderation decisions constitute state action? To this end, we will now consider sets of facts under which online platforms could be considered state actors for the purposes of the First Amendment.

In Halleck, the Supreme Court laid out three exceptions to the general rule that private actors are not state actors:

Under this Court’s cases, a private entity can qualify as a state actor in a few limited circumstances—including, for example, (i) when the private entity performs a traditional, exclusive public function; (ii) when the government compels the private entity to take a particular action; or (iii) when the government acts jointly with the private entity.[59]

Below, we will consider each of these exceptions as applied to online social-media platforms. Part II.A will make the case that Halleck decisively forecloses the theory that social-media platforms perform a “traditional, exclusive public function,” a conclusion many federal courts have reached. Part II.B will consider whether government agents have coerced or encouraged platforms to make specific enforcement decisions on misinformation in ways that would transform their moderation actions into state action. Part II.C will look at whether the social-media companies have essentially colluded with government actors, through either joint action or a relationship sufficiently intertwined as to be symbiotic.

A.      ‘Traditional, Exclusive Public Function’

The classic case that illustrates the traditional, exclusive public function test is Marsh v. Alabama.[60] There, the Supreme Court found that a company town, while private, was a state actor for the purposes of the First Amendment. At issue was whether the company town could prevent a Jehovah’s Witness from passing out literature on the town’s sidewalks. The Court noted that “[o]wnership does not always mean absolute dominion. The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.”[61] The Court then situated the question as one where it was being asked to balance property rights with First Amendment rights. Within that framing, it found that the First Amendment’s protections should be in the “preferred position.”[62]

Although nothing in Marsh itself suggested that its holding was limited to company towns or to a “traditional, exclusive public function” test, later courts eventually cabined it. Before they did, however, there was a time when it looked like the Court would expand this reasoning to other private actors who were certainly not engaged in a traditional, exclusive public function. A trio of cases involving shopping malls eventually ironed this out.

First, in Food Employees v. Logan Valley Plaza,[63] the Court—noting the “functional equivalence” of the business block in Marsh and the shopping center[64]—found that the mall could not restrict the peaceful picketing of a grocery store by a local food-workers union.[65]

But then, the Court seemingly cabined both Logan Valley and Marsh just a few years later in Lloyd Corp. v. Tanner.[66] Noting the “economic anomaly” that was company towns, the Court said Marsh “simply held that where private interests were substituting for and performing the customary functions of government, First Amendment freedoms could not be denied where exercised in the customary manner on the town’s sidewalks and streets.”[67] Moreover, the Court found that Logan Valley applied “only in a context where the First Amendment activity was related to the shopping center’s operations.”[68] The general rule, according to the Court, was that private actors had the right to restrict access to property for the purpose of exercising free-speech rights.[69] Importantly, “property does not lose its private character merely because the public is generally invited to use it for designated purposes.”[70] Since the mall did not dedicate any part of its shopping center to public use in a way that would entitle the protestors to use it, the Court allowed it to restrict handbilling by Vietnam War protestors within the mall.[71]

Then, in Hudgens v. NLRB,[72] the Court went a step further, reversing Logan Valley and severely cabining Marsh. The general rule became that “the constitutional guarantee of free speech is a guarantee only against abridgment by government, federal or state.”[73] Marsh was reduced to a narrow exception, limited to situations where private property has taken on all the attributes of a town.[74] The Court also found that the reasoning—if not the holding—of Tanner had already reversed Logan Valley.[75] The Court concluded bluntly that “under the present state of the law the constitutional guarantee of free expression has no part to play in a case such as this.”[76] In other words, private actors, even those that open themselves up to the public, are not subject to the First Amendment. Following Hudgens, the Court would further limit the public-function test to “the exercise by a private entity of powers traditionally exclusively reserved to the State.”[77] Thus, the “traditional, exclusive public function” test.

Despite this history, recent litigants against online social-media platforms have argued, often citing Marsh, that these platforms are the equivalent of public parks or other public forums for speech.[78] On top of that, the Supreme Court itself has described social-media platforms as the “modern public square.”[79] The Court emphasized the importance of online platforms because they:

allow[] users to gain access to information and communicate with one another about it on any subject that might come to mind… [give] access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to “become a town crier with a voice that resonates farther than it could from any soapbox.”[80]

Seizing upon this language, many litigants have argued that online social-media platforms are public forums for First Amendment purposes. To date, all have failed in federal court under this theory,[81] and the Supreme Court officially foreclosed it in Halleck.

In Halleck, the Court considered whether a public-access channel operated by a cable provider was a government actor for purposes of the First Amendment under the traditional, exclusive public function test. Summarizing the caselaw, the Court said the test required more than just a finding that the government at some point exercised that function, or that the function serves the public good. Instead, the government must have “traditionally and exclusively performed the function.”[82]

The Court then found that operating as a public forum for speech is not a function traditionally and exclusively performed by the government. On the contrary, a private actor that provides a forum for speech normally retains “editorial discretion over the speech and speakers in the forum”[83] because “[it] is not an activity that only governmental entities have traditionally performed.”[84] The Court reasoned that:

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.[85]

If the applicability of Halleck to the question of whether online social-media platforms are state actors under the “traditional, exclusive public function” test isn’t already clear, appellate courts have squarely addressed the question. In Prager University v. Google, LLC,[86] the 9th U.S. Circuit Court of Appeals took on the question of whether social-media platforms are state actors subject to the First Amendment. Prager relied primarily upon Marsh and Google’s representations that YouTube is a “public forum” to argue that YouTube is a state actor under the traditional, exclusive public function test.[87] Citing primarily Halleck, along with a healthy dose of both Hudgens and Tanner, the 9th Circuit rejected this argument for the reasons noted above.[88] YouTube was not a state actor just because it opened itself up to the public as a forum for free speech.

In sum, there is no basis for arguing that online social-media platforms fit into the narrow Marsh exception to the general rule that private actors can use their own editorial discretion over their own digital property to set their own rules for speech, including misinformation policies.

That this exception to the general private/state action dichotomy has been limited as applied to social-media platforms is consistent with the reasoning laid out above on the law & economics of the doctrine. Applying the Marsh theory to social-media companies would make all of their moderation decisions subject to First Amendment analysis. As will be discussed more below in Part III.A, this would severely limit the platforms’ ability to do anything at all with regard to online misinformation, since government actors can do very little to regulate such speech consistent with the First Amendment.

The inapplicability of the Marsh theory of state action means that a robust sphere of individual liberty will be protected. Social-media companies will be able to engage in a vibrant “market for speech governance” with respect to misinformation, responding to the perceived demands of users and advertisers and balancing those interests in a way that maximizes the value of their platforms in the presence of market competition.

B.      Government Compulsion or Encouragement

In light of the revelations highlighted in the introduction of this paper from The Intercept, the “Twitter Files,” and subsequent litigation in Missouri v. Biden,[89] the more salient theory of state action is that online social-media companies were either compelled by or colluded in joint action with the federal government to censor speech under their misinformation policies. This section will consider the government compulsion or encouragement theory, and Part II.C below will consider the joint-action/entwinement theory.

At a high level, the government may not coerce or encourage private actors to do what it may not itself do constitutionally.[90] State action can be found for a private decision under this theory “only when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.”[91] But “[m]ere approval of or acquiescence in the initiatives of a private party is not sufficient to justify holding the State responsible” for private actions.[92] While each case is very fact-specific,[93] courts have developed several tests to determine when government compulsion or encouragement would transform a private actor into a state actor for constitutional purposes.

For instance, in Bantam Books v. Sullivan,[94] the Court considered whether letters sent by a legislatively created commission to book distributors—declaring certain books and magazines objectionable for sale or distribution—were sufficient to transform into state action the distributors’ subsequent decision to stop carrying the listed publications. The commission had no power to apply formal legal sanctions, and there were no bans or seizures of books.[95] In fact, the book distributors were technically “free” to ignore the commission’s notices.[96] Nonetheless, the Court found “the Commission deliberately set about to achieve the suppression of publications deemed ‘objectionable’ and succeeded in its aim.”[97] Particularly important to the Court was that the notices could be seen as a threat to refer the distributors for prosecution, regardless of how the commission styled them. As the Court stated:

People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around, and [the distributor’s] reaction, according to uncontroverted testimony, was no exception to this general rule. The Commission’s notices, phrased virtually as orders, reasonably understood to be such by the distributor, invariably followed up by police visitations, in fact stopped the circulation of the listed publications ex proprio vigore. It would be naive to credit the State’s assertion that these blacklists are in the nature of mere legal advice, when they plainly serve as instruments of regulation…[98]

Similarly, in Carlin Communications v. Mountain States Telephone Co.,[99] the 9th U.S. Circuit Court of Appeals found it was state action when a deputy county attorney threatened prosecution of a regional telephone company for carrying an adult-entertainment messaging service.[100] “With this threat, Arizona ‘exercised coercive power’ over Mountain Bell and thereby converted its otherwise private conduct into state action…”[101] The court did not find it relevant whether or not the motivating reason for the removal was the threat of prosecution or the telephone company’s independent decision.[102]

In a more recent case dealing with Backpage.com, the 7th U.S. Circuit Court of Appeals found a sheriff’s campaign to shut down the site by cutting off payment processing for ads from Visa and Mastercard was impermissible under the First Amendment.[103] There, the sheriff sent a letter to the credit-card companies asking them to “cease and desist” from processing payment for advertisements on Backpage.com and for “contact information” for someone within the companies he could work with.[104] The court spent considerable time distinguishing between “attempts to convince and attempts to coerce,”[105] coming to the conclusion that “Sheriff Dart is not permitted to issue and publicize dire threats against credit card companies that process payments made through Backpage’s website, including threats of prosecution (albeit not by him, but by other enforcement agencies that he urges to proceed against them), in an effort to throttle Backpage.”[106] The court also noted “a threat is actionable and thus can be enjoined even if it turns out to be empty—the victim ignores it, and the threatener folds his tent.”[107]

In sum, the focus under the coercion or encouragement theory is on what the state objectively did and not on the subjective understanding of the private actor. In other words, the question is whether the state action is reasonably understood as coercing or encouraging private action, not whether the private actor was actually responding to it.

To date, several federal courts have dismissed claims that social-media companies are state actors under the compulsion/encouragement theory, often distinguishing the above cases on the grounds that the facts did not establish a true threat, or were not sufficiently connected to the enforcement action against the plaintiff.

For instance, in O’Handley v. Weber,[108] the 9th U.S. Circuit Court of Appeals dealt directly with the question of the coercion theory in the context of social-media companies moderating misinformation, allegedly at the behest of California’s Office of Elections Cybersecurity (OEC). The OEC flagged allegedly misleading posts on Facebook and Twitter, and the social-media companies removed most of those flagged posts.[109] First, the court found there were no threats from the OEC like those in Carlin, nor any incentive offered to take the posts down.[110] The court then distinguished between “attempts to convince and attempts to coerce,”[111] noting that “[a] private party can find the government’s stated reasons for making a request persuasive, just as it can be moved by any other speaker’s message. The First Amendment does not interfere with this communication so long as the intermediary is free to disagree with the government and to make its own independent judgment about whether to comply with the government’s request.”[112] The court concluded that the OEC did not pressure Twitter to take any particular action against the plaintiff, but went even further by emphasizing that, even if its actions could be seen as a specific request to remove his post, Twitter’s compliance was “purely optional.”[113] In other words, if there is no threat in a government actor’s request to take down content, then it is not impermissible coercion or encouragement.

In Hart v. Facebook,[114] the plaintiff argued that the federal government defendants had—through threats of removing Section 230 immunity and antitrust investigations, as well as comments by President Joe Biden stating that social-media companies were “killing people” by not policing misinformation about COVID-19—coerced Facebook and Twitter into removing his posts.[115] The plaintiff also pointed to recommendations from Biden and an advisory from Surgeon General Vivek Murthy as further evidence of coercion or encouragement. The court rejected this evidence, stating that “the government’s vague recommendations and advisory opinions are not coercion. Nor can coercion be inferred from President Biden’s comment that social media companies are ‘killing people’… A President’s one-time statement about an industry does not convert into state action all later decisions by actors in that industry that are vaguely in line with the President’s preferences.”[116] But even more importantly, the court found that there was no connection between the allegations of coercion and the removal of his particular posts: “Hart has not alleged any connection between any (threat of) agency investigation and Facebook and Twitter’s decisions… even if Hart had plausibly pleaded that the Federal Defendants exercised coercive power over the companies’ misinformation policies, he still fails to specifically allege that they coerced action as to him.”[117]

Other First Amendment cases against social-media companies alleging coercion or encouragement from state actors have been dismissed for reasons similar to those in Hart.[118] In Missouri et al. v. Biden, et al.,[119] the U.S. District Court for the Western District of Louisiana became the first court to find social-media companies could be state actors for purposes of the First Amendment due to a coercion or encouragement theory. After surveying (most of the same) cases as above, the court found that:

Here, Plaintiffs have clearly alleged that Defendants attempted to convince social-media companies to censor certain viewpoints. For example, Plaintiffs allege that Psaki demanded the censorship of the “Disinformation Dozen” and publicly demanded faster censorship of “harmful posts” on Facebook. Further, the Complaint alleges threats, some thinly veiled and some blatant, made by Defendants in an attempt to effectuate its censorship program. One such alleged threat is that the Surgeon General issued a formal “Request for Information” to social-media platforms as an implied threat of future regulation to pressure them to increase censorship. Another alleged threat is the DHS’s publishing of repeated terrorism advisory bulletins indicating that “misinformation” and “disinformation” on social-media platforms are “domestic terror threats.” While not a direct threat, equating failure to comply with censorship demands as enabling acts of domestic terrorism through repeated official advisory bulletins is certainly an action social-media companies would not lightly disregard. Moreover, the Complaint contains over 100 paragraphs of allegations detailing “significant encouragement” in private (i.e., “covert”) communications between Defendants and social-media platforms.

The Complaint further alleges threats that far exceed, in both number and coercive power, the threats at issue in the above-mentioned cases. Specifically, Plaintiffs allege and link threats of official government action in the form of threats of antitrust legislation and/or enforcement and calls to amend or repeal Section 230 of the CDA with calls for more aggressive censorship and suppression of speakers and viewpoints that government officials disfavor. The Complaint even alleges, almost directly on point with the threats in Carlin and Backpage, that President Biden threatened civil liability and criminal prosecution against Mark Zuckerberg if Facebook did not increase censorship of political speech. The Court finds that the Complaint alleges significant encouragement and coercion that converts the otherwise private conduct of censorship on social-media platforms into state action, and is unpersuaded by Defendants’ arguments to the contrary.[120]

There is obvious tension between Missouri v. Biden and the O’Handley and Hart opinions. As noted above, the Missouri v. Biden court did attempt to incorporate O’Handley into its opinion. That court tried to distinguish O’Handley on the grounds that the OEC’s conduct at issue was a mere advisory, whereas the federal defendants in Missouri v. Biden made threats against the plaintiffs.[121]

It is perhaps plausible that Hart can also be read as consistent with Missouri v. Biden, in the sense that while Hart failed to allege sufficient facts of coercion/encouragement or a connection with his specific removal, the plaintiffs in Missouri v. Biden did. Nonetheless, the Missouri v. Biden court accepted many factual arguments that were rejected in Hart, such as those about the relevance of certain statements made by President Biden and his press secretary; threats to revoke Section 230 liability protections; and threats to start antitrust proceedings. Perhaps the difference is that the factual allegations in Missouri v. Biden were substantially longer and more detailed than those in Hart. And while the Missouri v. Biden court did not address it in its First Amendment section, it did note that the social-media companies’ censorship actions generated sufficient injury-in-fact to the plaintiffs to establish standing.[122] In other words, it could just be that what makes the difference is the better factual pleading in Missouri v. Biden, due to more available revelations of government coercion and encouragement.[123]

On the other hand, there may be value to cabining Missouri v. Biden with some of the criteria in O’Handley and Hart. For instance, there could be value in the government having the ability to share information with social-media companies and make requests to review certain posts and accounts that may purvey misinformation. O’Handley emphasizes that there is a difference between convincing and coercing. This is not only important for dealing with online misinformation, but with things like terrorist activity on the platforms. Insofar as Missouri v. Biden is too lenient in allowing cases to go forward, this may be a fruitful distinction for courts to clarify.[124]

Similarly, the requirement in Hart that a specific moderation decision be connected to a particular government action is very important to limit the universe of activity subject to First Amendment analysis. The Missouri v. Biden court didn’t deal sufficiently with whether the allegations of coercion and encouragement were connected to the plaintiffs’ content and accounts being censored. As Missouri v. Biden reaches the merits stage of the litigation, the court will also need to clarify the evidence needed to infer state action, assuming there is no explicit admission of direction by state actors.[125]

Under the law & economics theory laid out in Part I, the coercion or encouragement exception to the strong private/state action distinction is particularly important. The benefits of private social-media companies using their editorial judgment to remove misinformation in response to user and advertiser demand is significantly reduced when the government coerces, encourages, or otherwise induces moderation decisions. In such cases, the government is essentially engaged in covert regulation by deciding for private actors what is true and what is false. This is inconsistent with a “marketplace of ideas” or the “marketplace for speech governance” that the First Amendment’s state-action doctrine protects.

There is value, however, to limiting the Missouri v. Biden holding to ensure that not all requests by government agents automatically transform moderation decisions into state action, and in connecting coercion or encouragement to particular allegations of censorship. Government actors, as much as private actors, should be able to alert social-media companies to the presence of misinformation and even persuade social-media companies to act in certain cases, so long as that communication doesn’t amount to a threat. This is consistent with a “marketplace for speech governance.” Moreover, social-media companies shouldn’t be considered state actors for all moderation decisions, or even all moderation decisions regarding misinformation, due to government coercion or encouragement in general. Without a nexus between the coercion or encouragement and a particular moderation decision, social-media companies would lose the ability to use their editorial judgment on a wide variety of issues in response to market demand, to the detriment of their users and advertisers.

C.      Joint Action or Symbiotic Relationship

There is also state action for the purposes of the First Amendment when the government acts jointly with a private actor,[126] when there is a “symbiotic relationship” between the government and a private actor,[127] or when there is “inextricable entwinement” between a private actor and the government.[128] These theories are not entirely distinct from one another,[129] and they are probably easiest to define through examples.[130]

In Lugar v. Edmondson Oil Co., the plaintiff, an operator of a truck stop, was indebted to his supplier.[131] The defendant was a creditor that used a Virginia statute to obtain a prejudgment attachment of the truck-stop operator’s property, which was then executed by the county sheriff.[132] A hearing was held 34 days later, pursuant to the relevant statute.[133] The levy at issue was dismissed because the creditor failed to satisfy the statute. The plaintiff then brought a Section 1983 claim against the defendant on grounds that it had violated the plaintiff’s Due Process rights by taking his property without first providing him with a hearing. The Supreme Court took the case to clarify how the state-action doctrine applied in such matters. The Court, citing previous cases, stated that:

Private persons, jointly engaged with state officials in the prohibited action, are acting “under color” of law for purposes of the statute. To act “under color” of law does not require that the accused be an officer of the State. It is enough that he is a willful participant in joint activity with the State or its agents.[134]

The Court also noted that “we have consistently held that a private party’s joint participation with state officials in the seizure of disputed property is sufficient to characterize that party as a ‘state actor.’”[135] Accordingly, the Court found that the defendant’s use of the prejudgment statute was state action that violated Due Process.[136]

In Burton v. Wilmington Parking Authority,[137] the Court heard a racial-discrimination case in which the question was whether state action was involved when a restaurant refused to serve black customers in a space leased from a publicly owned building attached to a public parking garage.[138] The Court determined that it was state action, noting that “[i]t cannot be doubted that the peculiar relationship of the restaurant to the parking facility in which it is located confers on each an incidental variety of mutual benefits… Addition of all these activities, obligations and responsibilities of the Authority, the benefits mutually conferred, together with the obvious fact that the restaurant is operated as an integral part of a public building devoted to a public parking service, indicates that degree of state participation and involvement in discriminatory action which it was the design of the Fourteenth Amendment to condemn.”[139] While the Court didn’t itself call this theory the “symbiotic relationship” test in Burton, later Court opinions did exactly that.[140]

Brentwood Academy v. Tennessee Secondary School Athletic Association concerned a dispute between a private Christian school and the statewide athletics association governing interscholastic sports over a series of punishments for alleged “undue influence” in recruiting athletes.[141] The central issue was whether the athletic association was a state actor. The Court analyzed whether state actors were so “entwined” with the private actors in the association as to make the resulting action state action.[142] After reviewing the record, the Court noted that 84% of the members of the athletic association were public schools and that the association’s rules were made by representatives from those schools.[143] The Court concluded that the “entwinement down from the State Board is therefore unmistakable, just as the entwinement up from the member public schools is overwhelming. Entwinement will support a conclusion that an ostensibly private organization ought to be charged with a public character and judged by constitutional standards; entwinement to the degree shown here requires it.”[144]

Other cases have also considered circumstances in which government regulation, combined with other government actions, can create a situation where private action is considered that of the government. In Skinner v. Railway Labor Executives Association,[145] the Court considered a situation where private railroads engaged in drug testing of employees, pursuant to a federal regulation that authorized them to adopt a policy of drug testing and preempted state laws restricting testing.[146] The Court stated that “[t]he fact that the Government has not compelled a private party to perform a search does not, by itself, establish that the search is a private one. Here, specific features of the regulations combine to convince us that the Government did more than adopt a passive position toward the underlying private conduct.”[147] The Court found the preemption of state law particularly important, finding “[t]he Government has removed all legal barriers to the testing authorized by Subpart D and indeed has made plain not only its strong preference for testing, but also its desire to share the fruits of such intrusions.”[148]

Each of these theories has been pursued by litigants who have had social-media posts or accounts removed by online platforms due to alleged misinformation, including in the O’Handley and Hart cases discussed earlier.

For instance, in O’Handley, the 9th Circuit rejected the argument that Twitter was a state actor under the joint-action test. The court stated there were two ways to prove joint action: either by a conspiracy theory that requires a “meeting of the minds” to violate constitutional rights, or by a “willful participant” theory that requires “a high degree of cooperation between private parties and state officials.”[149] The court rejected the conspiracy theory, stating there was no meeting of the minds to violate constitutional rights because Twitter had its own independent interest in “not allowing users to leverage its platform to mislead voters.”[150] The court also rejected the willful-participant theory because Twitter was free to consider and reject flags made by the OEC in the Partner Support Portal under its own understanding of its policy on misinformation.[151] The court analogized the case to Mathis v. Pac. Gas & Elec. Co.,[152] finding this “closely resembles the ‘consultation and information sharing’ that we held did not rise to the level of joint action.”[153] The court concluded that “this was an arm’s-length relationship, and Twitter never took its hands off the wheel.”[154]

Similarly, in Hart, the U.S. District Court for the Northern District of California rejected the joint-action theory as applied to Twitter and Facebook. The court found that much of the complained-of conduct by Facebook predated the communications with the federal defendants about misinformation, making it unlikely that there was a “meeting of the minds” to deprive the plaintiff of his constitutional rights.[155] The court also found “the Federal Defendants’ statements… far too vague and precatory to suggest joint action,” adding that recommendations and advisories are both vague and unenforceable.[156] Other courts followed similar reasoning in rejecting First Amendment claims against social-media companies.[157]

Finally, in Children’s Health Defense v. Facebook,[158] the court considered whether Section 230, much like the regulation at issue in Skinner, could make Facebook a joint actor with the state when it removes misinformation. The U.S. District Court for the Northern District of California distinguished Skinner, citing a previous case finding that “[u]nlike the regulations in Skinner, Section 230 does not require private entities to do anything, nor does it give the government a right to supervise or obtain information about private activity.”[159]

In Missouri v. Biden, a federal district court found state action under the joint-action or entwinement theory for the first time. The court explained:

Here, Plaintiffs have plausibly alleged joint action, entwinement, and/or that specific features of Defendants’ actions combined to create state action. For example, the Complaint alleges that “[o]nce in control of the Executive Branch, Defendants promptly capitalized on these threats by pressuring, cajoling, and openly colluding with social-media companies to actively suppress particular disfavored speakers and viewpoints on social media.” Specifically, Plaintiffs allege that Dr. Fauci, other CDC officials, officials of the Census Bureau, CISA, officials at HHS, the state department, and members of the FBI actively and directly coordinated with social-media companies to push, flag, and encourage censorship of posts the Government deemed “Mis, Dis, or Malinformation.”[160]

The court also distinguished O’Handley, finding there was more than an “arms-length relationship” between the federal defendants and the social-media companies:

Plaintiffs allege a formal government-created system for federal officials to influence social-media censorship decisions. For example, the Complaint alleges that federal officials set up a long series of formal meetings to discuss censorship, setting up privileged reporting channels to demand censorship, and funding and establishing federal-private partnership to procure censorship of disfavored viewpoints. The Complaint clearly alleges that Defendants specifically authorized and approved the actions of the social-media companies and gives dozens of examples where Defendants dictated specific censorship decisions to social-media platforms. These allegations are a far cry from the complained-of action in O’Handley: a single message from an unidentified member of a state agency to Twitter.[161]

Finally, the court also found similarities between Skinner and Missouri v. Biden that would support a finding of state action:

Section 230 of the CDA purports to preempt state laws to the contrary, thus removing all legal barriers to the censorship immunized by Section 230. Federal officials have also made plain a strong preference and desire to “share the fruits of such intrusions,” showing “clear indices of the Government’s encouragement, endorsement, and participation” in censorship, which “suffice to implicate the [First] Amendment.”

The Complaint further explicitly alleges subsidization, authorization, and preemption through Section 230, stating: “[T]hrough Section 230 of the Communications Decency Act (CDA) and other actions, the federal government subsidized, fostered, encouraged, and empowered the creation of a small number of massive social-media companies with disproportionate ability to censor and suppress speech on the basis of speaker, content, and viewpoint.” Section 230 immunity constitutes the type of “tangible financial aid,” here worth billions of dollars per year, that the Supreme Court identified in Norwood, 413 U.S. at 466, 93 S.Ct. 2804. This immunity also “has a significant tendency to facilitate, reinforce, and support private” censorship. Id. Combined with other factors such as the coercive statements and significant entwinement of federal officials and censorship decisions on social-media platforms, as in Skinner, this serves as another basis for finding government action.[162]

Again, there is tension among these opinions on the intersection of social media and the First Amendment under the joint-action or symbiotic-relationship test. But there are ways to read the cases consistently. First, there were far more factual allegations in Missouri v. Biden than in the O’Handley, Hart, or Children’s Health Defense cases, particularly regarding how involved the federal defendants were in prodding social-media companies to moderate misinformation. There is even a way to read the different legal conclusions on Section 230 and Skinner consistently. The court in Missouri v. Biden made clear that it wasn’t Section 230 alone that made the case like Skinner, but the combination of Section 230 immunity with other factors present:

The Defendants’ alleged use of Section 230’s immunity—and its obvious financial incentives for social-media companies—as a metaphorical carrot-and-stick combined with the alleged back-room meetings, hands-on approach to online censorship, and other factors discussed above transforms Defendants’ actions into state action. As Defendants note, Section 230 was designed to “reflect a deliberate absence of government involvement in regulating online speech,” but has instead, according to Plaintiffs’ allegations, become a tool for coercion used to encourage significant joint action between federal agencies and social-media companies.[163]

While there could be dangers inherent in treating Section 230 alone as an argument that social-media companies are state actors, the court appears inclined to say it is not Section 230 but rather the threat of removing it, along with the other dealings and communications from the federal government, that makes this state action.

Under the law & economics theory outlined in Part I, the joint-action or symbiotic-relationship test is also an important exception to the general dichotomy between private and state action. In particular, it is important to deter state officials from engaging in surreptitious speech regulation by covertly interjecting themselves into social-media companies’ moderation decisions. The allegations in Missouri v. Biden, if proven true, do appear to outline a vast and largely hidden infrastructure through which federal officials use backchannels to routinely discuss social-media companies’ moderation decisions and often pressure them into removing disfavored content in the name of misinformation. This kind of government intervention into the “marketplace of ideas” and the “market for private speech governance” takes away companies’ ability to respond freely to market incentives in moderating misinformation, and replaces their own editorial discretion with the opinions of government officials.

III.    Applying the First Amendment to Government Regulation of Online Misinformation

A number of potential consequences might stem from a plausible claim of state action levied against online platforms using one of the theories described above. Part III.A will explore the likely result, which is that a true censorship-by-deputization scheme enacted through social-media companies would be found to violate the First Amendment. Part III.B will consider the question of remedies: even if there is a First Amendment violation, the content or accounts that were removed may not necessarily be restored. Part III.C will then offer alternative ways for the government to deal with the problem of online misinformation without offending the First Amendment.

A.      If State Action Is Found, Removal of Content Under Misinformation Policies Would Violate the First Amendment

At a high level, First Amendment jurisprudence does allow for government regulation of speech in limited circumstances. The threshold questions are whether the type of speech at issue is protected and whether the regulation is content-based.[164] If the speech is protected and the regulation is content-based, then the government must show the state action is narrowly tailored to a compelling governmental interest: this is the so-called “strict scrutiny” standard.[165] A compelling governmental interest is the highest interest the state has, something considered necessary or crucial, beyond simply legitimate or important.[166] “Narrow tailoring” means the regulation uses the least-restrictive means “among available, effective alternatives.”[167] While not an impossible standard for the government to meet, “[s]trict scrutiny leave[s] few survivors.”[168] Moreover, prior restraints of speech, which restrict speech before publication, are presumptively unconstitutional.[169]

Only for content- and viewpoint-neutral “time, place, and manner restrictions” will regulation of protected speech receive less than strict scrutiny.[170] In those cases, as long as the regulation serves a “significant” government interest, and there are alternative channels available for the expression, the regulation is permissible.[171]

There are also situations where speech regulation—whether because the regulation aims at conduct but has speech elements or because the speech is not fully protected for some other reason—receives “intermediate scrutiny.”[172] In those cases, the government must show the state action is narrowly tailored to an important or substantial governmental interest, and burdens no more speech than necessary.[173] Beyond the levels of scrutiny to which speech regulation is subject, state actions involving speech also may be struck down for overbreadth[174] or vagueness.[175] Together, these doctrines work to protect a very large sphere of speech, beyond what is protected in most jurisdictions around the world.

The initial question that arises with alleged misinformation is how to even define it. Neither social-media companies nor the government actors on whose behalf they may be acting are necessarily experts in misinformation. This can result in “void-for-vagueness” problems.

In Høeg v. Newsom,[176] the U.S. District Court for the Eastern District of California considered California’s state law AB 2098, which would charge medical doctors with “unprofessional conduct” and subject them to discipline if they shared with patients “false information that is contradicted by contemporary scientific consensus contrary to the standard of care” as part of treatment or advice.[177] The court stated that “[a] statute is unconstitutionally vague when it either ‘fails to provide a person of ordinary intelligence fair notice of what is prohibited, or is so standardless that it authorizes or encourages seriously discriminatory enforcement’”[178] and that “[v]ague statutes are particularly objectionable when they ‘involve sensitive areas of First Amendment freedoms” because “they operate to inhibit the exercise of those freedoms.’”[179] The court rejected the invitation to apply a lower vagueness standard typically used for technical language because “contemporary scientific consensus” has no established technical meaning in the scientific community.[180] The court also asked a series of questions that would be particularly relevant to social-media companies acting on behalf of government actors in efforts to combat misinformation:

[W]ho determines whether a consensus exists to begin with? If a consensus does exist, among whom must the consensus exist (for example practicing physicians, or professional organizations, or medical researchers, or public health officials, or perhaps a combination)? In which geographic area must the consensus exist (California, or the United States, or the world)? What level of agreement constitutes a consensus (perhaps a plurality, or a majority, or a supermajority)? How recently in time must the consensus have been established to be considered “contemporary”? And what source or sources should physicians consult to determine what the consensus is at any given time (perhaps peer-reviewed scientific articles, or clinical guidelines from professional organizations, or public health recommendations)?[181]

The court noted that defining the consensus with reference to pronouncements from the U.S. Centers for Disease Control and Prevention or the World Health Organization would be unhelpful, as those entities changed their recommendations on several important health issues over the course of the COVID-19 pandemic:

Physician plaintiffs explain how, throughout the course of the COVID-19 pandemic, scientific understanding of the virus has rapidly and repeatedly changed. (Høeg Decl. ¶¶ 15-29; Duriseti Decl. ¶¶ 7-15; Kheriaty Decl. ¶¶ 7-10; Mazolewski Decl. ¶¶ 12-13.) Physician plaintiffs further explain that because of the novel nature of the virus and ongoing disagreement among the scientific community, no true “consensus” has or can exist at this stage. (See id.) Expert declarant Dr. Verma similarly explains that a “scientific consensus” concerning COVID-19 is an illusory concept, given how rapidly the scientific understanding and accepted conclusions about the virus have changed. Dr. Verma explains in detail how the so-called “consensus” has developed and shifted, often within mere months, throughout the COVID-19 pandemic. (Verma Decl. ¶¶ 13-42.) He also explains how certain conclusions once considered to be within the scientific consensus were later proved to be false. (Id. ¶¶ 8-10.) Because of this unique context, the concept of “scientific consensus” as applied to COVID-19 is inherently flawed.[182]

As a result, the court determined that “[b]ecause the term ‘scientific consensus’ is so ill-defined, physician plaintiffs are unable to determine if their intended conduct contradicts the scientific consensus, and accordingly ‘what is prohibited by the law.’”[183] The court upheld a preliminary injunction against the law because of a high likelihood of success on the merits.[184]

Assuming the government could define misinformation in a way that wasn’t vague, the next question is what level of First Amendment scrutiny such edicts would receive. It is clear, for several reasons, that regulation of online misinformation would receive, and fail, the highest form of constitutional scrutiny.

First, the threat of government censorship of speech through social-media misinformation policies could be considered a prior restraint. Prior restraints occur when the government (or actors on its behalf) restricts speech before publication. As the Supreme Court has put it many times, “any system of prior restraints of expression comes to this Court bearing a heavy presumption against its constitutional validity.”[185]

In Missouri v. Biden, the court found the plaintiffs had plausibly alleged prior restraints against their speech, and noted that “[t]hreatening penalties for future speech goes by the name of ‘prior restraint,’ and a prior restraint is the quintessential first-amendment violation.”[186] The court found it relevant that social-media companies could “silence” speakers’ voices at a “mere flick of the switch,”[187] and noted this could amount to “a prior restraint by preventing a user of the social-media platform from voicing their opinion at all.”[188] The court further stated that “bans, shadow-bans, and other forms of restrictions on Plaintiffs’ social-media accounts, are… de facto prior restraints, [a] clear violation of the First Amendment.”[189]

Second, it is clear that any restriction on speech based upon its truth or falsity would be a content-based regulation, and likely a viewpoint-based regulation, as it would require the state actor to take a side on a matter of dispute.[190] Content-based regulation requires strict scrutiny, and a reasonable case can be made that viewpoint-based regulation of speech is per se inconsistent with the First Amendment.[191]

In Missouri v. Biden, the court noted that “[g]overnment action, aimed at the suppression of particular views on a subject which discriminates on the basis of viewpoint, is presumptively unconstitutional.”[192] The court found that “[p]laintiffs allege a regime of censorship that targets specific viewpoints deemed mis-, dis-, or malinformation by federal officials. Because Plaintiffs allege that Defendants are targeting particular views taken by speakers on a specific subject, they have alleged a clear violation of the First Amendment, i.e., viewpoint discrimination.”[193]

Third, even assuming there is clearly false speech that government agents (and social-media companies acting on their behalf) could identify, false speech presumptively receives full First Amendment protection. In United States v. Alvarez,[194] the Supreme Court stated that while older cases may have suggested that false speech does not receive full protection, those statements were “confined to the few ‘historic and traditional categories [of expression] long familiar to the bar.’”[195] In other words, there is no “general exception to the First Amendment for false statements.”[196] Thus, as protected speech, any regulation of false speech, as such, would run into strict scrutiny.

In order to survive First Amendment scrutiny, government agents acting through social-media companies would have to demonstrate that the targeted speech falls within one of the categories of low-value speech the Supreme Court has recognized as outside the protection of the First Amendment.[197] These exceptions include defamation, fraud, the tort of false light, false statements to government officials, perjury, falsely representing oneself as speaking for the government (and impersonation), and other similar examples of fraud or false speech integral to criminal conduct.[198]

But the Alvarez Court noted that, even in areas where false speech does not receive protection, such as fraud and defamation, the Supreme Court has found the First Amendment requires that claims of fraud be based on more than falsity alone.[199]

When it comes to fraud,[200] for instance, the Supreme Court has repeatedly noted that the First Amendment offers no protection.[201] But “[s]imply labeling an action one for ‘fraud’… will not carry the day.”[202] Prophylactic rules aimed at protecting the public from the (sometimes fraudulent) solicitation of charitable donations, for instance, have on several occasions been found by the Court to be unconstitutional prior restraints.[203] The Court has found that “in a properly tailored fraud action the State bears the full burden of proof. False statement alone does not subject a fundraiser to fraud liability… Exacting proof requirements… have been held to provide sufficient breathing room for protected speech.”[204]

As for defamation,[205] the Supreme Court found in New York Times v. Sullivan[206] that “[a]uthoritative interpretations of the First Amendment guarantees have consistently refused to recognize an exception for any test of truth—whether administered by judges, juries, or administrative officials—and especially one that puts the burden of proving truth on the speaker.”[207] In Sullivan, the Court struck down an Alabama defamation statute, finding that in situations dealing with public officials, the mens rea must be actual malice: knowledge that the statement was false or reckless disregard for whether it was false.[208]

Since none of these exceptions would apply to online misinformation about medicine or elections, social-media companies’ actions against such misinformation on behalf of the government would likely fail strict scrutiny. While it is possible that a court would find protecting public health or election security to be a compelling interest, the government would still face great difficulty showing that a ban on false information is narrowly tailored. It is highly unlikely that a ban on false information, as such, will ever be the least-restrictive means of controlling a harm. As the Court put it in Alvarez:

The remedy for speech that is false is speech that is true… Freedom of speech and thought flows not from the beneficence of the state but from the inalienable rights of the person. And suppression of speech by the government can make exposure of falsity more difficult, not less so. Society has the right and civic duty to engage in open, dynamic, rational discourse. These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.[209]

As argued above in Part I, a vibrant marketplace of ideas requires that individuals have the ability to express their ideas, so that the best ideas can win out. This means counter-speech is a better tool than government censorship for helping society determine what is true. The First Amendment’s protection against government intervention in the marketplace of ideas thus promotes a better answer to online misinformation. A finding that government actors can’t use social-media companies to censor protected speech through prior restraints and viewpoint discrimination, based on vague definitions of misinformation, is consistent with an understanding of the world in which information is dispersed.

B.      The Problem of Remedies for Social-Media ‘Censorship’: The First Amendment Still Only Applies to Government Action

There is a problem, however, for plaintiffs who win cases against social-media companies that are found to be state actors when they remove posts and accounts due to alleged misinformation: the remedies are limited.

First, once the state action is removed through injunction, social-media companies would be free to continue to moderate misinformation as they see fit, free from any plausible First Amendment claim. For instance, in Carlin Communications, the 9th Circuit found that, once the state action was enjoined, the telecommunications company was again free to determine whether or not to extend its service to the plaintiff. As the court put it:

Mountain Bell insists that its new policy reflected its independent business judgment. Carlin argues that Mountain Bell was continuing to yield to state threats of prosecution. However, the factual question of Mountain Bell’s true motivations is immaterial.

This is true because, inasmuch as the state under the facts before us may not coerce or otherwise induce Mountain Bell to deprive Carlin of its communication channel, Mountain Bell is now free to once again extend its 976 service to Carlin. Our decision substantially immunizes Mountain Bell from state pressure to do otherwise. Should Mountain Bell not wish to extend its 976 service to Carlin, it is also free to do that. Our decision modifies its public utility status to permit this action. Mountain Bell and Carlin may contract, or not contract, as they wish.[210]

This is consistent with the district court’s actions in Missouri v. Biden. There, the court granted the motion for a preliminary injunction, but the injunction applied only to government action, not to the social-media companies at all.[211] For instance, the injunction prohibits a number of named federal officials and agencies from:

(1) meeting with social-media companies for the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms;

(2) specifically flagging content or posts on social-media platforms and/or forwarding such to social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;

(3) urging, encouraging, pressuring, or inducing in any manner social-media companies to change their guidelines for removing, deleting, suppressing, or reducing content containing protected free speech;

(4) emailing, calling, sending letters, texting, or engaging in any communication of any kind with social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;

(5) collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group for the purpose of urging, encouraging, pressuring, or inducing in any manner removal, deletion, suppression, or reduction of content posted with social-media companies containing protected free speech;

(6) threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech;

(7) taking any action such as urging, encouraging, pressuring, or inducing in any manner social-media companies to remove, delete, suppress, or reduce posted content protected by the Free Speech Clause of the First Amendment to the United States Constitution;

(8) following up with social-media companies to determine whether the social-media companies removed, deleted, suppressed, or reduced previous social-media postings containing protected free speech;

(9) requesting content reports from social-media companies detailing actions taken to remove, delete, suppress, or reduce content containing protected free speech; and

(10) notifying social-media companies to Be on The Lookout (BOLO) for postings containing protected free speech.[212]

In other words, a social-media company would not necessarily even be required to reinstate accounts or posts of those who have been excluded under their misinformation policies. It would become a question of whether, responding to marketplace incentives sans government involvement, the social-media companies continue to find it in their interest to enforce such policies against those affected persons and associated content.

Another avenue for private plaintiffs may be a civil-rights claim under Section 1983.[213] If it can be proved that social-media companies participated in joint action with government officials to restrict First Amendment rights, it may be possible to collect damages from them, as well as from government officials.[214] Plaintiffs may struggle, however, to prove compensatory damages, which would require proof of harm. Categories of harm like physical injury aren’t relevant to social-media moderation policies, leaving things like diminished earnings or impairment of reputation. In most cases, it is likely that the damages to plaintiffs are de minimis and hardly worth the expense of filing suit. To receive punitive damages, plaintiffs would have to prove that “the defendant’s conduct is… motivated by evil motive or intent, or when it involves reckless or callous indifference to the federally protected rights of others.”[215] This seems like it would be difficult to establish against the social-media companies unless there was an admission in the record that those companies’ goal was to suppress rights, rather than to restrict misinformation in good faith or simply accede to government inducements.

The remedies available for constitutional violations in claims aimed at government officials are consistent with a theory of the First Amendment that prioritizes protecting the marketplace of ideas from intervention. While this leaves many plaintiffs with limited remedies against the social-media companies once the government actions are enjoined or deterred, it does return the situation to one in which social-media companies can once again compete freely in the market for speech governance, including governance of misinformation.

C.      What Can the Government Do Under the First Amendment in Response to Misinformation on Social-Media Platforms?

If direct government regulation or implicit intervention through coercion or collusion with social-media companies is impermissible, the question may then arise as to what, exactly, the government can do to combat online misinformation.

The first option was already discussed in Part III.A in relation to Alvarez and narrow tailoring: counter-speech. Government agencies concerned about health or election misinformation could use social-media platforms to get their own message out. Those agencies could even amplify and target such counter-speech through advertising campaigns tailored to those most likely to share or receive misinformation.

Similarly, government agencies could create their own apps or social-media platforms to publicize information that counters alleged misinformation. While this may at first appear to be an unusual step, the federal government does, through the Corporation for Public Broadcasting, subsidize public television and public radio. If there is a fear of online misinformation, creating a platform where the government can promote its own point of view could combat online misinformation in a way that doesn’t offend the First Amendment.

Additionally, as discussed above in Part II.B in relation to O’Handley and the distinction between convincing and coercing, the government may flag alleged misinformation and even attempt to persuade social-media companies to act, so long as such communications involve no implicit or explicit threats of regulation or prosecution if nothing is done. The U.S. District Court for the Western District of Louisiana distinguished between constitutional government speech and unconstitutional coercion or encouragement in the memorandum accompanying its preliminary injunction in Missouri v. Biden:

Defendants also argue that a preliminary injunction would restrict the Defendants’ right to government speech and would transform government speech into government action whenever the Government comments on public policy matters. The Court finds, however, that a preliminary injunction here would not prohibit government speech… The Defendants argue that by making public statements, this is nothing but government speech. However, it was not the public statements that were the problem. It was the alleged use of government agencies and employees to coerce and/or significantly encourage social-media platforms to suppress free speech on those platforms. Plaintiffs point specifically to the various meetings, emails, follow-up contacts, and the threat of amending Section 230 of the Communication Decency Act. Plaintiffs have produced evidence that Defendants did not just use public statements to coerce and/or encourage social-media platforms to suppress free speech, but rather used meetings, emails, phone calls, follow-up meetings, and the power of the government to pressure social-media platforms to change their policies and to suppress free speech. Content was seemingly suppressed even if it did not violate social-media policies. It is the alleged coercion and/or significant encouragement that likely violates the Free Speech Clause, not government speech, and thus, the Court is not persuaded by Defendants’ arguments here.[216]

As the court highlights, there is a special danger in government communications that remain opaque to the public. Requests for action from social-media companies on misinformation should all be public information and not conducted behind closed doors or in covert communications. Such transparency would make it much easier for the public and the courts to determine whether state actors are engaged in government speech or crossing the line into coercion or substantial encouragement to suppress speech.

On the other hand, laws like Florida’s recent SB 262[217] go beyond the delicate First Amendment balance that courts have tried to strike. That law would bar government officials from sharing any information with social-media companies regarding misinformation, limiting contacts to those concerning the removal of criminal content or accounts, or an investigation or inquiry to prevent imminent bodily harm, loss of life, or property damage.[218] While going beyond the First Amendment standard may be constitutional, these restrictions could be especially harmful when the government has information that may not otherwise be available to the public. As important as it is to restrict government intervention, it would harm the marketplace of ideas to prevent government participation altogether.

Finally, Section 230 reform efforts aimed at limiting immunity in instances where social-media companies have “red flag” knowledge of defamatory material would be another constitutional way to address misinformation.[219] For instance, if a social-media company were presented with evidence that a court or arbitrator has found certain statements to be untrue, it could be required to make reasonable efforts to take down such misinformation, and keep it down.

Such a proposal would have real-world benefits. For instance, in the recent litigation brought by Dominion Voting Systems against Fox News, the court found that the various factual claims about Dominion rigging the election for Joseph Biden were false.[220] While there was no final finding of liability because Fox and Dominion reached a settlement,[221] if Dominion were to present the court’s findings to a social-media company, the company would, under this proposal, have an obligation to remove content that repeats the claims the court found to be false. Similarly, an arbitrator’s finding that MyPillow CEO Mike Lindell’s claims that he had evidence of Chinese interference in the election were demonstrably false[222] could be enough to have those claims removed as well. And the recent finding that Rudy Giuliani is liable for defaming two Georgia election workers could likewise support removal of content repeating his false claims about them.[223]

However, these benefits may be limited by the fact that not every defamation claim resolves with a court finding a statement false. Some cases settle before they get that far, and the underlying claims remain unproven allegations. And, as discussed above, defamation itself is not easy to prove, especially for public figures, who must also show “actual malice.”[224] As a result, many cases won’t even be brought. This means there could be quite a bit of defamatory information put out into the world that courts or arbitrators are unlikely to have occasion to consider.

On the other hand, making a social-media company responsible for removing allegedly defamatory information in the absence of some competent legal authority finding the underlying claim false would be ripe for abuse and could have drastic chilling effects on speech. Thus, any Section 230 reform must be limited to those occasions where a court or arbitrator of competent authority (and with some finality of judgment) has spoken on the falsity of a statement.

Conclusion

There is an important distinction in First Amendment jurisprudence between private and state action. To promote a free market in ideas, we must also protect private speech governance, like that of social-media companies. Private actors are best placed to balance the desires of people for speech platforms and the regulation of misinformation.

But when the government puts its thumb on the scale by pressuring those companies to remove content or users in the name of misinformation, there is no longer a free marketplace of ideas. The First Amendment has exceptions in its state-action doctrine that would allow courts to enjoin government actors from initiating coercion of or collusion with private actors to do that which would be illegal for the government to do itself. Government censorship by deputization is no more allowed than direct regulation of alleged misinformation.

There are, however, things the government can do to combat misinformation, including counter-speech and nonthreatening communications with social-media platforms. Section 230 could also be modified to require the takedown of adjudicated misinformation in certain cases.

At the end of the day, the government’s role in defining or policing misinformation is necessarily limited in our constitutional system. The production of true knowledge in the marketplace of ideas may not be perfect, but it is the least bad system we have yet created.

[1] West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943).

[2] United States v. Alvarez, 567 U.S. 709, 728 (2012).

[3] See Amanda Seitz, Disinformation Board to Tackle Russia, Migrant Smugglers, Associated Press (Apr. 28, 2022), https://apnews.com/article/russia-ukraine-immigration-media-europe-misinformation-4e873389889bb1d9e2ad8659d9975e9d.

[4] See, e.g., Rep. Doug LaMalfa, Brave New World? Orwellian ‘Disinformation Governance Board’ Goes Against Nation’s Principles, The Hill (May 4, 2022), https://thehill.com/opinion/congress-blog/3476632-brave-new-world-orwellian-disinformation-governance-board-goes-against-nations-principles; Letter to Secretary Mayorkas from Ranking Members of the House Committee on Oversight and Reform (Apr. 29, 2022), available at https://oversight.house.gov/wp-content/uploads/2022/04/Letter-to-DHS-re-Disinformation-Governance-Board-04292022.pdf (stating that “DHS is creating the Orwellian-named ‘Disinformation Governance Board’”); Jon Jackson, Joe Biden’s Disinformation Board Likened to Orwell’s ‘Ministry of Truth’, Newsweek (Apr. 29, 2022), https://www.newsweek.com/joe-bidens-disinformation-board-likened-orwells-ministry-truth-1702190.

[5] See Geneva Sands, DHS Shuts Down Disinformation Board Months After Its Efforts Were Paused, CNN (Aug. 24, 2022), https://www.cnn.com/2022/08/24/politics/dhs-disinformation-board-shut-down/index.html.

[6] For an example of this type of hearing, see Preserving Free Speech and Reining in Big Tech Censorship, Hearing before the U.S. House Energy and Commerce Subcommittee on Communications and Technology (Mar. 28, 2023), https://www.congress.gov/event/118th-congress/house-event/115561.

[7] See Ken Klippenstein & Lee Fang, Truth Cops: Leaked Documents Outline DHS’s Plans to Police Disinformation, The Intercept (Oct. 31, 2022), https://theintercept.com/2022/10/31/social-media-disinformation-dhs.

[8] See Matt Taibbi, Capsule Summaries of all Twitter Files Threads to Date, With Links and a Glossary, Racket News (last updated Mar. 17, 2023), https://www.racket.news/p/capsule-summaries-of-all-twitter. For evidence that Facebook received similar pressure from and/or colluded with government officials, see Robby Soave, Inside the Facebook Files: Emails Reveal the CDC’s Role in Silencing COVID-19 Dissent, reason (Jan. 19, 2023), https://reason.com/2023/01/19/facebook-files-emails-cdc-covid-vaccines-censorship; Ryan Tracy, Facebook Bowed to White House Pressure, Removed Covid Posts, Wall St. J. (Jul. 28, 2023), https://www.wsj.com/articles/facebook-bowed-to-white-house-pressure-removed-covid-posts-2df436b7.

[9] See Missouri, et al. v. Biden, et al., No. 23-30445 (5th Cir. Sept. 8, 2023), slip op. at 2-14, available at https://www.ca5.uscourts.gov/opinions/pub/23/23-30445-CV0.pdf. Hearing on the Weaponization of the Federal Government, Hearing Before the Select Subcomm. on the Weaponization of the Fed. Gov’t (Mar. 30, 2023) (written testimony of D. John Sauer), available at https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/2023-03/Sauer-Testimony.pdf.

[10] See infra Part I.

[11] Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019).

[12] Cf. Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring) (“If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence”).

[13] See, e.g., Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting) (“Persecution for the expression of opinions seems to me perfectly logical. If you have no doubt of your premises or your power and want a certain result with all your heart you naturally express your wishes in law and sweep away all opposition. To allow opposition by speech seems to indicate that you think the speech impotent, as when a man says that he has squared the circle, or that you do not care whole-heartedly for the result, or that you doubt either your power or your premises. But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution. It is an experiment, as all life is an experiment. Every year if not every day we have to wager our salvation upon some prophecy based upon imperfect knowledge. While that experiment is part of our system I think that we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death, unless they so imminently threaten immediate interference with the lawful and pressing purposes of the law that an immediate check is required to save the country.”).

[14] Whitney v. California, 274 U.S. 357, 377 (1927). See also, Alvarez, 567 U.S. at 727-28 (“The remedy for speech that is false is speech that is true. This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth. The theory of our Constitution is ‘that the best test of truth is the power of the thought to get itself accepted in the competition of the market.’ The First Amendment itself ensures the right to respond to speech we do not like, and for good reason. Freedom of speech and thought flows not from the beneficence of the state but from the inalienable rights of the person. And suppression of speech by the government can make exposure of falsity more difficult, not less so. Society has the right and civic duty to engage in open, dynamic, rational discourse. These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.”) (citations omitted).

[15] See, e.g., Jonathan Peters, The “Sovereigns of Cyberspace” and State Action: The First Amendment’s Applications—or Lack Thereof—to Third-Party Platforms, 32 Berk. Tech. L. J. 989 (2017).

[16] See id. at 990, 992 (2017) (emphasizing the need to “talk about the [state action doctrine] until we settle on a view both conceptually and functionally right.”) (citing Charles L. Black, Jr., The Supreme Court, 1966 Term—Foreword: “State Action,” Equal Protection, and California’s Proposition 14, 81 Harv. L. Rev. 69, 70 (1967)).

[17] Or, in the framing of some: to allow too much harmful speech, including misinformation, if it drives attention to the platforms for more ads to be served. See Karen Hao, How Facebook and Google Fund Global Misinformation, MIT Tech. Rev. (Nov. 20, 2021), https://www.technologyreview.com/2021/11/20/1039076/facebook-google-disinformation-clickbait.

[18] See, e.g., Thomas Sowell, Knowledge and Decisions (1980).

[19] That is to say, the marketplace will not perfectly remove misinformation, but will navigate the tradeoffs inherent in limiting misinformation without empowering any one individual or central authority to determine what is true.

[20] See, e.g., Halleck, 139 S. Ct. at 1928; Denver Area Ed. Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727, 737 (1996) (plurality opinion); Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 566 (1995); Hudgens v. NLRB, 424 U.S. 507, 513 (1976).

[21] See Part II below.

[22] For instance, a person could order a visitor to leave their home for saying something offensive and the police would, if called upon, help to eject them as trespassers. In general, courts will enforce private speech restrictions that governments could never constitutionally enact. See Mark D. Rosen, Was Shelley v. Kraemer Incorrectly Decided? Some New Answers, 95 Cal. L. Rev. 451, 458-61 (2007) (listing a number of cases where the holding of Shelley v. Kraemer that court enforcement of private agreements was state action did not extend to the First Amendment, meaning that private agreements to limit speech are enforced).

[23] Halleck, 139 S. Ct. at 1928, 1934 (citations omitted) (emphasis added).

[24] Id. at 1930.

[25] Id. at 1930-31.

[26] It is worth noting that application of the right to editorial discretion to social-media companies is a question that will soon be before the Supreme Court in response to common-carriage laws passed in Florida and Texas that would require carriage of certain speech. The 5th and 11th U.S. Circuit Courts of Appeals have come to opposite conclusions on this point. Compare NetChoice, LLC v. Moody, 34 F.4th 1196 (11th Cir. 2022) (finding the right to editorial discretion was violated by Florida’s common-carriage law), with NetChoice, LLC v. Paxton, 49 F.4th 439 (5th Cir. 2022) (finding the right to editorial discretion was not violated by Texas’ common-carriage law).

[27] Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, 256 (1974).

[28] See id. at 247-54.

[29] Id. at 255 (citing Columbia Broadcasting System, Inc. v. Democratic National Committee, 412 U.S. 94, 117 (1973)).

[30] 47 U.S.C. §230(c).

[31] For a further discussion, see generally Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L. J. 26 (2022).

[32] Much of this section is adapted from Ben Sperry, An L&E Defense of the First Amendment’s Protection of Private Ordering, Truth on the Market (Apr. 23, 2021), https://truthonthemarket.com/2021/04/23/an-le-defense-of-the-first-amendments-protection-of-private-ordering.

[33] See F.A. Hayek, The Use of Knowledge in Society, 35 Am. Econ. Rev. 519 (1945).

[34] Id. at 520.

[35] See supra notes 13-14 and associated text. See also David Schultz, Marketplace of Ideas, First Amendment Encyclopedia, https://www.mtsu.edu/first-amendment/article/999/marketplace-of-ideas (last updated Jun. 2017 by David L. Hudson) (noting the history of the “marketplace of ideas” justification by the Supreme Court for the First Amendment’s protection of free speech from government intervention); J.S. Mill, On Liberty, Ch. 2 (1859); John Milton, Areopagitica (1644).

[36] Without delving too far into epistemology, some argue that this is even the case in the scientific realm. See, e.g., Thomas Kuhn, The Structure of Scientific Revolutions (1962). Even according to the perspective that some things are universally true across time and space, they still amount to a tiny fraction of what we call human knowledge. “Information” may be a better term for what economists are actually talking about.

[37] The Supreme Court has recently affirmed that the government may not compel speech by businesses subject to public-accommodation laws. See 303 Creative LLC v. Elenis, No. 21-476, slip op. (Jun. 30, 2023), available at https://www.supremecourt.gov/opinions/22pdf/21-476_c185.pdf. The Court will soon also have to determine whether common-carriage laws can be applied to social-media companies consistent with the First Amendment in the NetChoice cases noted above. See supra note 26.

[38] Sowell, supra note 18, at 240.

[39] Even those whom we most trust to have considered opinions and an understanding of the facts may themselves experience “expert failure”—a type of market failure—that is made likelier still when government rules serve to insulate such experts from market competition. See generally Roger Koppl, Expert Failure (2018).

[40] See, e.g., West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943) (“If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. If there are any circumstances which permit an exception, they do not now occur to us.”).

[41] See, e.g., Alvarez, 567 U.S. at 728 (“Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.”).

[42] Cf. Halleck, 139 S. Ct. at 1930-31.

[43] For a good explanation, see Jamie Whyte, Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?, ICLE White Paper (Sep. 2021), available at https://laweconcenter.org/wp-content/uploads/2021/09/Whyte-Polluting-Words-2021.pdf.

[44] R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1, 2 (1960) (“The traditional approach has tended to obscure the nature of the choice that has to be made. The question is commonly thought of as one in which A inflicts harm on B and what has to be decided is: how should we restrain A? But this is wrong. We are dealing with a problem of a reciprocal nature. To avoid the harm to B would inflict harm on A. The real question that has to be decided is: should A be allowed to harm B or should B be allowed to harm A? The problem is to avoid the more serious harm.”).

[45] See id. at 8-10.

[46] See generally David S. Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms (2016).

[47] For more on how and why social-media companies govern online speech, see Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).

[48] See Kate Conger, Tiffany Hsu, & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, The New York Times (Nov. 1, 2022), https://www.nytimes.com/2022/11/01/technology/elon-musk-twitter-advertisers.html (“[A]dvertisers — which provide about 90 percent of Twitter’s revenue — are increasingly grappling with Mr. Musk’s ownership of the platform. The billionaire, who is meeting advertising executives in New York this week, has spooked some advertisers because he has said he would loosen Twitter’s content rules, which could lead to a surge in misinformation and other toxic content.”); Ryan Mac & Tiffany Hsu, Twitter’s US Ad Sales Plunge 59% as Woes Continue, The New York Times (Jun. 5, 2023), https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html (“Six ad agency executives who have worked with Twitter said their clients continued to limit spending on the platform. They cited confusion over Mr. Musk’s changes to the service, inconsistent support from Twitter and concerns about the persistent presence of misleading and toxic content on the platform.”).

[49] See, e.g., Brian Fung, Twitter Prepares to Roll Out New Paid Subscription Service That Includes Blue Checkmark, CNN (Nov. 5, 2022), https://www.cnn.com/2022/11/05/business/twitter-blue-checkmark-paid-subscription/index.html.

[50] Sowell, supra note 18, at 244.

[51] See Halleck, 139 S. Ct. at 1931 (“The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property.”).

[52] Cf. Tornillo, 418 U.S. at 255 (“The power of a privately owned newspaper to advance its own political, social, and economic views is bounded by only two factors: first, the acceptance of a sufficient number of readers—and hence advertisers —to assure financial success; and, second, the journalistic integrity of its editors and publishers.”).

[53] See Ben Sperry & R.J. Lehmann, Gov. Desantis’ Unconstitutional Attack on Social Media, Tampa Bay Times (Mar. 3, 2021), https://www.tampabay.com/opinion/2021/03/03/gov-desantis-unconstitutional-attack-on-social-media-column (“Social-media companies and other tech platforms find themselves in a very similar position [as the newspaper in Tornillo] today. Just as newspapers do, Facebook, Google and Twitter have the right to determine what kind of content they want on their platforms. This means they can choose whether and how to moderate users’ news feeds, search results and timelines consistent with their own views on, for example, what they consider to be hate speech or misinformation. There is no obligation for them to carry speech they don’t wish to carry, which is why DeSantis’ proposal is certain to be struck down.”).

[54] See 47 U.S.C. §230.

[55] See, e.g., Jennifer Huddleston, Competition and Content Moderation: How Section 230 Enables Increased Tech Marketplace Entry, at 4, Cato Policy Analysis No. 922 (Jan. 31, 2022), available at https://www.cato.org/sites/cato.org/files/2022-01/policy-analysis-922.pdf (“The freedom to adopt content moderation policies tailored to their specific business model, their advertisers, and their target customer base allows new platforms to please internet users who are not being served by traditional media. In some cases, the audience that a new platform seeks to serve is fairly narrowly tailored. This flexibility to tailor content moderation policies to the specific platform’s community of users, which Section 230 provides, has made it possible for websites to establish online communities for a highly diverse range of people and interests, ranging from victims of sexual assault, political conservatives, the LGBTQ+ community, and women of color to religious communities, passionate stamp collectors, researchers of orphan diseases, and a thousand other affinity groups. Changing Section 230 to require websites to accept all comers, or to limit the ability to moderate content in a way that serves specific needs, would seriously curtail platforms’ ability to serve users who might otherwise be ignored by incumbent services or traditional editors.”). 

[56] See, e.g., Rui Gu, Lih-Bin Oh, & Kanliang Wang, Multi-Homing On SNSS: The Role of Optimum Stimulation Level and Perceived Complementarity in Need Gratification, 53 Information & Management 752 (2016), available at https://kd.nsfc.gov.cn/paperDownload/ZD19894097.pdf (“Given the increasingly intense competition for social networking sites (SNSs), ensuring sustainable growth in user base has emerged as a critical issue for SNS operators. Contrary to the common belief that SNS users are committed to using one SNS, anecdotal evidence suggests that most users use multiple SNSs simultaneously. This study attempts to understand this phenomenon of users’ multi-homing on SNSs. Building upon optimum stimulation level (OSL) theory, uses and gratifications theory, and literature on choice complementarity, a theoretical model for investigating SNS users’ multi-homing intention is proposed. An analysis of survey data collected from 383 SNS users shows that OSL positively affects users’ perceived complementarity between different SNSs in gratifying their four facets of needs, namely, interpersonal communication, self-presentation, information, and entertainment. Among the four dimensions of perceived complementarity, only interpersonal communication and information aspects significantly affect users’ intention to multi-home on SNSs. The results from this study offer theoretical and practical implications for understanding and managing users’ multi-homing use of SNSs.”).

[57] See, e.g., How Has Social Media Emerged as a Powerful Communication Medium, University Canada West Blog (Sep. 25, 2022), https://www.ucanwest.ca/blog/media-communication/how-has-social-media-emerged-as-a-powerful-communication-medium:

Social media has taken over the business sphere, the advertising sphere and additionally, the education sector. It has had a long-lasting impact on the way people communicate and has now become an integral part of their lives. For instance, WhatsApp has redefined the culture of IMs (instant messaging) and taken it to a whole new level. Today, you can text anyone across the globe as long as you have an internet connection. This transformation has not only been brought about by WhatsApp but also Facebook, Twitter, LinkedIn and Instagram. The importance of social media in communication is a constant topic of discussion.

Online communication has brought information to people and audiences that previously could not be reached. It has increased awareness among people about what is happening in other parts of the world. A perfect example of the social media’s reach can be seen in the way the story about the Amazon Rainforest fire spread. It started with a single post and was soon present on everyone’s newsfeed across different social media platforms.

Movements, advertisements and products are all being broadcasted on social media platforms, thanks to the increase in the social media users. Today, businesses rely on social media to create brand awareness as well as to promote and sell their products. It allows organizations to reach customers, irrespective of geographical boundaries. The internet has facilitated a resource to humankind that has unfathomable reach and benefits.

[58] Governmental intervention here could be particularly destructive if it leads to the imposition of “expert” opinions from insulated government actors in the “intelligence community.” Koppl, in his study on expert failure, described the situation as “the entangled deep state,” stating in relevant part:

The entangled deep state is an only partially hidden informal network linking the intelligence community, military, political parties, large corporations including defense contractors, and others. While the interests of participants in the entangled deep state often conflict, members of the deep state share a common interest in maintaining the status quo of the political system independently of democratic processes. Therefore, denizens of the entangled deep state may sometimes have an incentive to act, potentially in secret, to tamp down resistant voices and to weaken forces challenging the political status quo… The entangled deep state produces the rule of experts. Experts must often choose for the people because the knowledge on the basis of which choices are made is secret, and the very choice being made may also be a secret involving, supposedly, “national security.”… The “intelligence community” has incentives that are not aligned with the general welfare or with democratic process. Koppl, supra note 39, at 228, 230-31.

[59] Halleck, 139 S. Ct. at 1928 (internal citations omitted).

[60] 326 U.S. 501 (1946).

[61] Id. at 506.

[62] Id. at 509 (“When we balance the Constitutional rights of owners of property against those of the people to enjoy freedom of press and religion, as we must here, we remain mindful of the fact that the latter occupy a preferred position.”).

[63] 391 U.S. 308 (1968).

[64] See id. at 316-19. In particular, see id. at 318 (“The shopping center here is clearly the functional equivalent of the business district of Chickasaw involved in Marsh.”).

[65] See id. at 325.

[66] 407 U.S. 551 (1972).

[67] Id. at 562.

[68] Id.

[69] See id. at 568 (“[T]he courts properly have shown a special solicitude for the guarantees of the First Amendment, this Court has never held that a trespasser or an uninvited guest may exercise general rights of free speech on property privately owned and used nondiscriminatorily for private purposes only.”).

[70] Id. at 569.

[71] See id. at 570.

[72] 424 U.S. 507 (1976).

[73] Id. at 513.

[74] See id. at 516 (“Under what circumstances can private property be treated as though it were public? The answer that Marsh gives is when that property has taken on all the attributes of a town, i.e., ‘residential buildings, streets, a system of sewers, a sewage disposal plant and a “business block” on which business places are situated.’”) (quoting Logan Valley, 391 U.S. at 332 (Black, J., dissenting) (quoting Marsh, 326 U.S. at 502)).

[75] See id. at 518 (“It matters not that some Members of the Court may continue to believe that the Logan Valley case was rightly decided. Our institutional duty is to follow until changed the law as it now is, not as some Members of the Court might wish it to be. And in the performance of that duty we make clear now, if it was not clear before, that the rationale of Logan Valley did not survive the Court’s decision in the Lloyd case.”).

[76] Id. at 521.

[77] Jackson v. Metropolitan Edison Co., 419 U.S. 345, 352 (1974).

[78] See, e.g., the discussion about Prager University v. Google below.

[79] Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017).

[80] Id. (internal citation omitted).

[81] See, e.g., Brock v. Zuckerberg, 2021 WL 2650070, at *3 (S.D.N.Y. Jun. 25, 2021); Freedom Watch, Inc. v. Google Inc., 816 F. App’x 497, 499 (D.C. Cir. 2020); Zimmerman v. Facebook, Inc., 2020 WL 5877863 at *2 (N.D. Cal. Oct. 2, 2020); Ebeid v. Facebook, Inc., 2019 WL 2059662 at *6 (N.D. Cal. May 9, 2019); Green v. YouTube, LLC, 2019 WL 1428890, at *4 (D.N.H. Mar. 13, 2019); Nyabwa v. FaceBook, 2018 WL 585467, at *1 (S.D. Tex. Jan. 26, 2018); Shulman v. Facebook.com, 2017 WL 5129885, at *4 (D.N.J. Nov. 6, 2017).

[82] Halleck, 139 S. Ct. at 1929 (emphasis in original).

[83] Id. at 1930.

[84] Id.

[85] Id. at 1930-31.

[86] 951 F.3d 991 (9th Cir. 2020).

[87] See id. at 997-98. See also, Prager University v. Google, LLC, 2018 WL 1471939, at *6 (N.D. Cal. Mar. 26, 2018) (“Plaintiff primarily relies on the United States Supreme Court’s decision in Marsh v. Alabama to support its argument, but Marsh plainly did not go so far as to hold that any private property owner “who operates its property as a public forum for speech” automatically becomes a state actor who must comply with the First Amendment.”).

[88] See PragerU, 951 F.3d at 996-99 (citing Halleck 12 times, Hudgens 3 times, and Tanner 3 times).

[89] See supra notes 7-9 and associated text.

[90] Cf. Norwood v. Harrison, 413 U.S. 455, 465 (1973) (“It is axiomatic that a state may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.”).

[91] Blum v. Yaretsky, 457 U.S. 991, 1004 (1982).

[92] Id. at 1004-05.

[93] Id. (noting that “the factual setting of each case will be significant”).

[94] 372 U.S. 58 (1963).

[95] See id. at 66-67.

[96] See id. at 68.

[97] Id. at 67.

[98] Id. at 68-69.

[99] 827 F.2d 1291 (9th Cir. 1987).

[100] See id. at 1295.

[101] Id.

[102] See id. (“Simply by ‘command[ing] a particular result,’ the state had so involved itself that it could not claim the conduct had actually occurred as a result of private choice.”) (quoting Peterson v. City of Greenville, 373 U.S. 244, 248 (1963)).

[103] See Backpage.com, LLC v. Dart, 807 F.3d 229 (7th Cir. 2015).

[104] See id. at 231, 232.

[105] Id. at 230.

[106] Id. at 235.

[107] Id. at 231.

[108] 2023 WL 2443073 (9th Cir. Mar. 10, 2023).

[109] See id. at *2-3.

[110] See id. at *5-6.

[111] Id. at *6.

[112] Id.

[113] Id.

[114] 2022 WL 1427507 (N.D. Cal. May 5, 2022).

[115] See id. at *8.

[116] Id.

[117] Id. (emphasis in original).

[118] See, e.g., Trump v. Twitter, 602 F.Supp.3d 1213, 1218-26 (2022); Children’s Health Def. v. Facebook, 546 F.Supp.3d 909, 932-33 (2021).

[119] 2023 WL 2578260 (W.D. La. Mar. 20, 2023). See also Missouri, et al. v. Biden, et al., 2023 WL 4335270 (W.D. La. Jul. 4, 2023) (memorandum opinion granting the plaintiffs’ motion for preliminary injunction).

[120] 2023 WL 2578260 at *30-31.

[121] See id.

[122] See id. at *17-19.

[123] It is worth noting that all of these cases were decided at the motion-to-dismiss stage, during which all of the plaintiffs’ allegations are assumed to be true. The plaintiffs in Missouri v. Biden will have to prove their factual case of state action. Now that the Western District of Louisiana has ruled on the motion for preliminary injunction, it is likely that there will be an appeal before the case gets to the merits.

[124] The district court in Missouri v. Biden discussed this distinction further in the memorandum ruling on request for preliminary injunction:

The Defendants argue that by making public statements, this is nothing but government speech. However, it was not the public statements that were the problem. It was the alleged use of government agencies and employees to coerce and/or significantly encourage social-media platforms to suppress free speech on those platforms. Plaintiffs point specifically to the various meetings, emails, follow-up contacts, and the threat of amending Section 230 of the Communication Decency Act. Plaintiffs have produced evidence that Defendants did not just use public statements to coerce and/or encourage social-media platforms to suppress free speech, but rather used meetings, emails, phone calls, follow-up meetings, and the power of the government to pressure social-media platforms to change their policies and to suppress free speech. Content was seemingly suppressed even if it did not violate social-media policies. It is the alleged coercion and/or significant encouragement that likely violates the Free Speech Clause, not government speech, and thus, the Court is not persuaded by Defendants’ arguments here.

Missouri v. Biden, 2023 WL 4335270, at *56 (W.D. La. July 4, 2023).

[125] While the district court did talk in significantly greater detail about specific allegations as to each federal defendant’s actions in coercing or encouraging changes in moderation policies or enforcement actions, there is still a lack of specificity as to how it affected the plaintiffs. See id. at *45-53 (applying the coercion/encouragement standard to each federal defendant). As in its earlier decision at the motion-to-dismiss stage, the court’s opinion accompanying the preliminary injunction does deal with this issue to a much greater degree in its discussion of standing, and specifically of traceability. See id. at *61-62:

Here, Defendants heavily rely upon the premise that social-media companies would have censored Plaintiffs and/or modified their content moderation policies even without any alleged encouragement and coercion from Defendants or other Government officials. This argument is wholly unpersuasive. Unlike previous cases that left ample room to question whether public officials’ calls for censorship were fairly traceable to the Government; the instant case paints a full picture. A drastic increase in censorship, deboosting, shadow-banning, and account suspensions directly coincided with Defendants’ public calls for censorship and private demands for censorship. Specific instances of censorship substantially likely to be the direct result of Government involvement are too numerous to fully detail, but a birds-eye view shows a clear connection between Defendants’ actions and Plaintiffs injuries.

The Plaintiffs’ theory of but-for causation is easy to follow and demonstrates a high likelihood of success as to establishing Article III traceability. Government officials began publicly threatening social-media companies with adverse legislation as early as 2018. In the wake of COVID-19 and the 2020 election, the threats intensified and became more direct. Around this same time, Defendants began having extensive contact with social-media companies via emails, phone calls, and in-person meetings. This contact, paired with the public threats and tense relations between the Biden administration and social-media companies, seemingly resulted in an efficient report-and-censor relationship between Defendants and social-media companies. Against this backdrop, it is insincere to describe the likelihood of proving a causal connection between Defendants’ actions and Plaintiffs’ injuries as too attenuated or purely hypothetical.

The evidence presented thus goes far beyond mere generalizations or conjecture: Plaintiffs have demonstrated that they are likely to prevail and establish a causal and temporal link between Defendants’ actions and the social-media companies’ censorship decisions. Accordingly, this Court finds that there is a substantial likelihood that Plaintiffs would not have been the victims of viewpoint discrimination but for the coercion and significant encouragement of Defendants towards social-media companies to increase their online censorship efforts.

[126] See Lugar v. Edmonson Oil Co., 457 U.S. 922, 941-42 (1982).

[127] See Brentwood Acad. v. Tennessee Secondary Sch. Athletic Ass’n, 531 U.S. 288, 294 (2001).

[128] See id. at 296.

[129] For instance, in Mathis v. Pacific Gas & Elec. Co., 75 F.3d 498 (9th Cir. 1996), the 9th Circuit described the plaintiff’s “joint action” theory as one where a private person could only be liable if the particular actions challenged are “inextricably intertwined” with the actions of the government. See id. at 503.

[130] See Brentwood, 531 U.S. at 296 (noting that “examples may be the best teachers”).

[131] See Lugar, 457 U.S. at 925.

[132] See id.

[133] See id.

[134] Id. at 941 (internal citations omitted).

[135] Id.

[136] See id. at 942.

[137] 365 U.S. 715 (1961).

[138] See id. at 717-20.

[139] Id. at 724.

[140] See Rendell-Baker v. Kohn, 457 U.S. 830, 842-43 (1982).

[141] See Brentwood, 531 U.S. at 292-93.

[142] See id. at 296 (“[A] challenged activity may be state action… when it is ‘entwined with governmental policies,’ or when government is ‘entwined in [its] management or control.’”) (internal citations omitted).

[143] See id. at 298-301.

[144] Id. at 302.

[145] 489 U.S. 602 (1989).

[146] See id. at 606-12, 615.

[147] Id. at 615.

[148] Id.

[149] O’Handley, 2023 WL 2443073, at *7.

[150] Id.

[151] See id. at *7-8.

[152] 75 F.3d 498 (9th Cir. 1996).

[153] O’Handley, 2023 WL 2443073, at *8.

[154] Id.

[155] Hart, 2022 WL 1427507, at *6.

[156] Id. at *7.

[157] See, e.g., Fed. Agency of News LLC v. Facebook, Inc., 432 F. Supp. 3d 1107, 1124-27 (N.D. Cal. 2020); Children’s Health Def. v. Facebook Inc., 546 F. Supp. 3d 909, 927-31 (N.D. Cal. 2021); Berenson v. Twitter, 2022 WL 1289049, at *3 (N.D. Cal. Apr. 29, 2022).

[158] 546 F. Supp. 3d 909 (N.D. Cal. 2021).

[159] Id. at 932 (citing Divino Grp. LLC v. Google LLC, 2021 WL 51715, at *6 (N.D. Cal. Jan. 6, 2021)).

[160] Missouri v. Biden, 2023 WL 2578260, at *33.

[161] Id.

[162] Id. at *33-34.

[163] Id. at *34.

[164] A government action is content based if it cannot be applied without reference to the content of the speech it regulates. See, e.g., Reed v. Town of Gilbert, Ariz., 576 U.S. 155, 163 (2015) (“Government regulation of speech is content based if a law applies to particular speech because of the topic discussed or the idea or message expressed.”).

[165] See, e.g., Citizens United v. Fed. Election Comm’n, 558 U.S. 310, 340 (2010) (“Laws that burden political speech are ‘subject to strict scrutiny,’ which requires the Government to prove that the restriction ‘furthers a compelling interest and is narrowly tailored to achieve that interest.’”) (internal citations omitted).

[166] See Fulton v. City of Philadelphia, Pennsylvania, 141 S. Ct. 1868, 1881 (2021) (“A government policy can survive strict scrutiny only if it advances ‘interests of the highest order’…”).

[167] Ashcroft v. ACLU, 542 U.S. 656, 666 (2004). In that case, the Court compared the Child Online Protection Act’s age-gating to protect children from online pornography to blocking and filtering software available in the marketplace, and found those alternatives to be less restrictive. The Court accordingly upheld the preliminary injunction against enforcement of the statute. See id. at 666-70.

[168] City of Los Angeles v. Alameda Books, Inc., 535 U.S. 425, 455 (2002).

[169] See, e.g., New York Times Co. v. United States, 403 U.S. 713, 714 (1971).

[170] The classic example is a noise ordinance that the government can enforce without considering the content or viewpoint of the speaker. See Ward v. Rock Against Racism, 491 U.S. 781 (1989).

[171] See id. at 791 (“Our cases make clear, however, that even in a public forum the government may impose reasonable restrictions on the time, place, or manner of protected speech, provided the restrictions ‘are justified without reference to the content of the regulated speech, that they are narrowly tailored to serve a significant governmental interest, and that they leave open ample alternative channels for communication of the information.’”) (internal citations omitted).

[172] See Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 662 (1994) (finding “the appropriate standard by which to evaluate the constitutionality of must-carry is the intermediate level of scrutiny applicable to content-neutral restrictions that impose an incidental burden on speech.”).

[173] See id. (“[A] content-neutral regulation will be sustained if ‘it furthers an important or substantial governmental interest; if the governmental interest is unrelated to the suppression of free expression; and if the incidental restriction on alleged First Amendment freedoms is no greater than is essential to the furtherance of that interest.’”) (quoting United States v. O’Brien, 391 U.S. 367, 377 (1968)).

[174] See Broadrick v. Oklahoma, 413 U.S. 601, 615 (1973) (holding that “the overbreadth of a statute must not only be real, but substantial as well, judged in relation to the statute’s plainly legitimate sweep”).

[175] See Kolender v. Lawson, 461 U.S. 352, 357 (1983) (holding that a law must have “sufficient definiteness that ordinary people can understand what conduct is prohibited and in a manner that does not encourage arbitrary and discriminatory enforcement”).

[176] 2023 WL 414258 (E.D. Cal. Jan. 25, 2023).

[177] Cal. Bus. & Prof. Code § 2270.

[178] Høeg, 2023 WL 414258, at *6 (internal citations omitted).

[179] Id. at *7.

[180] See id.

[181] Id. at *8.

[182] Id. at *9.

[183] Id. at *9.

[184] See id. at *12.

[185] New York Times Co. v. United States, 403 U.S. 713, 714 (1971) (quoting Bantam Books, 372 U.S. at 70).

[186] Missouri v. Biden, 2023 WL 2578260, at *35 (quoting Backpage.com, 807 F.3d at 230).

[187] See id. (comparing the situation to cable operators in the Turner Broadcasting cases).

[188] Id.

[189] Id.

[190] See discussion of United States v. Alvarez, 567 U.S. 709 (2012) below.

[191] See Minnesota Voters Alliance v. Mansky, 138 S. Ct. 1876, 1885 (2018) (“In a traditional public forum — parks, streets, sidewalks, and the like — the government may impose reasonable time, place, and manner restrictions on private speech, but restrictions based on content must satisfy strict scrutiny, and those based on viewpoint are prohibited.”).

[192] Missouri v. Biden, 2023 WL 2578260, at *35.

[193] Id.

[194] 567 U.S. 709 (2012).

[195] Id. at 717 (quoting United States v. Stevens, 559 U.S. 460, 468 (2010)).

[196] Id. at 718.

[197] See Chaplinsky v. New Hampshire, 315 U.S. 568, 571-72 (1942) (“There are certain well-defined and narrowly limited classes of speech, the prevention and punishment of which has never been thought to raise any Constitutional problem.”)

[198] See Alvarez, 567 U.S. at 718-22.

[199] See id. at 719 (“Even when considering some instances of defamation and fraud, moreover, the Court has been careful to instruct that falsity alone may not suffice to bring the speech outside the First Amendment. The statement must be a knowing or reckless falsehood.”). In other words, the First Amendment has been found to limit even common-law actions against false speech, because falsity alone does not strip speech of constitutional protection.

[200] Under the common law, the elements of fraud include (1) a misrepresentation of a material fact or failure to disclose a material fact the defendant was obligated to disclose, (2) intended to induce the victim to rely on the misrepresentation or omission, (3) made with knowledge that the statement or omission was false or misleading, (4) the plaintiff relied upon the representation or omission, and (5) suffered damages or injury as a result of the reliance. See, e.g., Mandarin Trading Ltd v. Wildenstein, 919 N.Y.S.2d 465, 469 (2011); Kostryckyj v. Pentron Lab. Techs., LLC, 52 A.3d 333, 338-39 (Pa. Super. 2012); Masingill v. EMC Corp., 870 N.E.2d 81, 88 (Mass. 2007). Similarly, commercial-speech regulations targeting deceptive or misleading advertising or health claims have also been found to be consistent with the First Amendment. See Virginia State Bd. of Pharmacy v. Virginia Citizens Consumer Council, 425 U.S. 748, 771-72 (1976) (“Obviously, much commercial speech is not provably false, or even wholly false, but only deceptive or misleading. We foresee no obstacle to a State’s dealing effectively with this problem. The First Amendment, as we construe it today, does not prohibit the State from insuring that the stream of commercial information flow cleanly as well as freely.”).

[201] See, e.g., Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190 (1948) (the government’s power “to protect people against fraud” has “always been recognized in this country and is firmly established”).

[202] Illinois, ex rel. Madigan v. Telemarketing Associates, Inc., 538 U.S. 600, 617 (2003).

[203] See, e.g., Schaumburg v. Citizens for a Better Environment, 444 U.S. 620 (1980); Secretary of State of Md. v. Joseph H. Munson Co., 467 U.S. 947 (1984); Riley v. National Federation of Blind of N. C., Inc., 487 U.S. 781 (1988).

[204] Madigan, 538 U.S. at 620.

[205] Under the old common-law rule, proving defamation required a plaintiff to present a derogatory statement and demonstrate that it could hurt their reputation. The falsity of the statement was presumed, and the defendant had the burden to prove the statement was true in all of its particulars. Re-publishing something from someone else could also open the new publisher to liability. See generally Samantha Barbas, The Press and Libel Before New York Times v. Sullivan, 44 Colum. J.L. & Arts 511 (2021).

[206] 376 U.S. 254 (1964).

[207] Id. at 271. See also id. at 271-72 (“Erroneous statement is inevitable in free debate, and [] it must be protected if the freedoms of expression are to have the ‘breathing space that they need to survive.’”) (quoting N.A.A.C.P. v. Button, 371 U.S. 415, 433 (1963)).

[208] Id. at 279-80.

[209] Id. at 727-28.

[210] Carlin Commc’ns, 827 F.2d at 1297.

[211] See Missouri, et al. v. Biden, et al., Case No. 3:22-CV-01213 (W.D. La. Jul. 4, 2023), available at https://int.nyt.com/data/documenttools/injunction-in-missouri-et-al-v/7ba314723d052bc4/full.pdf.

[212] Id. See also Missouri, et al. v. Biden, et al., 2023 WL 4335270, at *45-56 (W.D. La. Jul. 4, 2023) (memorandum ruling on request for preliminary injunction). But see Missouri, et al. v. Biden, et al., No. 23-30445 (5th Cir. Sept. 8, 2023), slip op., available at https://www.ca5.uscourts.gov/opinions/pub/23/23-30445-CV0.pdf (upholding the injunction but limiting the parties to which it applies); Murthy, et al. v. Missouri, et al., No. 3:22-cv-01213 (Sept. 14, 2023) (order issued by Justice Alito granting an administrative stay of the preliminary injunction until Sept. 22, 2023, at 11:59 p.m. EDT).

[213] 42 U.S.C. §1983.

[214] See, e.g., Adickes v. SH Kress & Co., 398 U.S. 144, 152 (1970) (“Although this is a lawsuit against a private party, not the State or one of its officials, our cases make clear that petitioner will have made out a violation of her Fourteenth Amendment rights and will be entitled to relief under § 1983 if she can prove that a Kress employee, in the course of employment, and a Hattiesburg policeman somehow reached an understanding to deny Miss Adickes service in the Kress store, or to cause her subsequent arrest because she was a white person in the company of Negroes. The involvement of a state official in such a conspiracy plainly provides the state action essential to show a direct violation of petitioner’s Fourteenth Amendment equal protection rights, whether or not the actions of the police were officially authorized, or lawful… Moreover, a private party involved in such a conspiracy, even though not an official of the State, can be liable under § 1983.”) (internal citations omitted).

[215] Smith v. Wade, 461 U.S. 30, 56 (1983).

[216] See Missouri, et al. v. Biden, et al., 2023 WL 4335270, at *55, 56 (W.D. La. Jul. 4, 2023).

[217] Codified at Fla. Stat. § 112.23, available at https://casetext.com/statute/florida-statutes/title-x-public-officers-employees-and-records/chapter-112-public-officers-and-employees-general-provisions/part-i-conditions-of-employment-retirement-travel-expenses/section-11223-government-directed-content-moderation-of-social-media-platforms-prohibited.

[218] Id.

[219] For more on this proposal, see Manne, Stout, & Sperry, supra note 31, at 106-12.

[220] See Dominion Voting Sys. v. Fox News Network, LLC, C.A. No.: N21C-03-257 EMD (Del. Super. Ct. Mar. 31, 2023), available at https://www.documentcloud.org/documents/23736885-dominion-v-fox-summary-judgment.

[221] See, e.g.,  Jeremy W. Peters & Katie Robertson, Fox Will Pay $787.5 Million to Settle Defamation Suit, New York Times (Apr. 18, 2023), https://www.nytimes.com/live/2023/04/18/business/fox-news-dominion-trial-settlement#fox-dominion-defamation-settle.

[222] See, e.g., Neil Vigdor, ‘Prove Mike Wrong’ for $5 Million, Lindell Pitched. Now, He’s Told to Pay Up., New York Times (Apr. 20, 2023), https://www.nytimes.com/2023/04/20/us/politics/mike-lindell-arbitration-case-5-million.html.

[223] See Stephen Fowler, Judge Finds Rudy Giuliani Liable for Defamation of Two Georgia Election Workers, NPR (Aug. 30, 2023), https://www.npr.org/2023/08/30/1196875212/judge-finds-rudy-giuliani-liable-for-defamation-of-two-georgia-election-workers.

[224] See supra notes 206-09 and associated text.

Continue reading
Innovation & the New Economy

ICLE Reply Comments to FCC Re: Customer Blackout Rebates

Regulatory Comments I. Introduction The International Center for Law & Economics (“ICLE”) thanks the Federal Communications Commission (“FCC” or “the Commission”) for the opportunity to offer reply . . .

I. Introduction

The International Center for Law & Economics (“ICLE”) thanks the Federal Communications Commission (“FCC” or “the Commission”) for the opportunity to offer reply comments to this notice of proposed rulemaking (“NPRM”), as the Commission proposes to require cable operators and direct-broadcast satellite (DBS) providers to grant their subscribers rebates when those subscribers are deprived of video programming they expected to receive during programming blackouts that resulted from failed retransmission-consent negotiations or failed non-broadcast carriage negotiations.[1]

As noted in the NPRM, the Communications Act of 1934 requires that cable operators and satellite-TV providers obtain a broadcast TV station’s consent in order to lawfully retransmit that station’s signal to subscribers. Commercial stations may either (1) demand carriage pursuant to the Commission’s must-carry rules or (2) elect retransmission consent and negotiate for compensation in exchange for carriage. If a station elects retransmission consent but is unable to reach agreement for carriage, the cable operator or DBS provider loses the right to carry that signal. As a result, the cable operator or DBS provider’s subscribers typically lose access entirely to the channel’s signal unless and until the parties are able to reach an agreement, a situation that is often described as a “blackout.”

Blackouts tend to generate eye-catching headlines and often annoy affected consumers.[2] This annoyance is amplified when consumers do not receive a rebate for the loss of signal, especially when they believe that they are merely bystanders in the dispute between the cable operator or DBS provider and the channel.[3] The Commission appears to echo these concerns, concluding that its proposed rebate mandate would ensure “subscribers are made whole when they face interruptions of service that are outside their control” and would prevent subscribers “from being charged for services for the period that they did not receive them.”[4]

This framing, however, oversimplifies retransmission-consent negotiations and mischaracterizes consumers’ agency in subscribing to and using multichannel-video-programming distributors (“MVPDs”). Moreover, there are numerous questions raised by the NPRM regarding the proposal’s feasibility, including how to identify which consumers would qualify for rebates, how those rebates would be calculated, and how they would be distributed. Several comments submitted in this proceeding suggest that any implementation of this proposal would be arbitrary and unfair to cable operators, DBS providers, and consumers. In particular:

  • Blackouts result from a temporary or permanent failure to reach an agreement in negotiations between channels and either cable operators or DBS providers. The Commission’s proposal explicitly and unfairly assigns liability for blackouts to the cable operator or DBS provider. As a result, the proposal would provide channels with additional negotiating leverage relative to the status quo. Smaller cable operators may be especially disadvantaged.
  • Each consumer is unique in how much they value a particular channel and how much they would be economically harmed by a blackout. For example, in the event of a cable or DBS blackout, some consumers can receive the programming via an over-the-air antenna or a streaming platform and would suffer close to no economic harm. Other consumers may assign no value to the blacked-out channel’s programming and would likewise suffer no harm.
  • Complexities and confidentiality in programming contracts would make it impossible to accurately or fairly calculate the price or cost associated with any given channel over some set period of time. For example, cable operators and DBS providers typically sell bundles of channels, not a la carte offerings, making it impossible to calculate an appropriate rebate for one specific channel or set of channels.
  • Even if it were possible to calculate an appropriate rebate, any mandated rebate based on such calculations would constitute prohibited rate regulation.

These reply comments respond to many of the issues raised in comments on this matter. We conclude that the Commission is proposing a set of unworkable and arbitrary rules. Even if rebates could be reasonably and fairly calculated, the amount of such rebates would likely be only a few dollars and may be as little as a few pennies. In such cases, the enormous cost to the Commission, cable operators, and DBS providers would be many times greater than the amount of rebates provided to consumers. It would be a much better use of the FCC’s and MVPD providers’ resources to abandon this rulemaking process and refrain from mandating rebates for programming blackouts.

II. Who Is to Blame for Blackouts?

As discussed above, it appears the FCC’s view is that consumers who experience blackouts are mere bystanders in a dispute, as the Commission invokes “consumer protection” and “customer service” as justifications for the proposed rules mandating rebates.[5] If we believe both that consumers are bystanders and that they are harmed by blackouts, then it is crucial to identify the parties to whom blame should be assigned for those blackouts. A key principle of the law & economics approach is that the party better-positioned to avoid the blackout should bear more—or, in some cases, all—of its costs.[6]

In comments submitted by Dish Network, William Zarakas and Jeremy Verlinda note that “[p]rogramming fees are established through bilateral negotiations between content providers and MVPDs, and depend in large part on the relative bargaining position of the two sides.”[7] This comment illustrates the obvious but important fact that both content providers and MVPD operators must reach agreement and, in any given negotiation, either side may have more bargaining power. Because of this reality, it is impossible to draw general conclusions about which party will be the least-cost avoider of blackouts, as borne out in the submitted comments.

On the one hand, the ATVA argues that programmers are the cause of blackouts: “Blackouts happen to cable and satellite providers and their subscribers.”[8] NTCA supports this claim and reports that “[s]mall providers lack negotiating power in retransmission consent discussions.”[9] On the other hand, the NAB claims the “leading cause of such disruptions” is “the pay TV industry’s desire to use consumers as pawns to push for a change in law” and that MVPDs have a “strategy of creating negotiating impasses” in order to obtain a policy change.[10] Writing in Truth on the Market, Eric Fruits concludes:

With the wide range of programming and delivery options, it’s probably unwise to generalize who has the greater bargaining power in the current system. But if one had to choose, it seems that networks and, to a lesser extent, local broadcasters are in a slightly superior position. They have the right to choose must carry or retransmission and, in some cases, have alternative outlets (such as streaming) to distribute their programming.[11]

Peer-reviewed published research by Eun-A Park, Rob Frieden, and Krishna Jayakar attempts to identify the “predictors” of blackouts using a database of nearly 400 retransmission agreements executed between 2011 and 2018.[12] The authors identify three factors associated with more blackouts and longer blackouts:

  1. Cable, satellite, and other MVPDs with larger customer bases are associated with more frequent and longer blackouts;
  2. Multi-station broadcaster groups with network affiliations are associated with more frequent but shorter blackouts; and
  3. The National Football League (“NFL”) season (e.g., “must see” real-time programming) has no significant relationship with blackout frequency, but when blackouts occur during the season, they are significantly shorter.

The simplistic takeaway is that everyone is to blame and no one is to blame. Ultimately, Park and her co-authors conclude that “the statistical analysis is not able to identify the parties or the tactics responsible for blackouts.”[13] Based on this research, it is not clear which party in any given negotiation is more likely to be the least-cost avoider of blackouts.

Nevertheless, the Commission’s proposal explicitly assigns liability for blackouts to cable operators and DBS providers.[14] Under the proposed rules, not only would cable operators and DBS providers suffer financial consequences, but they also would be made to face reputational harms stemming from a federal agency suggesting the fault for any retransmission-consent or carriage-agreement blackouts falls squarely on their shoulders.

Such reputational damage is almost certain to increase subscriber churn and impose additional subscriber-acquisition and retention costs on cable operators and DBS providers.[15] In comments on the Commission’s proposed rules for cable-operator and DBS-provider billing practices, ICLE reported that these costs are substantial and that, in addition to these costs, churn increases the uncertainty of cable-operator and DBS-provider revenues and profits.[16]

III. Consumers Are Not Bystanders

As noted earlier in these comments, the Commission’s proposal appears to be rooted in the belief that, when consumers experience a blackout, they are mere bystanders in a dispute between channels and cable operators or DBS providers. The Commission further seems to believe that the full force of the federal government is needed for these consumers to be “made whole.”[17] The implication is that consumers lack the foresight to anticipate the possibility of blackouts or the ability to respond to blackouts when they occur.

As the NPRM notes, subscribers are often informed of the risk of blackouts—and their consequences—in their service agreements with cable operators or DBS providers.[18] This is supported in ATVA’s comments:

Cable and satellite carriers make this quite clear in the contracts they offer subscribers—existing contracts which the Commission seeks to abrogate here. This language also makes clear that cable and satellite operators can and do change the programming offered in those bundles from time to time. … Cable and satellite providers add and subtract programming from their offerings to consumers frequently, and subscription agreements do not promise that all channels in a particular tier will be carried in perpetuity, let alone (with limited exception) assign a specific value to particular programming.[19]

The NPRM asks, “if a subscriber initiates service during a blackout, would that subscriber be entitled to a rebate or a lower rate?”[20] The question implicitly acknowledges that, for these subscribers, blackouts are not just a possibility, but a certainty. Yet they nonetheless enter into such agreements, knowing they may not be compensated for the interruption of service.

Many cable operators and DBS providers do offer credits[21] or other accommodations[22] to requesting subscribers affected by a blackout. In addition, many consumers have a number of options to circumvent a blackout by obtaining the programming elsewhere. Comments in this proceeding indicate that these options include the use of over-the-air antennas[23] or streaming services.[24] Given the many alternatives available in so many cases, it is unlikely that a blackout would deprive these consumers of the desired programming and any economic harm to them would be de minimis.

If cable or DBS blackouts are (or become) widespread or pernicious, consumers also have the ability to terminate service and switch providers, including by switching to streaming options. This is demonstrated by the well-known and widespread phenomenon of “cord cutting.” ATVA’s comments note that, in the third quarter of 2023, nearly one million subscribers canceled their traditional linear-television service, with just under 55% of occupied households now subscribing, the lowest share since 1989.[25] NYPSC concludes that, if the current trend of cord-cutting continues, “any final rules adopted here could become obsolete over time.”[26]

Due in part to cord cutting, ATVA reported that last year “several cable television companies either had already shut down their television services or were in the process of doing so.”[27] NTCA reports that nearly 40% of surveyed rural providers indicated they are not likely to continue service or already have plans to discontinue service, with many of them blaming the “difficulty negotiating retransmission consent agreements.”[28]

The fact that so many consumers are switching to alternatives to cable and DBS is a clear demonstration that they have the opportunity and ability to obtain programming from a wide range of competitive providers. This places them in the driver’s seat, rather than leaving them as helpless bystanders. It is telling that neither the NPRM nor any of the comments submitted to date offer any estimate of the cost to consumers associated with blackouts from retransmission-consent or carriage negotiations. This is likely because any costs are literally incalculable (i.e., impossible to calculate) or so small as to discourage any efforts at estimation. In either case, the Commission’s proposal to mandate and enforce blackout rebates looks to be a costly and time-consuming exercise that would yield little to no noticeable consumer benefit.

IV. Mandatory Rebates Will Increase Programmer Bargaining Power and Increase Prices to Cable and DBS Subscribers

A common theme of comments submitted in this matter is that the proposed rules would “place a thumb on the scale” in favor of channels relative to cable operators and DBS providers.[29] Without delving deeply into the esoteric details of bargaining theory, the comments identify two key factors that have, over time, improved programmers’ bargaining position relative to cable operators and DBS providers:

  1. Increased competition among MVPD providers, which has reduced cable and DBS bargaining power;[30] and
  2. Consolidation in the broadcast industry, which has increased programmer bargaining power.[31]

The Commission’s proposed rules are intended and designed to impose an additional cost on cable operators and DBS providers that do not reach an agreement with stations and networks, thereby diminishing the providers’ relative bargaining position. As profit-maximizing enterprises, stations and networks can reasonably be expected to exploit this additional bargaining power to extract higher retransmission fees or other concessions.

Jeffrey Eisenach notes that the first “significant” retransmission agreement to involve monetary compensation from a cable provider to a broadcaster occurred in 2005.[32] By 2008, retransmission fees totaled $500 million, according to Variety.[33] By 2020, S&P Global reported that annual retransmission fees were approximately $12 billion.[34] This represents an average annual increase of 30% between 2008 and 2020. This is in line with Zarakas & Verlinda’s estimate that retransmission fees charged by local network stations have increased at annual growth rates of 9.8% to 61.0% since 2009.[35] According to information reported by the Pew Research Center, revenues from retransmission fees for local stations now nearly equal those stations’ advertising revenues (Figure 1).

Figure 1: Local-station retransmission-fee revenue compared with advertising revenue (Pew Research Center).[36]
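As a rough check on the growth rate cited above, the following minimal sketch computes the compound annual growth rate implied by the 2008 and 2020 industry totals (the dollar figures are those reported in the text; the assumption of steady compound growth between the two endpoints is ours, for illustration only):

```python
# Compound annual growth rate (CAGR) implied by the retransmission-fee totals
# cited above: ~$500 million in 2008 (Variety) and ~$12 billion in 2020 (S&P Global).
fees_2008 = 500e6
fees_2020 = 12e9
years = 2020 - 2008

cagr = (fees_2020 / fees_2008) ** (1 / years) - 1
print(f"Implied average annual growth, 2008-2020: {cagr:.1%}")  # roughly 30%
```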

Dish Network indicated that programmers have been engaged in an “aggressive campaign of imposing steep retransmission and carriage price increases on MVPDs.”[37] Alongside these steep increases in retransmission fees, networks began imposing “reverse retransmission compensation” on their affiliates.[38] Previously, networks paid local affiliates for airtime in order to run network advertisements during their programming. The new arrangements have reversed that flow of compensation, such that affiliates are now expected to compensate the networks, as explained in Variety:

Station owners also face increased pressure to secure top fees for their retrans rights because their Big Four network partners now demand that affiliate stations fork over a portion of their retrans windfall to help them pay for pricey franchises like the NFL, “American Idol” and high-end scripted series.[39]

Dish Network concludes: “While MVPDs and OVDs compete aggressively with each other, the programming price increases will likely be passed through to consumers despite that competition. The reason is that all MVPDs will face the same programming price increase.”[40] NCTA further notes that increased programming costs are “borne by the cable operator or passed onto the consumer.”[41]

The most recent research cited in the comments reports that MVPDs pass through approximately 100% of retransmission-fee increases in the form of higher subscription prices.[42] Aaron Heresco and Stephanie Figueroa provided examples of how increased retransmission fees are passed on to subscribers:

On the other side of the simplified ESPN transaction are MVPD ranging from global conglomerates like Spectrum/Time Warner to small local or independent cable carriers. These MVPD pay ESPN $7.21/subscriber/month for the right to carry/transmit ESPN content to subscribing households. MVPD, with a keen eye on profits and shareholder value, pass through the costs to consumers (irrespective of if subscribers actually watch ESPN or any other network) in the form of increased monthly cable bills. Not only does this suggest that the “free lunch” of TV programming isn’t free, it also indicates that the dynamic of revenue generation via viewership is changing. As another example, consider the case of the Weather Channel, which in 2014 asked for a $.01 increase in retransmission fees despite a 20% drop in ratings (Sahagian 2014). Viewers may demand access to the channel in case of weather emergencies but may only tune in to the channel a handful of times per year. Nonetheless, the demand for access to channels drive up retransmission revenue even if the day-to-day or week-to-week ratings are weak.[43]

In some cases, however, increased retransmission fees cannot be passed on in the form of higher subscription prices. As we noted above, NTCA reports that nearly 40% of surveyed rural providers indicated they are unlikely to continue service or already have plans to discontinue service, with many of them blaming the “difficulty negotiating retransmission consent agreements.”[44] The Commission’s proposed rules would not only lead to higher prices for consumers, but they may also reduce MVPD options for some consumers, as cable operators exit the industry.

V. Proposed Rebate Mandate Would Be Arbitrary and Unworkable

The NPRM asks for comments on how to implement the proposed rebate mandate. In doing so, the NPRM raises numerous questions that illustrate the arbitrary and unworkable nature of the Commission’s proposal:[45]

  • Should cable operators and DBS providers be required to pay rebates or provide credits?
  • Should rebates apply to any channel that is blacked out?
  • What if the parties never reach an agreement for carriage? For example, should subscribers be entitled to rebates in perpetuity?
  • How should rebates be calculated when terms of the retransmission-consent agreements are confidential?
  • Should the rebate be based on the cost that the cable operator or DBS provider paid to the programmer to retransmit or carry the channel prior to the carriage impasse?
  • How should rebates account for bundling?
  • If a subscriber initiates or renews a contract during a blackout, should the subscriber receive a rebate?
  • Should the Commission deem unenforceable service agreements that explicitly specify that the cable operator or DBS provider is not liable for credits or refunds if programming becomes unavailable? Should existing service agreements be abrogated?
  • How should rebates account for other components of retransmission-consent agreements, such as advertising time?

As we note above, many cable operators and DBS providers already offer credits or other accommodations to requesting subscribers affected by a blackout.[46] The NPRM “tentatively concludes” there is no legal distinction between “rebates,” “refunds,” and “credits.”[47] If the Commission approves rules mandating rebates in the event of blackouts, those rules should be sufficiently flexible to allow credits or other accommodations—such as providing over-the-air antennas or programming upgrades—to satisfy the requirement.

The NPRM asks whether the proposed rebate rules should apply to any channel that is blacked out,[48] citing news stories regarding The Weather Channel.[49] The NPRM provides no context for these citations, but the cited articles suggest that The Weather Channel is of minimal value to most consumers. The channel had 105,000 primetime viewers in February 2024, slightly fewer than PopTV and slightly more than Disney Junior and VH1.[50] The Deadline article cited in the NPRM indicates that The Weather Channel averages 13 cents per subscriber, per month, across pay-TV systems.[51] Much of the channel’s content is freely available on its website (weather.com) and app, and similar weather content is freely available across numerous sources and media.

The NPRM’s singling out of The Weather Channel highlights several flaws in the Commission’s proposal. The channel has low viewership, faces numerous competing substitutes for its content, and is relatively low-cost. During a blackout, few subscribers would notice. Even fewer would suffer any harm and, if they did, the harm would be about 13 cents a month. It seems a waste of valuable resources to impose a complex regulatory regime to “make consumers whole” to the tune of pennies a month.

The NPRM asks whether the Commission should require rebates if the parties never reach a carriage agreement and, if so, whether those rebates should be provided in perpetuity.[52] NCTA points out that it would be impossible for any regulator to determine whether any particular blackout is the result of a negotiation impasse or business decision by the cable operator or DBS provider to no longer carry the channel.[53] For example, a channel may be dropped because of changes to the programming available on the channel.[54] Indeed, the programming offered at the beginning of a retransmission-consent agreement may be very different from the content provided at the time of renegotiation.[55] Moreover, it would be impossible to know with any certainty whether any carriage termination is temporary or permanent.[56] Verizon is correct to call this inquiry “absurd,”[57] as it proposes a “Hotel California” approach to carriage agreements, in which cable operators and DBS providers can check out, but they can never leave.

To illustrate the challenges of calculating a reasonable and economically coherent rebate, Dish Network offered a hypothetical set of three options for carriage of a local station and the Tennis Channel, both owned by Sinclair.[58]

  1. $4 for the local station on a tier serving all subscribers, no carriage of Tennis Channel;
  2. $2 for the local station and $2 for the Tennis Channel, both on tiers serving all subscribers; or
  3. $2 for the local station on a tier serving all subscribers and $4 for the Tennis Channel on a tier serving 50% of subscribers.

In this hypothetical, the cable operator or DBS provider is indifferent to the details of how the package is priced. Similarly, consumers are indifferent to the pricing details of the agreement. Under the Commission’s proposal, however, these details become critical to how a rebate would be calculated. In the event of a Tennis Channel blackout, either no subscriber would receive a rebate, every subscriber would receive a $2 rebate, or half of all subscribers would receive a $4 rebate—with the amount of rebate depending on how the agreement’s pricing was structured.
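A minimal sketch of the rebate arithmetic makes the arbitrariness concrete. The per-channel prices and tier shares below come from Dish Network's hypothetical; the one-million-subscriber base and the assumption that a rebate simply refunds the price contractually allocated to the blacked-out channel are illustrative assumptions of our own, not anything proposed in the NPRM:

```python
# Tennis Channel blackout rebates under the three hypothetical pricing structures.
# Assumes (for illustration) that the rebate refunds the price allocated to the
# blacked-out channel for each subscriber on the tier that carries it.
options = {
    "Option 1": {"tennis_price": 0.0, "tier_share": 0.0},  # Tennis Channel not carried
    "Option 2": {"tennis_price": 2.0, "tier_share": 1.0},  # tier serving all subscribers
    "Option 3": {"tennis_price": 4.0, "tier_share": 0.5},  # tier serving half of subscribers
}

subscribers = 1_000_000  # illustrative subscriber base

for name, terms in options.items():
    eligible = int(subscribers * terms["tier_share"])
    total = eligible * terms["tennis_price"]
    print(f"{name}: {eligible:,} subscribers owed ${terms['tennis_price']:.2f} each "
          f"(total ${total:,.0f})")
```

Note that Options 2 and 3 imply the same total rebate outlay but distribute it to entirely different sets of subscribers, while Option 1 implies none at all. The differences are artifacts of contract drafting rather than measures of consumer harm.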

Dish Network’s hypothetical demonstrates another consequence of the Commission’s proposal: the easiest way to avoid the risk of paying a rebate is to forgo carrying the channel. The hypothetical assumes a cable operator “does not particularly want to carry” the Tennis Channel, but is willing to do so in exchange for an agreement with Sinclair for the local station.[59] Under the Commission’s proposed rules, the risk of incurring the cost of providing rebates introduces another incentive to eschew carriage of the Tennis Channel.

One reason Dish Network presented a hypothetical instead of an “actual” example is that, as noted in several comments, carriage agreements are subject to confidentiality provisions.[60] Separate and apart from the impossibility of allocating a rebate across the various terms of an agreement, even if the terms were known, such an exercise would require abrogating these confidentiality agreements between the negotiating parties.

The NPRM asks whether it would be reasonable to require a cable operator or DBS provider to rebate the cost that it paid to the programmer to retransmit or carry the channel prior to the carriage impasse.[61] The NPRM cites Spectrum Northeast LLC v. Frey, a case involving early-termination fees in which the 1st U.S. Circuit Court of Appeals stated that “[a] termination event ends cable service, and a rebate on termination falls outside the ‘provision of cable service.’”[62] In the NPRM, the Commission “tentatively conclude[s] that the courts’ logic” in Spectrum Northeast “applies to the rebate requirement for blackouts.”[63]

If the Commission accepts the court’s logic that a termination event ends service on the consumer side, then it would be reasonable to conclude that the end of a retransmission or carriage agreement similarly ends service. To base a rebate on a prior agreement would mean basing the rebate on a fiction—an agreement that does not exist.

To illustrate, consider Dish Network’s hypothetical. Assume the initial agreement is Option 2 ($2 for the local station and $2 for the Tennis Channel, both on tiers serving all subscribers). The negotiations stall, leading to a blackout. Assume the parties eventually agree to Option 1, in which the Tennis Channel is no longer carried. Would subscribers be due a rebate for a channel that is no longer carried? Or, if the parties instead agree to Option 3 ($2 for the local station on a tier serving all subscribers and $4 for the Tennis Channel on a tier serving 50% of subscribers), would all subscribers be due a $2 rebate for the Tennis Channel, or would half of subscribers be due a $4 rebate? There is no “good” answer because any answer is necessarily arbitrary and devoid of economic logic.

As noted above, many retransmission and carriage agreements involve “bundles” of programming,[64] as well as “a wide range of pricing and non-pricing terms.”[65] Moreover, ATVA reports that subscribers purchase bundled programming, rather than individual channels, and that consumers are well-aware of bundling when they enter into service agreements with cable operators and DBS providers.[66] NCTA reports that bundling complicates the already-complex challenge of allocating costs across specific channels over specific periods of time.[67] Thus, any attempt to do so with an eye toward mandating rebates during blackouts is likewise arbitrary and devoid of economic logic.

In summary, the Commission is proposing a set of unworkable and arbitrary rules to distribute rebates to consumers during programming blackouts. Even if such rebates could be reasonably and fairly calculated, the sums involved would likely be only a few dollars, and may be as little as a few pennies. In these cases, the enormous costs to the Commission, cable operators, and DBS providers would be many times greater than the rebates provided to consumers. It would be a much better use of the FCC’s and MVPD providers’ resources to abandon this rulemaking process and refrain from mandating rebates for programming blackouts.

[1] Notice of Proposed Rulemaking, In the Matter of Customer Rebates for Undelivered Video Programming During Blackouts, MB Docket No. 24-20 (Jan. 17, 2024), available at https://docs.fcc.gov/public/attachments/FCC-24-2A1.pdf [hereinafter “NPRM”], at para. 1.

[2] See id. at n. 5, 7.

[3] Eric Fruits, Blackout Rebates: Tipping the Scales at the FCC, Truth on the Market (Mar. 6, 2024), https://truthonthemarket.com/2024/03/06/blackout-rebates-tipping-the-scales-at-the-fcc.

[4] NPRM, supra note 1 at para. 10.

[5] NPRM, supra note 1 at para. 13 (proposed rules “provide basic protections for cable customers”) and para. 7 (“How would requiring cable operators and DBS providers to provide rebates or credits change providers’ current customer service relations during a blackout?”).

[6] This is known as the “least-cost avoider” or “cheapest-cost avoider” principle. See Harold Demsetz, When Does the Rule of Liability Matter?, 1 J. Legal Stud. 13, 28 (1972); see generally Ronald Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[7] Comments of DISH Network LLC, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030975783920/1 [hereinafter “DISH Comments”], Exhibit 1, Declaration of William Zarakas & Jeremy Verlinda [hereinafter “Zarakas & Verlinda”] at ¶ 8.

[8] Comments of the American Television Alliance, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/103082522212825/1 [hereinafter “ATVA Comments”] at i and 2 (“Broadcasters and programmers cause blackouts. This is, of course, true as a legal matter, as cable and satellite providers cannot lawfully deliver programming to subscribers without the permission of the rightsholder. It makes no sense to say that a cable or satellite provider has ‘blacked out’ programming by failing to obtain permission to carry it. A programmer ‘blacks out’ programming by declining to grant such permission.”).

[9] Comments of NTCA—The Rural Broadband Association, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308589412414/1 [hereinafter “NTCA Comments”] at 2.

[10] Comments of the National Association of Broadcasters, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030894019700/1 [hereinafter “NAB Comments”] at 4-5.

[11] Fruits, supra note 3.

[12] Eun-A Park, Rob Frieden, & Krishna Jayakar, Factors Affecting the Frequency and Length of Blackouts in Retransmission Consent Negotiations: A Quantitative Analysis, 22 Int’l. J. Media Mgmt. 117 (2020).

[13] Id. at 131.

[14] NPRM, supra note 1 at paras. 4, 6 (“We seek comment on whether and how to require cable operators and DBS providers to give their subscribers rebates when they blackout a channel due to a retransmission consent dispute or a failed negotiation for carriage of a non-broadcast channel.”); id. at para. 9 (“We tentatively conclude that sections 335 and 632 of the Act provide us with authority to require cable operators and DBS providers to issue a rebate to their subscribers when they blackout a channel.”) [emphasis added].

[15] See Zarakas & Verlinda, supra note 7 at para. 14 (blackouts are costly “in the form of lost subscribers and higher incidence of retention rebates”).

[16] Comments of the International Center for Law & Economics, MB Docket No. 23-405 (Feb. 5, 2024), https://www.fcc.gov/ecfs/document/10204246609086/1 at 9-10 (“In its latest quarterly report to the Securities and Exchange Commission, DISH Network reported that it incurs ‘significant upfront costs to acquire Pay-TV’ subscribers, amounting to subscriber acquisition costs of $1,065 per new DISH TV subscriber. The company also reported that it incurs ‘significant’ costs to retain existing subscribers. These retention costs include upgrading and installing equipment, as well as free programming and promotional pricing, ‘in exchange for a contractual commitment to receive service for a minimum term.’”)

[17] See NPRM, supra note 1 at paras. 4, 8, 10 (using “make whole” language).

[18] See id. at n. 7, citing Spectrum Residential Video Service Agreement (“In the event particular programming becomes unavailable, either on a temporary or permanent basis, due to a dispute between Spectrum and a third party programmer, Spectrum shall not be liable for compensation, damages (including compensatory, direct, indirect, incidental, special, punitive or consequential losses or damages), credits or refunds of fees for the missing or omitted programming. Your sole recourse in such an event shall be termination of the Video Services in accordance with the Terms of Service.”) and para. 6 (“To the extent that the existing terms of service between a cable operator or DBS provider and its subscriber specify that the cable operator or DBS provider is not liable for credits or refunds in the event that programming becomes unavailable, we seek comment on whether to deem such provisions unenforceable if we were to adopt a rebate requirement.”)

[19] ATVA Comments, supra note 8 at 11.

[20] NPRM, supra note 1 at para. 6.

[21] See ATVA Comments, supra note 8 at 3 (“The Commission seeks information on the extent to which MVPDs grant rebates today. The answer is that, in today’s competitive marketplace, many ATVA members provide credits, with significant variations both among providers and among classes of subscribers served by individual providers. This, in turn, suggests that cable and satellite companies already address the issues identified by the Commission, but in a more nuanced and individualized manner than proposed in the Notice.”). See also id. at 5-6 (reporting DIRECTV provides credits to existing customers and makes the offer of credits easy to find online or via customer service representatives). See also id. at 7 (reporting DIRECTV and DISH provide credits to requesting subscribers and Verizon compensates subscribers “in certain circumstances”).

[22] See Zarakas & Verlinda, supra note 7 at para. 21 (“DISH provides certain offers to requesting customers in the case of programming blackouts, which may include a $5 per month credit, a free over-the-air antenna for big 4 local channel blackouts, or temporary free programming upgrades for cable network blackouts.”).

[23] See id. at para. 21.

[24] See ATVA Comments, supra note 8 at 4 (“If Disney blacks out ESPN on a cable system, for example, subscribers still have many ways to get ESPN. This includes both traditional competitors to cable (which are losing subscribers) and a wide array of online video providers (which are gaining subscribers).”); Comments of Verizon, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308316105453/1 [hereinafter “Verizon Comments”] at 12 (“In today’s competitive marketplace, consumers have many options for viewing broadcasters’ content in the event of a blackout — they can switch among MVPDs, or forgo MVPD services altogether and watch on a streaming platform or over the air. And when a subscriber switches or cancels service, it is extremely costly for video providers to win them back.”); DISH Comments, supra note 7 at 7 (“[L]ocal network stations have also been able to use another lever: the phenomenal success of over-the-top video streaming and the emergence of several online video distributors (‘OVDs’), some of which have begun incorporating local broadcast stations in their offerings.”); Comments of the New York State Public Service Commission, MB Docket 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308156370046/1 [hereinafter “NYPSC Comments”] at 2 (identifying streaming services and Internet Protocol Television (IPTV) providers such as YouTube TV, Sling, and DirecTV Stream as available alternatives).

[25] See ATVA Comments, supra note 8 at 4.

[26] NYPSC Comments, supra note 24 at 2.

[27] ATVA Comments, supra note 8 at 4-5.

[28] NTCA Comments, supra note 9 at 3; see Luke Bouma, Another Cable TV Company Announces It Will Shut Down Its TV Service Because of “Extreme Price Increases from Programmers,” Cord Cutters News (Dec. 10, 2023), https://cordcuttersnews.com/another-cable-tv-company-announces-it-will-shut-down-its-tv-service-because-of-extreme-price-increases-from-programmers (reporting the announced shutdown of DUO Broadband’s cable TV and streaming TV services because of increased programming fees, affecting several Kentucky counties).

[29] ATVA Comments, supra note 8 at note 15; DISH Comments, supra note 7 at 3, 8; NAB Comments, supra note 10 at 5; Comments of NCTA—The Internet & Television Association, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030958439598/1 [hereinafter “NCTA Comments”] at 2, 11.

[30] See ATVA Comments, supra note 8 at n. 19 (“With more distributors, programmers ‘lose less’ if they fail to reach agreement with any individual cable or satellite provider.”); Zarakas & Verlinda, supra note 7 at para. 6 (“This bargaining power has been further exacerbated by the increase in the number of distribution platforms coming from the growth of online video distributors. The bargaining leverage of cable networks has also received a boost from the proliferation of distribution platforms.”); id. at para. 13 (“Growth of OVDs has reduced MVPD bargaining leverage”).

[31] See DISH Comments, supra note 7 at 6 (“For one thing, the consolidation of the broadcast industry over the last ten years has exacerbated the imbalance further. This consolidation, fueled itself by the broadcasters’ interest in ever-steeper retransmission price increases, has effectively been a game of “and then there were none,” with small independent groups of two or three stations progressively vanishing from the picture.”); Zarakas & Verlinda, supra note 7 at para. 6 (concluding consolidation among local networks is associated with increased retransmission fees).

[32] See Jeffrey A. Eisenach, The Economics of Retransmission Consent, at 9 n.22 (Empiris LLC, Mar. 2009), available at https://nab.org/documents/resources/050809EconofRetransConsentEmpiris.pdf.

[33] See Robert Marich, TV Faces Blackout Blues, Variety (Dec. 10, 2011), https://variety.com/2011/tv/news/tv-faces-blackout-blues-1118047261.

[34] See Economics of Broadcast TV Retransmission Revenue 2020, S&P Global Mkt. Intelligence (2020), https://www.spglobal.com/marketintelligence/en/news-insights/blog/economics-of-broadcast-tv-retransmission-revenue-2020.

[35] Cf. Zarakas & Verlinda, supra note 7 at para. 6.

[36] Retransmission Fee Revenue for U.S. Local TV Stations, Pew Research Center (Jul. 2022), https://www.pewresearch.org/journalism/chart/sotnm-local-tv-u-s-local-tv-station-retransmission-fee-revenue; Advertising Revenue for Local TV, Pew Research Center (Jul. 13, 2021), https://www.pewresearch.org/journalism/chart/sotnm-local-tv-advertising-revenue-for-local-tv.

[37] DISH Comments, supra note 7 at 4.

[38] Park, et al., supra note 12 at 118 (“With stations receiving more retransmission compensation, a new phenomenon has also emerged since the 2010s: reverse retransmission revenues, whereby networks receive a portion of their affiliates and owned-and-operated stations’ retransmission revenues. As retransmission fees have become more important to television stations, broadcast networks and MVPDs, negotiations over contract terms and fees have become more contentious and protracted.”).

[39] Marich, supra note 33.

[40] DISH Comments, supra note 7 at 11.

[41] NCTA Comments, supra note 29 at 2.

[42] See Zarakas & Verlinda, supra note 7 at para. 15 (citing George S. Ford, A Retrospective Analysis of Vertical Mergers in Multichannel Video Programming Distribution Markets: The Comcast-NBCU Merger, Phoenix Ctr. for Advanced L. & Econ. Pub. Pol’y Studies (Dec. 2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3138713).

[43] Aaron Heresco & Stephanie Figueroa, Over the Top: Retransmission Fees and New Commodities in the U.S. Television Industry, 29 Democratic Communiqué 19, 36 (2020).

[44] NTCA Comments, supra note 9 at 3.

[45] NPRM, supra note 1 at paras. 6-8.

[46] See supra notes 21-22 and accompanying text.

[47] NPRM, supra note 1 at n. 9.

[48] See id. at para. 6.

[49] See id. at n.12 (citing Alex Weprin, Weather Channel Brushes Off a Blackout, Politico (Feb. 6, 2014), https://www.politico.com/media/story/2014/02/weather-channel-brushes-off-a-blackout-001667); David Lieberman, The Weather Channel Returns To DirecTV, Deadline (Apr. 8, 2014), https://deadline.com/2014/04/the-weatherchannel-returns-directv-deal-711602.

[50] See U.S. Television Networks, USTVDB (retrieved Mar. 28, 2024), https://ustvdb.com/networks.

[51] See Lieberman, supra note 49.

[52] See NPRM, supra note 1 at para. 6.

[53] See NCTA Comments, supra note 29 at 5.

[54] See id. at 3; see also Lieberman, supra note 49 (indicating that carriage consent agreement ending a blackout of The Weather Channel on DIRECTV required The Weather Channel to cut its reality programming by half on weekdays).

[55] See Alex Weprin & Lesley Goldberg, What’s Next for Freeform After Being Dropped by Charter, Hollywood Reporter (Dec. 14, 2023), https://www.hollywoodreporter.com/tv/tv-news/freeform-disney-charter-hulu-1235589827 (reporting that Freeform is a Disney-owned cable channel that currently caters to younger women; the channel began as a spinoff of the Christian Broadcasting Network, was subsequently rebranded as The Family Channel, then Fox Family Channel, and then ABC Family, before rebranding as Freeform).

[56] See NCTA Comments, supra note 29 at 5.

[57] Verizon Comments, supra note 24 at 13 (“Also, as the Commission points out, ‘What if the parties never reach an agreement for carriage? Would subscribers be entitled to rebates in perpetuity and how would that be calculated?’ The absurdity of these questions underscores the absurdity of the proposed regulation.”)

[58] See DISH Comments, supra note 7 at 13.

[59] Id.; see also id. at 22 (“Broadcasters increasingly demand that an MVPD agree to carry other broadcast stations or cable networks as a condition of obtaining retransmission consent for the broadcaster’s primary signal, without giving a real economic alternative to carrying just the primary signal(s).”)

[60] ATVA Comments, supra note 8 at 13 (“here is the additional complication that cable and satellite companies generally agree to confidentiality provisions with broadcasters and programmers—typically at the insistence of the broadcaster or programmer”); DISH Comments, supra note 7 at 21 (reporting broadcasters and programmers “insist” on confidentiality); NCTA Comments, supra note 29 at 6 (“It also bears emphasis that this approach would necessarily publicly expose per-subscriber rates and other highly confidential business information, and that the contracts between the parties prohibit disclosure of this and other information that each find competitively sensitive.”).

[61] NPRM, supra note 1 at para. 8.

[62] Spectrum Northeast, LLC v. Frey, 22 F.4th 287, 293 (1st Cir. 2022), cert denied, 143 S. Ct. 562 (2023); see also In the Matter of Promoting Competition in the American Economy: Cable Operator and DBS Provider Billing Practices, MB Docket No. 23-405, at n. 55 (Jan. 5, 2024), available at https://docs.fcc.gov/public/attachments/DOC-398660A1.pdf.

[63] NPRM, supra note 1 at para. 13.

[64] See supra note 59 and accompanying text for an example of a bundle.

[65] NCTA Comments, supra note 29 at 6.

[66] ATVA Comments, supra note 8 at 11.

[67] NCTA Comments, supra note 29 at 6.

Continue reading
Telecommunications & Regulated Utilities

Murthy Oral Arguments: Standing, Coercion, and the Difficulty of Stopping Backdoor Government Censorship

TOTM With Monday’s oral arguments in Murthy v. Missouri, we now have more of a feel for how the U.S. Supreme Court appears to be considering . . .

With Monday’s oral arguments in Murthy v. Missouri, we now have more of a feel for how the U.S. Supreme Court appears to be considering the issues of social-media censorship—in this case, done allegedly at the behest of federal officials.

In the International Center for Law & Economics’ (ICLE) amicus brief in the case, we argued that the First Amendment protects a marketplace of ideas, and government agents can’t intervene in that marketplace by coercing social-media companies into removing disfavored speech. But if the oral arguments are any indication, there are reasons to be skeptical that the Court will uphold the preliminary injunction the district court issued against the government officials (later upheld in a more limited form by the 5th U.S. Circuit Court of Appeals).

Read the full piece here.

Continue reading
Innovation & the New Economy

ICLE Comments to FTC on Children’s Online Privacy Protection Rule NPRM

Regulatory Comments Introduction We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online . . .

Introduction

We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online Privacy Protection Rule (“COPPA Rule”).

The International Center for Law and Economics (ICLE) is a nonprofit, nonpartisan research center whose work promotes the use of law & economics methodologies to inform public-policy debates. We believe that intellectually rigorous, data-driven analysis will lead to efficient policy solutions that promote consumer welfare and global economic growth.[1]

ICLE’s scholars have written extensively on privacy and data-security issues, including those related to children’s online safety and privacy. We also previously filed comments as part of the COPPA Rule Review and will make some of the same points below.[2]

The Children’s Online Privacy Protection Act (COPPA) sought to strike a balance: protecting children online without harming the utility of the internet for them. As Sen. Richard Bryan (D-Nev.) put it when he laid out the purpose of COPPA:

The goals of this legislation are: (1) to enhance parental involvement in a child’s online activities in order to protect the privacy of children in the online environment; (2) to enhance parental involvement to help protect the safety of children in online fora such as chatrooms, home pages, and pen-pal services in which children may make public postings of identifying information; (3) to maintain the security of personally identifiable information of children collected online; and (4) to protect children’s privacy by limiting the collection of personal information from children without parental consent. The legislation accomplishes these goals in a manner that preserves the interactivity of children’s experience on the Internet and preserves children’s access to information in this rich and valuable medium.[3]

In other words, COPPA was designed to protect children from online threats by promoting parental involvement in a way that also preserves a rich and vibrant marketplace for children’s content online. Consequently, the pre-2013 COPPA Rule did not define personal information to include persistent identifiers standing alone. It is these persistent identifiers that are critical for the targeted advertising that funds the interactive online platforms and the creation of children’s content the legislation was designed to preserve.

COPPA applies to the “operator of any website or online service” that is either “directed to children that collects personal information from children” or that has “actual knowledge that it is collecting personal information from a child.”[4] These operators must “obtain verifiable parental consent for the collection, use, or disclosure of personal information.” The NPRM, following the mistaken 2013 amendments to the COPPA Rule, continues to define “personal information” to include persistent identifiers that are necessary for the targeted advertising undergirding the internet ecosystem.

Below, we argue that, before the FTC moves further toward restricting platform operators’ and content creators’ ability to monetize their work through targeted advertising, it must consider the economics of multisided platforms. The current path will lead to less available free content for children and more restrictions on their access to online platforms that depend on targeted advertising. Moreover, the proposed rules are inconsistent with the statutory text of COPPA, as persistent identifiers do not by themselves enable contacting specific individuals. Including them in the definition of “personal information” is also contrary to the statute’s purpose, as it will lead to a less vibrant internet ecosystem for children.

Finally, there are better ways to protect children online, including by promoting the use of available technological and practical solutions to avoid privacy harms. To comply with existing First Amendment jurisprudence regarding online speech, it is necessary to rely on these less-restrictive means to serve the goal of protecting children without unduly impinging their speech interests online.

I. The Economics of Online Multisided Platforms

Most of the “operators of websites and online services” subject to the COPPA Rule are what economists call multisided markets, or platforms.[5] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multisided platforms generate “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[6]

Online platforms provide content to one side and access to potential consumers on the other side. In order to keep demand high, online platforms often offer free access to users, whose participation is subsidized by those participants on the other side of the platform (such as advertisers) that wish to reach them.[7] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.

This dynamic is also true of platforms with content “directed to children.” Revenue is collected not from those users, but primarily from the other side of the platform—i.e., advertisers who pay for access to the platform’s users. To be successful, online platforms must keep enough—and the right type of—users engaged to maintain demand for advertising.
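To make this cross-subsidy concrete, the minimal sketch below works through hypothetical numbers (the user-demand function and the per-user advertising rate are illustrative assumptions, not estimates of any actual platform): because each additional user generates advertising revenue, the platform can earn more by setting the user-side price at zero than by charging users directly.

```python
# A minimal sketch of two-sided platform pricing, assuming simple linear user demand.
# All numbers are hypothetical illustrations, not estimates of any real platform.

def users_joining(user_price: float) -> float:
    """Hypothetical user demand: participation falls as the price charged to users rises."""
    return max(0.0, 1000 - 400 * user_price)

def platform_profit(user_price: float, ad_rate_per_user: float = 2.0) -> float:
    """Profit = user fees + advertising revenue; ad revenue scales with user participation."""
    n = users_joining(user_price)
    return n * user_price + n * ad_rate_per_user

if __name__ == "__main__":
    for p in (0.0, 1.0, 2.0):
        print(f"user price ${p:.2f}: {users_joining(p):6.0f} users, profit ${platform_profit(p):,.0f}")
    # With these illustrative numbers, a zero price on the user side maximizes profit:
    # the advertiser side subsidizes free access, which is the indirect network effect at work.
```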

Moreover, many “operators” under COPPA are platforms that rely on user-generated content. Thus, they must also consider how to attract and maintain high-demand content creators, often accomplished by sharing advertising revenue. If platforms fail to serve the interests of high-demand content creators, those creators may leave the platform, thus reducing its value.

Online platforms acting within the market process are usually the parties best positioned to make decisions on behalf of platform users. Operators with content directed to children may even compete on privacy policies and protections for children by providing tools to help users avoid what they (or, in this context, their parents and guardians) perceive to be harms, while keeping users on the platform and maintaining value for advertisers.[8]

There may, however, be examples where negative externalities[9] stemming from internet use are harmful to society more broadly. A market failure could result, for instance, if platforms’ incentives lead them to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful for children or keeps them hooked to using the platform.

In situations where there are negative externalities from internet use, there may be a case to regulate online platforms in various ways. Any case for regulation must, however, acknowledge potential transaction costs, as well as how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[10] and elaborated on in the subsequent literature,[11] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his candy-making machine is a potential cost to the doctor next door, who consequently cannot use his office to conduct certain testing. Simultaneously, the doctor moving his office next door to the confectioner is a potential cost to the confectioner’s ability to use his equipment.

In a world of well-defined property rights and low transaction costs, the initial allocation of rights would not matter, because the parties could bargain to overcome the harm in a mutually beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or conversely, the doctor could pay the confectioner to reduce the sound of his machines.[12] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[13]
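A short numerical sketch can make the least-cost-avoider logic concrete. The cost figures below are hypothetical assumptions chosen only for illustration; they are not drawn from Coase or the later literature.

```python
# A toy comparison of liability assignments in Coase's confectioner/doctor example.
# The cost figures are hypothetical, chosen only to illustrate the comparison.

SOUNDPROOFING_COST = 300    # what it costs the confectioner to muffle the machine
RELOCATION_COST = 1_000     # what it costs the doctor to move or forgo the sensitive testing
TRANSACTION_COST = 500      # cost of negotiating and enforcing a bargain between the two

def total_social_cost(burden_on: str) -> int:
    """Total cost to society when the named party bears the burden of avoiding the harm."""
    own_avoidance = SOUNDPROOFING_COST if burden_on == "confectioner" else RELOCATION_COST
    # The burdened party bargains around the assignment only if paying the other side
    # to avoid the harm (plus transaction costs) is cheaper than avoiding it directly.
    bargain_around = TRANSACTION_COST + min(SOUNDPROOFING_COST, RELOCATION_COST)
    return min(own_avoidance, bargain_around)

for party in ("confectioner", "doctor"):
    print(f"burden on {party:12s}: total social cost = {total_social_cost(party)}")
# With these numbers, total cost is 300 when the burden sits with the confectioner (the
# lower-cost avoider here) and 800 when it sits with the doctor, because transaction costs
# block the corrective bargain.
```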

In the context of the COPPA Rule, website operators and online services create incredible value for their users, but they also can, at times, impose negative externalities on the children who use their services. In the absence of transaction costs, it would not matter whether operators must obtain verifiable parental consent before collecting, using, or disclosing personal information, or whether the initial burden is placed on parents and children to avoid the harms associated with such collection, use, or disclosure.

But given that there are transaction costs involved in obtaining (and giving) verifiable parental consent,[14] it matters how the law defines personal information (which serves as a proxy for a property right, in Coase’s framing). If personal information is defined too broadly and the transaction costs for providers to gain verifiable parental consent are too high, the result may be that the societal benefits of children’s internet use will be lost, as platform operators restrict access beyond the optimum level.

The threat of liability for platform operators under COPPA also risks excessive collateral censorship.[15] This arguably has already occurred, as operators like YouTube have restricted content creators’ ability to monetize their work through targeted advertising, leading on balance to less children’s content. By wrongly placing the burden on operators to avoid harms associated with targeted advertising, societal welfare is reduced, including the welfare of children who no longer get the benefits of that content.

On the other hand, there are situations where website operators and online services are the least-cost avoiders. For example, they may be the parties best-placed to monitor and control harms associated with internet use in cases where it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[16] In other words, operators should still be held liable under COPPA when they facilitate adults’ ability to message children, or to identify a child’s location without parental consent, in ways that could endanger children.[17] Placing the burden on children or their parents to avoid such harms could allow operators to impose un- or undercompensated harms on society.

Thus, in order to get the COPPA Rule’s balance right, it is important to determine whether it is the operators or their users who are the least-cost avoiders. Placing the burden on the wrong parties would harm societal welfare, either by reducing the value that online platforms confer to their users, or in placing more uncompensated negative externalities on society.

II. Persistent Identifiers and ‘Personal Information’

As mentioned above, under COPPA, a website operator or online service that is either directed to children or that has actual knowledge that it collects personal information from a child must obtain “verifiable parental consent” for the “collection, use or disclosure” of that information.[18] But the NPRM continues to apply the expanded definition of “personal information” to include persistent identifiers from the 2013 amendments.

COPPA’s definition of personal information is “individually identifiable information” collected online.[19] The legislation included examples such as first and last name, home or other physical address, email address, telephone number, and Social Security number.[20] These are all identifiers obviously connected to people’s real identities. COPPA does empower the FTC to determine whether other identifiers should be included, but any such identifier must permit “the physical or online contacting of a specific individual”[21] or constitute “information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.”[22]

In 2013, the FTC amended the definition of personal information to include:

A persistent identifier that can be used to recognize a user over time and across different Web sites or online services. Such persistent identifier includes, but is not limited to, a customer number held in a cookie, an Internet Protocol (IP) address, a processor or device serial number, or unique device identifier.[23]

The NPRM here continues this error.

Neither IP addresses nor device identifiers alone “permit the physical or online contacting of a specific individual,” as required by 15 U.S.C. § 6501(8)(F). A website or app could not determine a person’s identity, or whether that person is an adult or a child, from these pieces of information alone. In order for persistent identifiers, like those relied upon for targeted advertising, to be counted as personal information under 15 U.S.C. § 6501(8)(G), they need to be combined with other identifiers listed in the definitions. In other words, it is only when a persistent identifier is combined with a first and last name, an address, an email, a phone number, or a Social Security number that it should be considered personal information protected by the statute.

While administrative agencies receive Chevron deference in court challenges when definitions are ambiguous, this text, when illuminated by canons of statutory construction,[24] is clear. The canon of ejusdem generis applies when general words follow an enumeration of two or more things.[25] The general words are taken to apply only to persons or things of the same general kind or class as those mentioned specifically. Persistent identifiers, such as cookies, bear little resemblance to the other examples of “personally identifiable information” listed in the statute, such as first and last name, address, phone, email, or Social Security number. Only when combined with such information could a persistent identifier become personal information.

The NPRM states that the Commission is “not persuaded” by this line of argumentation, pointing back to the same reasoning offered in the 2013 amendments. It asserts that “the reality that at any given moment a specific individual is using that device” is what “underlies the very premise behind behavioral advertising.”[26] Moreover, the NPRM reasons that “while multiple people in a single home often use the same phone number, home address, and email address, Congress nevertheless defined these identifiers as ‘individually identifiable information’ in the COPPA statute.”[27] But this reasoning is flawed.

While multiple people regularly share an address, and sometimes even a phone number or email, each of these identifiers allows for contacting an individual person in a way that a persistent identifier simply does not. In each of those cases, bad actors can use such information to send direct messages to people (phone numbers and emails), to find their physical location (address), and potentially to cause them harm.

A persistent identifier, on its own, is not the same. Without subpoenaing an internet service provider (ISP) or virtual private network (VPN) provider, a bad actor intending harm could neither determine where the person to whom the persistent identifier is assigned is located nor message that person directly. Persistent identifiers are useful primarily to online platforms in supporting their internal operations (which the NPRM continues to allow) and serving users targeted advertising.

Moreover, the fact that bills seeking to update COPPA—proposed but never passed by Congress—have proposed expanding the definition of personal information to include persistent identifiers suggests that the FTC has asserted authority that it does not have under the current statute.[28] Under Supreme Court precedent,[29] when considering whether an agency has the authority that it claims to pass rules, courts must consider whether Congress has rejected proposals to expand the agency’s jurisdiction in similar ways.

The NPRM also ignores the practical realities of the relationship between parents and children when it comes to devices and internet use. Parental oversight is already built into any type of advertisement (including targeted ads) that children see. Few children can view those advertisements without their parents providing them a device and the internet access to do so. Even fewer children can realistically make their own purchases. Consequently, the NPRM misunderstands targeted advertising in the context of children’s content, which is not based on any knowledge about the users as individuals, but on the browsing and search history of the device they happen to be using.

Children under age 13, in particular, are extremely unlikely to have purchased the devices they use; to have paid for the internet access to use those devices; or to have any disposable income or means to pay for goods and services online. Thus, contrary to the NPRM’s assumptions, the actual “targets” of this advertising—even on websites or online services that host children’s content—are the children’s parents.

This NPRM continues the 2013 amendments’ mistake and will continue to greatly reduce the ability of children’s content to generate revenue through the use of relatively anonymous persistent identifiers. As we describe in the next section, the damage done by the 2013 amendments is readily apparent, and the Commission should take this opportunity to rectify the problem.

III. More Parental Consent, Less Children’s Content

As outlined above, in a world without transaction costs—or, at least, one in which such costs are sufficiently low—verifiable parental consent would not matter, because it would be extremely easy for a bargain to be struck between operators and parents. In the real world, however, transaction costs exist. In fact, despite the FTC’s best efforts under the COPPA Rule, the transaction costs associated with obtaining verifiable parental consent continue to be sufficiently high as to prevent most operators from seeking that consent for persistent identifiers. As we stated in our previous comments, the economics are simple: if content creators lose access to revenue from targeted advertising, there will be less content created from which children can benefit.

FIGURE 1: Supply Curve for Children’s Online Content

The supply curve for children’s online content shifts left as the marginal cost of monetizing it increases. The marginal cost of monetizing such content is driven upward by the higher compliance costs of obtaining verifiable parental consent before serving targeted advertising. This supply shift means that less online content will be created for children.
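A stylized supply-and-demand sketch illustrates the shift described above. The linear functions and the per-unit compliance-cost values below are hypothetical placeholders, not estimates of the actual market for children’s content.

```python
# A stylized sketch of the supply shift in Figure 1.
# The linear supply/demand functions and compliance-cost values are hypothetical placeholders.

def quantity_supplied(price: float, compliance_cost: float) -> float:
    """Hypothetical linear supply: creators produce only when price covers marginal cost."""
    return max(0.0, 100 * (price - compliance_cost))

def quantity_demanded(price: float) -> float:
    """Hypothetical linear demand for children's online content."""
    return max(0.0, 500 - 100 * price)

def equilibrium(compliance_cost: float):
    # Setting 100 * (p - c) = 500 - 100 * p gives p = (5 + c) / 2.
    p = (5 + compliance_cost) / 2
    return p, quantity_demanded(p)

for c in (0.0, 1.0, 2.0):
    p, q = equilibrium(c)
    print(f"per-unit compliance cost {c:.1f}: price {p:.2f}, content produced {q:.0f}")
# As the per-unit cost of compliance rises, equilibrium output of children's content falls:
# the leftward supply shift described above.
```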

These results are not speculative at this point. Scholars who have studied the issue have found the YouTube settlement, made pursuant to the 2013 amendments, has resulted in less child-directed online content, due to creators’ inability to monetize that content through targeted advertising. In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[30] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[31]

By requiring verifiable parental consent, the rule change and settlement increased the transaction costs imposed on online platforms that host content created by others. YouTube’s economically rational response was to restrict content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The result was less content created for children, including by driving out less-profitable content creators:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[32]

This is not the only finding regarding COPPA’s role in reducing the production of content for children. Morgan Reed—president of the App Association, a global trade association for small and medium-sized technology companies—presented extensively at the FTC’s 2019 COPPA Workshop.[33] Reed’s testimony detailed that the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children.

It is worth highlighting, in particular, Reed’s repeated use of the words “friction,” “restriction,” and “cost” to describe how COPPA’s institutional features affect the behavior of social-media platforms, parents, and children. While noting that general audience content is “unfettered, meaning that you do not feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” Reed said that COPPA-regulated apps and content are, by contrast, all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[34]

Reed’s use of the word “friction” is particularly enlightening. The economist Mike Munger of Duke University has often described transaction costs as frictions—explaining that, to consumers, all costs are transaction costs.[35] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents receive less quality children’s apps and content.

Thus, when the NPRM states that “the Commission [doesn’t] find compelling the argument that the 2013 persistent identifier modification has caused harm by hindering the ability of operators to monetize online content through targeted advertising,”[36] in part because “the 2013 Amendments permit monetization… through providing notice and seeking parental consent for the use of personal information for targeted advertising,”[37] it misses how transaction costs prevent this outcome. The FTC should not ignore the data provided by scholars who have researched the question, nor the direct testimony of app developers.

IV. Lower-Cost Ways to Avoid Harms to Children

Widely available practical and technological means are a lower-cost way to avoid the negative externalities associated with internet use, relative to verifiable-parental-consent laws. As NetChoice put it in the complaint the group filed against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[38]

NetChoice’s complaint recognized the subjective nature of negative externalities, stating:

Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[39]

They proceeded to list all the ways that parents can take control and help their children avoid online harms, including with respect to the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[40] Parents can also choose to use tools offered by cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[41]

NetChoice also pointed to wireless routers that allow parents to filter and monitor online content;[42] parental controls at the device level;[43] third-party filtering applications;[44] and numerous tools offered by NetChoice members that offer relatively low-cost monitoring and control by parents, or even by teen users acting on their own behalf.[45] Finally, they noted that, in response to market demand,[46] NetChoice members expend significant resources curating content to ensure that it is appropriate.[47]

Similarly, parents can protect their children’s privacy simply by taking control of the devices they allow their children to use. Tech-savvy parents can, if they so choose, install software or use ad-blockers to prevent collection of persistent identifiers.[48] Even less tech-savvy parents can make sure that their children are not subject to ads and tracking simply by monitoring their device usage and ensuring they only use YouTube Kids or other platforms created explicitly for children. In fact, most devices and operating systems now have built-in, easy-to-use controls that enable both monitoring and blocking of children’s access to specific apps and websites.[49]

This litany of less-restrictive means to accomplish the goal of protecting children online bears repeating, because even children have some First Amendment interests in receiving online speech.[50] If a court were to examine the COPPA Rule as a speech regulation that forecloses children’s access to online content, it would be subject to strict scrutiny. This means the rules would need to be the least-restrictive possible in order to fulfill the statute’s purpose. Educating parents and children on the available practical and technological means to avoid harms associated with internet use, including the collection of data for targeted advertising, would clearly be a less-restrictive alternative to a de facto ban of targeted advertising.

A less-restrictive COPPA rule could still enhance parental involvement and protect children from predators without impairing the marketplace for children’s online content significantly. Parents already have the ability to review their children’s content-viewing habits on devices they buy for them. A COPPA rule that enhances parental control by requiring verifiable parental consent when children are subject to sharing personal information—like first and last name, address, phone number, email address, or Social Security number—obviously makes sense, along with additions like geolocation data. But it is equally obvious that it is possible to avoid, at lower cost, the relatively anonymized collection of persistent identifiers used to support targeted ads through practical and technological means, without requiring costly verifiable parental consent.

V. Perils of Bringing More Entities Under the COPPA Rule

The costs of the COPPA Rule would be further exacerbated by the NPRM’s proposal to modify the criteria for determining whether a site or service is directed toward children.[51] These proposed changes, particularly the reliance on third-party services and comparisons with “similar websites or online services,” raise significant concerns about both their practical implementation and potential unintended consequences. The latter could include further losses of online content for both children and adults, as content creators drawn into COPPA’s orbit lose access to revenue from targeted advertising.

The FTC’s current practice employs a multi-factor test to ascertain whether a site or service is directed at children under 13. This comprehensive approach considers various elements, including subject matter, visual and audio content, and empirical evidence regarding audience composition.[52] The proposed amendments aim to expand this test by introducing such factors as marketing materials, representations to third parties and, notably, reviews by users or third parties and comparisons with similar websites or services.[53]

The inclusion of third-party reviews and comparisons with similar services as factors in determining a site’s target audience introduces a level of ambiguity and unreliability that would be counterproductive to COPPA’s goals. Without clear standards to evaluate their competence or authority, relying on third-party reviews would leave operators without a solid foundation upon which to assess compliance. This ambiguity could lead to overcompliance. In particular, online platforms that carry third-party content may err on the side of caution in order to align with the spirit of the rule. This threatens to stifle innovation and free expression by restricting creators’ ability to monetize content that has any chance to be considered “directed to children.” Moreover, to avoid this loss of revenue, content creators could shift their focus exclusively to content clearly aimed only at adults, rather than that which could be interesting to adults and children alike.

Similarly, the proposal to compare operators with “similar websites or online services” is fraught with challenges. The lack of guidance on how to evaluate similarity or to determine which service sets the standard for compliance would increase burdens on operators, with little evidence of tangible realized benefits. It’s also unclear who would make these determinations and how disputes would be resolved, leading to further compliance costs and potential litigation. Moreover, operators may be left in a position where it is impractical to accurately assess the audience of similar services, thereby further complicating compliance efforts.

Given these considerations, the FTC should not include reliance on third-party services or comparisons with similar websites or online services in its criteria for determining whether content is directed at children under 13. These approaches introduce a level of uncertainty and unreliability that could lead to overcompliance, increased costs, and unintended negative impacts on online content and services, including further restrictions on content creators who create content interesting to both adults and children. Instead, the FTC should focus on providing clear, direct guidelines that allow operators to assess their compliance with COPPA confidently, without the need to rely on potentially biased or manipulative third-party assessments. This approach will better serve the FTC’s goal of protecting children’s online privacy, while ensuring a healthy, innovative online ecosystem.

Conclusion

The FTC should reconsider the inclusion of standalone persistent identifiers in the definition of “personal information.” The NPRM continues to enshrine the primary mistake of the 2013 amendments. This change was inconsistent with the purposes and text of the COPPA statute. It already has reduced, and will continue to reduce, the availability of children’s online content.

[1] ICLE has received financial support from numerous companies, organizations, and individuals, including firms with interests both supportive of and in opposition to the ideas expressed in this and other ICLE-supported works. Unless otherwise noted, all ICLE support is in the form of unrestricted, general support. The ideas expressed here are the authors’ own and do not necessarily reflect the views of ICLE’s advisors, affiliates, or supporters.

[2] Portions of these comments are adapted from ICLE’s 2019 COPPA Rule Review Comments, available at https://laweconcenter.org/wp-content/uploads/2019/12/COPPA-Comments-2019.pdf; Ben Sperry, A Law & Economics Approach to Social-Media Regulation, CPI TechREG Chronicle (Feb. 29, 2022), https://laweconcenter.org/resources/a-law-economics-approach-to-social-media-regulation; Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes (ICLE Issue Brief, Nov. 9, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[3] 144 Cong. Rec. 11657 (1998) (Statement of Sen. Richard Bryan), available at https://www.congress.gov/crec/1998/10/07/CREC-1998-10-07.pdf#page=303.

[4] 15 U.S.C. § 6502(b)(1)(A).

[5] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[6] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[7] For instance, many nightclubs hold “ladies’ night” events in which female patrons receive free admission or discounted drinks in order to attract more men, who pay full fare for both.

[8] See, e.g., Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 16, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.

[9] An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when an action imposes costs on third parties who are not compensated for them.

[10] See Ronald H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[11] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[12] See Coase, supra note 10, at 8-10.

[13] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[14] See Part III below.

[15] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[16] See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[17] See Statement of Commissioner Alvaro M. Bedoya On the Issuance of the Notice of Proposed Rulemaking to Update the Children’s Online Privacy Protection Rule (COPPA Rule), at 3-4 (Dec. 20, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/BedoyaStatementonCOPPARuleNPRMFINAL12.20.23.pdf (listing examples of these types of enforcement actions).

[18] 15 U.S.C. § 6502(b)(1)(A)(ii).

[19] 15 U.S.C. § 6501(8).

[20] 15 U.S.C. § 6501(8)(A)-(E).

[21] 15 U.S.C. § 6501(8)(F).

[22] 15 U.S.C. § 6501(8)(G).

[23] 16 CFR § 312.2 (Personal information)(7).

[24] See Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U. S. 837, 843 n.9 (1984) (“If a court, employing traditional tools of statutory construction, ascertains that Congress had an intention on the precise question at issue, that intention is the law and must be given effect.”).

[25] What is EJUSDEM GENERIS?, The Law Dictionary: Featuring Black’s Law Dictionary Free Online Legal Dictionary 2nd Ed. (last accessed Dec. 9, 2019), https://thelawdictionary.org/ejusdem-generis.

[26] NPRM at 2043.

[27] Id.

[28] See, e.g., Children and Teens’ Online Privacy Protection Act, S. 1418, § 2(a)(3), 118th Cong. (2024).

[29] See FDA v. Brown & Williamson, 529 U.S. 120, 148-50 (2000).

[30] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[31] Id. at 6-7 (emphasis added).

[32] Id. at 1.

[33] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.

[34] Id. at 6 (emphasis added).

[35] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.

[36] NPRM at 2043.

[37] Id. at 2034, n. 121.

[38] See NetChoice Complaint, NetChoice LLC v. Griffin, No. 5:23-CV-05105, 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), available at https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.

[39] Id. at para. 13.

[40] See id. at para. 14.

[41] See id.

[42] See id. at para. 15.

[43] See id. at para. 16.

[44] See id.

[45] See id. at paras. 17, 19-21.

[46] Sperry, supra note 8.

[47] See NetChoice Complaint, supra note 38, at para. 18.

[48] See, e.g., Mary James & Catherine McNally, The Best Ad Blockers 2024, all about cookies (last updated Feb. 29, 2024), https://allaboutcookies.org/best-ad-blockers.

[49] See, e.g., Parental Controls for Apple, Android, and Other Devices, internet matters (last accessed Mar. 7, 2024), https://www.internetmatters.org/parental-controls/smartphones-and-other-devices.

[50] See, e.g., Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794-95 (2011); NetChoice, LLC v. Griffin, 2023 WL 5660155, at *17 (W.D. Ark. Aug. 31, 2023) (finding Arkansas’s Act 689 “obviously burdens minors’ First Amendment rights” by “bar[ring] minors from opening accounts on a variety of social media platforms.”).

[51] See NPRM at 2047.

[52] See id. at 2046-47.

[53] Id. at 2047 (“Additionally, the Commission believes that other factors can help elucidate the intended or actual audience of a site or service, including user or third-party reviews and the age of users on similar websites or services.”).

Continue reading
Data Security & Privacy

A Law & Economics Approach to Social-Media Regulation

Popular Media The thesis of this essay is that policymakers must consider what the nature of social media companies as multisided platforms means for regulation. The balance . . .

The thesis of this essay is that policymakers must consider what the nature of social-media companies as multisided platforms means for regulation. The balance struck by social-media companies acting in response to the incentives they face in the market could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means by which users themselves can avoid perceived harms would preserve the benefits of social media to society without the difficult tradeoffs of regulation. Part II will introduce the economics of multisided platforms like social media and how those economics shape these platforms’ incentives. Social-media platforms, acting within the market process, are usually best positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part III will consider situations where social media gives rise to negative externalities and will introduce the least-cost-avoider principle. Usually, social-media users are the least-cost avoiders of harms, but sometimes the platforms themselves are better placed to monitor and control harms. This involves a balance, as social-media regulation threatens collateral censorship or otherwise reduced opportunities to speak and to receive speech. Part IV will then apply the insights from Parts II and III to the areas of privacy, children’s online safety, and speech regulation.

I. Introduction

Policymakers at both the state and federal levels have been actively engaged in recent years with proposals to regulate social media, whether the subject is privacy, children’s online safety, or concerns about censorship, misinformation, and hate speech.[1] While there may not be consensus about precisely why social media is bad, there is broad agreement that the major online platforms are to blame for at least some harms to society. It is also generally recognized, though often not emphasized, that social media brings great value to its users. In other words, there are costs and benefits, and policymakers should be cautious when introducing new laws that would upset the balance that social-media companies must strike in order to serve their users well.

This essay will propose a general approach, informed by the law & economics tradition, to assess when and how social media should be regulated. Part II will introduce the economics of multisided platforms and how they affect social-media platforms’ incentives. The platforms themselves, acting within the market process, are usually best positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part III will consider such externalities and introduce the least-cost-avoider principle. Usually, social-media users are the least-cost avoiders of harms, but the platforms themselves are sometimes better placed to monitor and control harms. This requires a balance, as social-media regulation raises the threat of collateral censorship or otherwise reducing opportunities to speak and to receive speech. Part IV will apply the insights from Parts II and III to the areas of privacy, children’s online safety, and speech regulation.

The thesis of this essay is that policymakers must consider what social-media companies’ status as multisided platforms means for regulation. The balance struck by social-media companies acting in response to the market incentives they face could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means to avoid perceived harms would allow users to preserve the benefits of social media without the difficult tradeoffs of regulation.

II. The Economics of Social-Media Platforms

Mutually beneficial trade is the bedrock of the market process. Entrepreneurs—including those that act through formal economic institutions like business corporations—seek to discover the best ways to serve consumers. Various types of entities help connect those who wish to buy products or services with those who are trying to sell them. Physical marketplaces, set up to facilitate interactions between buyers and sellers, are common around the world. If those marketplaces fail to serve the interests of those who use them, others will likely arise.

Social-media companies are a virtual example of what economists call multi-sided markets or platforms.[2] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multi-sided platforms have “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[3] In some situations, a platform may determine it can only raise revenue from one side of the platform if demand on the other side is high. In such cases, the platform may choose to offer one side free access to the platform to boost such demand, subsidized by participants on the other side.[4] This creates a positive feedback loop in which more participants on one side of the platform lead to more participants on the other.
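
The cross-subsidy logic can be made concrete with a stylized bit of arithmetic (the numbers and notation below are my own illustration, not drawn from the cited literature). Suppose it costs the platform c = $1 to serve an additional user, that user would pay at most $2 for access, and each additional user raises advertisers’ willingness to pay by a = $5. The platform’s contribution from a marginal user is then

\[
\underbrace{(p_u - c)}_{\text{user-side margin}} \;+\; \underbrace{a}_{\text{advertising revenue per user}},
\]

so even at a user price of p_u = $0 the platform nets $4 per user, while any positive price that deters marginal users sacrifices that $4 to capture at most a $1 margin. Whenever the advertising side values marginal users more than those users value access, free, advertiser-subsidized access is the profit-maximizing structure.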

In this sense, social-media companies are much like newspapers or television: by solving a transaction-cost problem,[5] these platforms bring together potential buyers and sellers, providing content to one side and access to consumers on the other. Recognizing that their value lies in reaching users, these platforms sell advertising and offer access to content at a reduced price, often a price of zero. In other words, advertisers subsidize platform users’ access to content.

Therefore, most social-media companies are free for users. Revenue is primarily collected from the other side of the platform—i.e., from advertisers. In effect, social-media companies are attention platforms: They supply content to users, while collecting data for targeted advertisements for businesses who seek access to those users. To be successful, social-media companies must keep enough (and the right type of) users engaged so as to maintain demand for advertising. Social-media companies must curate content that users desire in order to persuade them to spend time on the platform.

But unlike newspapers or television, social-media companies primarily rely on their users to produce content rather than creating their own. Thus, they must also consider how to attract and maintain high-demand content creators, as well as how to match user-generated content to the diverse interests of other users. If they fail to serve the interests of high-demand content creators, those users may leave the platform, thus reducing time spent on the platform by all users, which thereby reduces the value of advertising. Similarly, if they fail to match content to user interests, those users will be less engaged on the platform, reducing its value to advertisers.

Moreover, this means that social-media companies need to balance the interests of advertisers and users. Advertisers may desire more data to be collected for targeting, but users may desire less data collection. Similarly, advertisers may desire more ads, while users may prefer fewer. Advertisers may prefer content that keeps users engaged on the platform, even if it is harmful to society, whether because it is false or hateful or because it leads to mental-health issues for minors. On the other hand, brand-conscious advertisers may not want to run ads next to content with which they disagree. Moreover, users may not want to see certain content. Social-media companies need to strike a balance that optimizes their value, recognizing that losing participants on either side would harm the other.

Usually, social-media companies acting within the market process are going to be best-positioned to make decisions on behalf of their users. Thus, they may create community rules that restrict content that would, on net, reduce user engagement.[6] This could include limitations on hate speech and misinformation. On the other hand, if they go too far in restricting content that users consider desirable, that could reduce user engagement and thus value to advertisers. Social-media companies therefore compete on moderation policies, trying to strike the appropriate balance to optimize platform value. A similar principle applies when it comes to privacy policies and protections for minors: social-media companies may choose to compete by providing tools to help users avoid what they perceive as harms, while keeping users on the platform and maintaining value for advertisers.

There may, however, be scenarios where social media produces negative externalities[7] that are harmful to society. A market failure could result, for instance, if platforms have too great of an incentive to allow misinformation or hate speech that keeps users engaged, or to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful for minors and keeps them hooked to using the platform.

In sum, social-media companies are multi-sided platforms that facilitate interactions between advertisers and users by curating user-generated content that drives attention to their platforms. To optimize the platform’s value, a social-media company must keep users engaged. This will often include privacy policies, content-moderation standards, and special protections for minors. On the other hand, incentives could become misaligned and lead to situations where social-media usage leads to negative externalities due to insufficient protection of privacy, too much hate speech or misinformation, or harms to minors.

III. Negative Social-Media Externalities and the Least-Cost-Avoider Principle

In situations where there are negative externalities from social-media usage, there may be a case for regulation. Any case for regulation must, however, recognize the presence of transaction costs, and consider how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[8] and elaborated on in the subsequent literature,[9] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his machinery is a potential cost to the doctor next door, who consequently can’t use his office to conduct certain testing. Simultaneously, the doctor moving his office next door is a potential cost to the confectioner’s ability to use his equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the confectioner to reduce the sound of his machines.[10] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[11]
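
Coase’s insight can be restated as a simple decision rule; the following is a minimal sketch in my own notation, with hypothetical numbers. Let H be the expected harm if nothing is done, and let c_d and c_c be the doctor’s and the confectioner’s respective costs of avoiding that harm. Setting aside other institutional costs:

\[
\text{the burden falls on} =
\begin{cases}
\text{the party with } \min\{c_d,\, c_c\}, & \text{if } \min\{c_d,\, c_c\} < H,\\[4pt]
\text{no one (tolerate the harm)}, & \text{if } \min\{c_d,\, c_c\} \ge H.
\end{cases}
\]

If, say, soundproofing costs the confectioner $150, relocating the doctor’s examination room costs $400, and the forgone testing is worth $1,000, total social product is maximized by placing the burden on the confectioner. With low transaction costs the parties would reach that allocation by bargaining regardless of where the law initially places the burden; with high transaction costs, the initial legal assignment is what determines whether it is reached.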

Here, social-media companies create incredible value for their users, but they also arguably impose negative externalities in the form of privacy harms, misinformation and hate speech, and harms particular to minors. In the absence of transaction costs, the parties could simply bargain away the harms associated with social-media usage. But since there are transaction costs, it matters whether the burden to avoid harms is placed on the users or the social-media companies. If the burden is wrongly placed, it may end up that the societal benefits of social media will be lost.

For instance, imposing liability on social-media companies risks collateral censorship, which occurs when platforms decide that liability risk is too large and opt to over-moderate or not host user-generated content, or to restrict access to such content either by charging higher prices or excluding those who could be harmed (like minors).[12] By wrongly placing the burden to avoid harms on social-media platforms, societal welfare will be reduced.

On the other hand, there may be situations where social-media companies are the least-cost avoiders. For instance, they may be best-placed to monitor and control harms associated with social-media usage when it is difficult or impossible to hold those using their platforms accountable for harms they cause.[13] If, for example, a social-media company allows anonymous or pseudonymous use, with no realistic possibility of tracking down users who cause harm, illegal conduct could go undeterred. In such cases, placing the burden on social-media users could lead to social media imposing uncompensated harms on society.

Thus, it is important to determine whether the social-media companies or their users are the least-cost avoiders. Placing the burden on the wrong party or parties would harm societal welfare, either by reducing the value of social media or by creating more uncompensated negative externalities.

IV. Applying the Lessons of Law & Economics to Social-Media Regulation

Below, I will examine the areas of privacy, children’s online safety, and content moderation, and consider both the social-media companies’ incentives and whether the platforms or their users are the least-cost avoiders.

A. Privacy

As discussed above, social-media companies are multi-sided platforms that provide content to attract attention from users, while selling information collected from those users for targeted advertising. This leads to the possibility that social-media companies will collect too much information in order to increase revenue from targeted advertising. In other words, as the argument goes, the interests of the paying side of the platform will outweigh the interests of social-media users, thereby imposing a negative externality on them.

Of course, this assumes that the collection and use of information for targeted advertisements is considered a negative externality by social-media users. While this may be true for some, for others, it may be something they care little about or even value, because targeted advertisements are more relevant to them. Moreover, many consumers appear to prefer free content with advertising to paying a subscription fee.[14]

Negative externalities are more likely to arise, however, when users don’t know what data is being collected or how it is being used. Moreover, it is a clear harm if social-media companies misrepresent what they are collecting and how they are using it. Thus, it is generally unobjectionable—at least, in theory—for the Federal Trade Commission or another enforcer to hold social-media companies accountable for their privacy policies.[15]

On the other hand, privacy regulation that requires specific disclosures or verifiable consent before collecting or using data would increase the cost of targeted advertising, thus reducing its value to advertisers and thereby further reducing the platform’s incentive to curate valuable content for users. For instance, in response to the FTC’s consent agreement with YouTube charging that it violated the Children’s Online Privacy Protection Act (COPPA), YouTube required channel owners producing children’s content to designate their channels as such, and it implemented automated processes designed to identify the same.[16] This reduced content creators’ ability to benefit from targeted advertising if their content was directed to children. The result was less content created for children, with poorer matching as well:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[17]

Alternatively, a social-media company could raise the price it charges to users, as it can no longer use advertising revenue to subsidize users’ access. This is, in fact, exactly what has happened in Europe, as Meta now offers an ad-free version of Facebook and Instagram for $14 a month.[18]

In other words, placing the burden on social-media companies to avoid the perceived harms from the collection and use of information for targeted advertising could lead to less free content available to consumers. This is a significant tradeoff, and not one that most social-media consumers appear willing to make voluntarily.

On the other hand, it appears that social-media users could avoid much of the harm from the collection and use of their data by using available tools, including those provided by social-media companies. For instance, most of the major social-media companies offer two-factor authentication, privacy-checkup tools, the ability to browse the service privately, to limit audience, and to download and delete data.[19] Social-media users could also use virtual private networks (VPNs) to protect their data privacy while online.[20] Finally, users could just not post private information or could limit interactions with businesses (through likes or clicks on ads) if they want to reduce the amount of information used for targeted advertising.

B. Children’s Online Safety

Some have argued that social-media companies impose negative externalities on minors by serving them addictive content and/or content that results in mental-health harms.[21] They argue that social-media companies benefit from these harms because they are able to then sell data from minors to advertisers.

While it is true that social-media companies want to attract users through engaging content and interfaces, and that they make money through targeted advertising, it is highly unlikely that they are making much money from minors themselves. Very few social-media users under 18 have considerable disposable income or access to payment-card options that would make them valuable to advertisers. Thus, regulations that raise the costs to social-media companies of serving minors, whether through a regulatory duty of care[22] or through age verification and verifiable parental consent,[23] could lead social-media companies to invest more in excluding minors than in creating vibrant and safe online spaces for them.

Federal courts considering age-verification laws have noted there are costs to companies, as well as users, in obtaining this information. In Free Speech Coalition Inc. v. Colmenero,[24] the U.S. District Court in Austin, Texas, considered a law that required age verification before viewing online pornography, and found that the costs of obtaining age verification were high, citing the complaint’s description of “several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications.”[25] But just as importantly, the transaction costs in this example also include the subjective costs borne by those who actually go through with verifying their age to access pornography. As the court noted, “the law interferes with the Adult Video Companies’ ability to conduct business, and risks deterring adults from visiting the websites.”[26] Similarly, in NetChoice v. Griffin,[27] the U.S. District Court for the Western District of Arkansas found that a challenged law’s age-verification requirements were “costly” and would put social-media companies covered by the law in the position of having to take drastic action: implement age verification, restrict access for Arkansans, or face the possibility of civil and criminal enforcement.[28]

On the other hand, social-media companies—responding to demand from minor users and their parents—have also exerted considerable effort to reduce harmful content being introduced to minors. For instance, they have invested in content-moderation policies and their enforcement, including through algorithms, automated tools, and human review, to remove, restrict, or add warnings to content inappropriate for minors.[29] On top of that, social-media companies offer tools to help minors and their parents avoid many of the harms associated with social-media usage.[30] There are also options available at the ISP, router, device, and browser level to protect minors while online. As the court put it in Griffin, “parents may rightly decide to regulate their children’s use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this.”[31]

In other words, parents and minors working together can use technological and practical means to make marginal decisions about social-media usage at a lower cost than a regulatory environment that would likely lead to social-media companies restricting use by minors altogether.[32]

C. Content Moderation

There have been warring allegations about social-media companies’ incentives when it comes to content moderation. Some claim that salacious misinformation and hate speech drive user engagement, making platforms more profitable for advertisers; others argue that social-media companies engage in too much “censorship” by removing users and speech in a viewpoint-discriminatory way.[33] The U.S. Supreme Court is currently reviewing laws from Florida and Texas that would force social-media companies to carry speech.[34]

Both views fail to take into account that social-media companies are largely just responding to the incentives they face as multi-sided platforms. Social-media companies are solving a Coasean speech problem, wherein some users don’t want to be subject to certain speech from other users. As explained above, social-media companies must balance these interests by setting and enforcing community rules for speech. This may include rules against misinformation and hate speech. On the other hand, social-media companies can’t go too far in restricting high-demand speech, or they will risk losing users. Thus, they must strike a delicate balance.

Laws that restrict the “editorial discretion” of social-media companies may fail First Amendment scrutiny,[35] but they also reduce the companies’ ability to give their customers a valuable product in light of user (and advertiser) demand. For instance, the changes to the moderation standards of X (formerly Twitter) in the year since Elon Musk’s purchase have led many users and advertisers to exit the platform, due to a perceived increase in hate speech and misinformation.[36]

Social-media companies need to be able to moderate as they see fit, free from government interference. Such interference includes not just the forced carriage of speech, but also government efforts to engage in censorship-by-proxy, as has been alleged in Murthy v. Missouri.[37] From the perspective of the First Amendment, government intervention that coerces or significantly encourages the removal of disfavored speech, even in the name of fighting misinformation, is just as harmful as the forced carriage of speech.[38] But more importantly for our purposes here, such government actions reduce platforms’ value by upsetting the balance that social-media companies strike with respect to their users’ speech interests.

Users can avoid being exposed to unwanted speech by averting their digital eyes from it—i.e., by refusing to interact with it and thereby training social-media companies’ algorithms to serve speech that they prefer. They can also take their business elsewhere by joining a social-media network with speech-moderation policies more to their liking. Voting with one’s digital feet (and eyes) is a much lower-cost alternative than either mandating the carriage of speech or censorship by government actors.

V. Conclusion

Social-media companies are multisided platforms that must curate compelling content while restricting harms to users in order to optimize their value to the advertisers that pay for access. This doesn’t mean they always get it right. But they are generally best-positioned to make those decisions, subject to the market process. Sometimes, there may be negative externalities that aren’t fully internalized. But as Coase taught us, that is only the beginning of the analysis. If social-media users can avoid harms at lower cost than social-media companies, then regulation should not place the burden on social-media companies. There are tradeoffs in social-media regulation, including the possibility that it will result in a less-valuable social-media experience for users.

[1] See e.g. Mary Clare Jalonick, Congress eyes new rules for tech, social media: What’s under consideration, Associated Press (May 8, 2023), https://www.wvtm13.com/article/whats-under-consideration-congress-eyes-new-rules-for-tech-social-media/43821405#;  Khara Boender, Jordan Rodell, & Alex Spyropoulos, The State of Affairs: What Happened in Tech Policy During 2023 State Legislative Sessions?, Project Disco (Jul. 25, 2023), https://www.project-disco.org/competition/the-state-of-affairs-statetech-policy-in-2023 (noting laws passed and proposed addressing consumer data privacy, content moderation, and children’s online safety at the state level).

[2] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[3] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[4] For instance, many nightclubs hold “Ladies Night” where ladies get in free in order to attract more men who pay for entrance.

[5] Transaction costs are the additional costs borne in the process of buying or selling, separate and apart from the price of the good or service itself — i.e. the costs of all actions involved in an economic transaction. Where transaction costs are present and sufficiently large, they may prevent otherwise beneficial agreements from being concluded.

[6] See David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L. J. 1201 (2012); Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 HARV. L. REV. 1598 (2018).

[7] An externality is a side effect of an activity that is not reflected in the cost of that activity — basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action.

[8] See R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[9] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[10] See Coase, supra note 8, at 8-10.

[11] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[12] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L.J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[13] See Geoffrey A. Manne, Kristian Stout & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[14] See, e.g., Matt Kaplan, What Do U.S. consumers Think About Mobile Advertising?, InMobi (Dec. 15, 2021), https://www.inmobi.com/blog/what-us-consumers-think-about-mobile-advertising (55% of consumers agree or strongly agree that they prefer mobile apps with ads rather than paying to download apps); John Glenday, 65% of US TV viewers will tolerate ads for free content, according to report, The Drum (Apr. 22, 2022), https://www.thedrum.com/news/2022/04/22/65-us-tv-viewers-will-tolerate-ads-free-content-according-report (noting that a report from TiVO found 65% of consumers prefer free TV with ads to paying without ads). Consumers often prefer lower subscription fees with ads to higher subscription fees without ads as well. See e.g. Toni Fitzgerald, Netflix Gets it Right: Study Confirms People Prefer Paying Less With Ads, Forbes (Apr. 25, 2023), https://www.forbes.com/sites/tonifitzgerald/2023/04/25/netflix-gets-it-right-study-confirms-more-people-prefer-paying-less-with-ads/.

[15] See 15 U.S.C. § 45.

[16] See Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, at 6-7, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[17] Id. at 1.

[18] See Sam Schechner, Meta Plans to Charge $14 a Month for Ad-Free Instagram or Facebook, Wall Street J. (Oct. 3, 2023), https://www.wsj.com/tech/meta-floats-charging-14-a-month-for-ad-free-instagram-or-facebook-5dbaf4d5.

[19] See Christopher Lin, Tools to Protect Your Privacy on Social Media, NetChoice (Nov. 16, 2023), https://netchoice.org/tools-to-protect-your-privacy-on-social-media/.

[20] See, e.g., Chris Stobing, The Best VPN Services for 2024, PC Mag (Jan. 4, 2024), https://www.pcmag.com/picks/the-best-vpn-services.

[21] See, e.g., Jonathan Stempel, Diane Bartz & Nate Raymond, Meta’s Instagram linked to depression, anxiety, insomnia in kids – US state’s lawsuit, Reuters (Oct. 25, 2023), https://www.reuters.com/legal/dozens-us-states-sue-meta-platforms-harming-mental-health-young-people-2023-10-24/ (describing complaint from 33 states alleging Meta “knowingly induced young children and teenagers into addictive and compulsive social media use”).

[22] See e.g. California Age-Appropriate Design Code Act, AB 2273 (2022), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB2273AADC; Kids Online Safety Act, S. 1409, 118th Cong. (2023), as amended and posted by the Senate Committee on Commerce, Science, and Transportation on July 27, 2023, available at  https://www.congress.gov/bill/118th-congress/senate-bill/1409 (last accessed Dec. 19, 2023).

[23] See, e.g., Arkansas Act 689 of 2023, the “Social Media Safety Act.”

[24] Free Speech Coal. Inc. v. Colmenero, No. 1:23-CV-917-DAE, 2023 U.S. Dist. LEXIS 154065 (W.D. Tex., Aug. 31, 2023), available at https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172751222/gov.uscourts.txwd.1172751222.36.0.pdf.

[25] Id. at 10.

[26] Id.

[27] NetChoice, LLC v. Griffin, Case No. 5:23-CV-05105 (W.D. Ark., Aug. 31, 2023), available at https://netchoice.org/wpcontent/uploads/2023/08/GRIFFIN-NETCHOICE-GRANTED.pdf.

[28] See id. at 23.

[29] See id. at 18-19.

[30] See id. at 19-20.

[31] Id. at 15.

[32] For more, see Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes, at 23 (ICLE Issue Brief, Nov. 9, 2023), https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[33] For an example of a hearing where Congressional Democrats argue the former and Congressional Republicans argue the latter, see Preserving Free Speech and Reining in Big Tech Censorship, Libr. of Cong. (Mar. 28, 2023), https://www.congress.gov/event/118th-congress/house-event/115561.

[34] See Moody v. NetChoice, No. 22-555 (challenging Florida’s SB 7072); NetChoice v. Paxton, No. 22-277 (challenging Texas’s HB 20).

[35] See, e.g., Brief of International Center for Law & Economics as Amicus Curiae in Favor of Petitioners in 22-555 and Respondents in 22-277, Moody v. NetChoice, NetChoice v. Paxton, In the Supreme Court of the United States (Dec. 7, 2023), available at https://www.supremecourt.gov/DocketPDF/22/22-277/292986/20231211144416746_Nos.%2022-277%20and%2022-555_Brief_corrected.pdf.

[36] See e.g. Ryan Mac & Tiffany Hsu, Twitter’s U.S. Ad Sales Plunge 59% as Woes Continue, New York Times (Jun. 5, 2023), https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html (“Six ad agency executives who have worked with Twitter said their clients continued to limit spending on the platform. They cited confusion over Mr. Musk’s changes to the service, inconsistent support from Twitter and concerns about the persistent presence of misleading and toxic content on the platform.”); Kate Conger, Tiffany Hsu & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, New York Times (Nov. 1, 2022), https://www.nytimes.com/2022/11/01/technology/elon-musk-twitter-advertisers.html (“At the same time, advertisers — which provide about 90 percent of Twitter’s revenue — are increasingly grappling with Mr. Musk’s ownership of the platform. The billionaire, who is meeting advertising executives in New York this week, has spooked some advertisers because he has said he would loosen Twitter’s content rules, which could lead to a surge in misinformation and other toxic content.”).

[37] See Murthy v. Missouri, No. 23A-243; see also Missouri v. Biden, No. 23-30445, slip op. (5th Cir. Sept. 8, 2023).

[38] See Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social Media Platforms, (ICLE White Paper Sept. 22, 2023), forthcoming 59 Gonz. L. Rev. (2023), available at https://laweconcenter.org/resources/knowledge-and-decisions-in-the-information-age-the-law-economics-of-regulating-misinformation-on-social-media-platforms/.

 


Continue reading
Innovation & the New Economy